From Civil Penalties to Prison Time: Australia’s New Deepfake Criminal Laws

The rapid advancement of generative artificial intelligence has led to a troubling increase in deepfake image-based abuse (Deepfake I-BA), a form of gendered cyber violence in which realistic, non-consensual sexual content is created using AI technology. Victims, most often women, have their likenesses digitally manipulated into explicit content without their knowledge or consent.

Until recently, Australia lacked a criminal law framework to directly address this harm. That changed in August 2024, when Parliament passed the Criminal Code Amendment (Deepfake Sexual Material) Act 2024 (Cth), which amended the Criminal Code Act 1995 (Cth) to criminalise the non-consensual sharing of deepfake sexual material, with aggravated offences for those who also create it. This development, while a significant step forward, highlights the complex interplay between technological evolution and the limitations of existing regulatory frameworks.

Prior to the reform, Australia’s regulatory response relied primarily on civil penalties under the Online Safety Act 2021 (Cth), enforced by the eSafety Commissioner. While the Commissioner could compel internet service providers (ISPs) and content hosts to remove harmful material, enforcement was slow, constrained by jurisdictional barriers, and lacked the punitive deterrence necessary to prevent repeat offending. The maximum civil penalty of $156,500 was often insufficient in cases of widespread or repeated dissemination.

The criminalisation of Deepfake I-BA represents a landmark legal development. Under the new provisions, individuals who use a carriage service to transmit deepfake sexual material without consent face up to six years’ imprisonment. Aggravating circumstances, such as prior removal notices from the eSafety Commissioner or direct involvement in the creation of the content, can increase the maximum penalty to seven years. Importantly, the legislation adopts a consent-based model, aligning with broader reforms in image-based abuse law.

This reform followed high-profile incidents such as the Bacchus Marsh Grammar scandal, in which a school student allegedly distributed AI-generated explicit images of over 50 female students. This case shocked the public and reinforced longstanding academic concerns that the absence of direct criminal penalties would embolden perpetrators. The eSafety Commissioner reported a 550% increase in reports of Deepfake I-BA since 2019, with 99% of victims being women.

Despite this progress, critical gaps remain in Australia’s regulatory regime—particularly around attribution and intermediary liability. Identifying the originator of deepfake content is notoriously difficult, especially when perpetrators use offshore servers, encrypted platforms, or anonymous accounts. In the recent case of eSafety Commissioner v Rotondo [2023] FCA 1296, the perpetrator simply refused to comply with takedown orders even after civil penalties were imposed.

Moreover, Australia currently does not limit the liability of internet intermediaries such as platforms and ISPs. This contrasts with the United States, where Section 230 of the Communications Decency Act provides broad immunity to online platforms in respect of third-party content. While Australia’s more expansive liability model can incentivise platform moderation, it also creates legal uncertainty, particularly as platforms argue they should be protected where they operate content complaints systems and take reasonable preventative steps.

The unresolved question is whether Australia should introduce clearer statutory obligations for intermediaries. These might include mandatory monitoring, reporting thresholds, or proactive takedown systems for AI-generated abuse. Alternatively, the government could adopt a tiered model under which liability is limited where intermediaries demonstrate genuine and ongoing efforts to prevent harm.

There are useful analogies in Australian law. After the 2019 Christchurch terrorist attack, the government passed the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 (Cth), imposing criminal penalties on platforms for failing to remove extremist content. This led to unprecedented proactive moderation by platforms, including the temporary blocking of entire websites like LiveLeak. Scholars such as Gacutan and Selvadurai argue that non-limited liability frameworks encourage platforms to take real responsibility for harmful content.

However, any expansion of intermediary liability must be balanced against the risks of overregulation, platform censorship, and the stifling of internet freedoms. Critics caution against effectively requiring platforms to police the entire internet, and the technical difficulty of filtering all user-generated content remains a significant concern.

Ultimately, the criminalisation of Deepfake I-BA is an important turning point. It sends a clear message that technology-facilitated abuse is no longer beyond the reach of the law. But to ensure meaningful protection for victims, further reform is essential. This includes:

– Strengthening intermediary liability frameworks to encourage proactive moderation
– Enhancing cross-border enforcement mechanisms and attribution tools
– Increasing funding and resourcing for the eSafety Commissioner
– Improving victim support pathways and education campaigns

Deepfake I-BA is a symptom of a wider issue: the gendered risks of emerging technologies. Without a coordinated legal, technological, and social response, the law will continue to play catch-up. Australia’s reforms are commendable, but more must be done to ensure that victims are not only protected, but empowered.

Bilbie Faraday Harrison offers clear, practical advice across a broad range of legal issues. If you need assistance or would like to discuss your situation with our team, get in touch; we’re here to help.

The information provided on this website is intended for general informational purposes only. It does not constitute legal advice and should not be relied upon as a substitute for professional legal consultation. We do not accept any liability for loss or damage arising from reliance on the material contained on this site.
