The Australian media made international headlines this week, with Channel Nine front and centre of the debate about AI-generated images.

The story goes like this: Channel Nine sourced an image of Animal Justice Party MP Georgie Purcell and digitally edited it for its own purposes. In the editing process, which Channel Nine says was done using artificial intelligence (AI), Ms Purcell was given larger breasts and a crop top.

Ms Purcell, who says she has previously been a victim of image-based abuse, said this incident felt like a violation.

Image-based abuse

Image-based abuse laws were strengthened in Australia in 2022, imposing harsh penalties of up to 3 years in prison, a fine of up to $11,000, or both.

However, the laws specifically relate to images that have been doctored to the point where they are sexually explicit – meaning they depict nudity, child sexualisation or pornography.

Having your breasts enlarged and your top shortened to reveal a little of your midriff would not be sufficient for prosecution under these laws.

And there is no law in Australia that prevents someone from using a photo or video of you without permission, except in specific circumstances: for example, if you are shown endorsing a business or product you have not agreed to endorse, or if the image is sexualised in nature or depicts you engaged in a private act. Otherwise, there is no general right to privacy.

Other areas of law, relating to data protection, copyright, anti-discrimination and cyberbullying, can offer some protection when the use of generative AI goes wrong, but there is no single piece of AI-specific legislation in Australia.

AI without oversight

Clearly, with AI being increasingly used, particularly by news desks with an apparent lack of human editorial oversight at the helm, lawmakers need to consider the various impacts of AI and how protections can be built into existing laws or drafted into new ones.

Only last year, the Microsoft Start website republished a Guardian news story about the death of Lilie James, the water polo coach whose body was found in a bathroom at the elite Sydney school St Andrew's Cathedral School.

Alongside the article, an AI-generated poll asked readers to speculate on the cause of Lilie's death, offering three choices: murder, accident or suicide.

The poll was eventually removed, but, as The Guardian rightly pointed out in the aftermath while working to protect its reputation for journalistic integrity, readers could have perceived that its journalists had created the poll, when in fact no one at The Guardian had anything to do with it.

Here in Australia, incidents of AI slip-ups have been relatively few and minor, but in the US they are increasing at a considerable rate.

Just recently, an AI-generated voice of President Biden was used in robocalls to households, spreading misinformation about the New Hampshire presidential primary election. While investigations are underway, there is also considerable pressure on regulators to stop fakes from interfering with the electoral process.

But how?

Combatting fake news

This highlights the other problem we face without appropriate legal frameworks in place: AI can very easily be made the scapegoat.

At a Federal Government level in Australia, expert panels have looked into the use of AI in the education sector and in business. Certainly these are positive steps, but they amount to a piecemeal approach.

Australia's misinformation laws

Australia's draft misinformation laws are currently quite heavy-handed. To some extent, that makes sense: undisclosed AI makes it much harder to separate fact from fiction, creates suspicion, and has the potential to damage social institutions, government structures and trustworthy news sources as we know them.

But these laws in their current form also have the potential to negatively affect freedom of speech, and a much more balanced approach is needed.

Key concerns with the current draft are that the definitions of misinformation and disinformation are not clear enough, that the test of causing ‘reasonable harm' is too general and needs a much higher threshold of proof and, perhaps most significantly, that the draft bill defines any content authorised by the government as ‘excluded' from the law.

This is not just an issue that Australia is grappling with. Last December the member countries of the EU came to a provisional agreement regarding the AI Act.

It intends to employ a sliding-scale framework that will assess the risk levels of various AI applications, with stricter obligations applying to higher-risk uses.

The offence of producing child abuse material in New South Wales

It is important to be aware that producing child abuse material is an offence under section 91H of the Crimes Act 1900 (NSW), which carries a maximum penalty of 10 years in prison.

The definition of production is broad, and includes filming, photographing, printing or otherwise making, altering or manipulating an image to produce an image of child abuse material.

So, it's important to know that taking an image of a child's face and using artificial intelligence to generate a sexualised image from it – for instance, incorporating a naked body – can amount to a crime under the law.

But what is child abuse material?

‘Child abuse material' is material that depicts or describes, in a way that reasonable persons would regard as offensive:

  1. The private parts of a person who is, or appears to be or is implied to be, a child, or
  2. A person who is, or appears to be or is implied to be, a child, and who is depicted as a victim of torture, cruelty or physical abuse, engaged in or apparently engaged in a sexual pose or sexual activity, or in the presence of another person who is engaged in or apparently engaged in a sexual pose or sexual activity.

What is considered when determining whether something is child abuse material?

In determining whether material is offensive to a reasonable person, the following matters must be taken into account:

  • The standards of morality, decency and propriety accepted by reasonable adults,
  • The literary, artistic or educational merit (if any) of the material,
  • The journalistic merit (if any) of the material, and/or
  • The general character of the material.

What is a private part?

‘Private parts' are defined by the law as the genital or anal area, whether bare or covered by underwear, or the breasts of a female person, or of a transgender or intersex person identifying as female, whether or not the breasts are developed.

Who is considered a child?

For the purposes of the offence, a ‘child' is a person under the age of 16 years.

Defences and exceptions to the charge

Legal defences to a charge of producing child abuse material include where:

  1. You did not know, and could not reasonably have known, that you possessed, disseminated or produced it,
  2. Your conduct benefited the public through law enforcement or administration, or the administration of justice, and did not extend beyond it,
  3. The material received a classification for publication,
  4. The use of the material was approved by the Attorney-General for research, and
  5. The material depicts you and would not be child abuse material in the absence of your image.

An additional defence to possessing child abuse material is where you received it unsolicited and took reasonable steps to get rid of it upon becoming aware of its nature.

An exception to the offence is where:

  1. The possession of the material occurred when you were under 18, and
  2. A reasonable person would consider the possession acceptable considering:
  • The nature and content of the material,
  • The circumstances whereby you came to possess it,
  • The age, vulnerability and circumstances of the child depicted,
  • Your age, vulnerability and circumstances, and
  • The relationship between you and the child depicted.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.