
Source: Radio New Zealand

Grok has allowed users to create sexualised images of people without their knowledge or consent. Jonathan Raa / NurPhoto via AFP

New Zealand lags behind other countries in clamping down on fake images of naked women and children, an organisation working to prevent child sexual exploitation says.

The British government is considering blocking Grok – X’s AI chatbot – which has drawn international condemnation for allowing users to create sexualised images of people without their knowledge or consent.

Ecpat national director Eleanor Parkes said New Zealand first needed enforceable legislation for AI that prevented the technology being weaponised.

“Bans and restrictions are tools, and they’re not the starting point, especially as the platform changes. Whether the tool is Grok today or another image generator tomorrow, the principle is the same: companies must prevent their products being used to create and spread child sexual abuse material,” she said.

“Just reactively banning an individual platform, certainly it’s a tool that can be used, but in New Zealand we need to first step back and have that bigger conversation around what privacy means in the AI era here.”

Parkes said banning chatbots was one measure, but there were many AI tools used to generate harmful nude images.

“Certainly in Aotearoa, we’ve seen a huge surge in AI-generated fake nudes and nudified images, and that shows how quickly this technology is being used to sexualise people’s photos, whether it’s through Grok, which is built into X, formerly known as Twitter, or whether it’s on ChatGPT or another channel.”

She said it was not a problem linked to just one platform or channel.

“New Zealand needs an AI-fit safety and privacy approach that protects young children’s images and their likeness as well so that it covers deepfakes. We’ve seen we can’t rely on goodwill here. We need enforceable standards.”

Education Minister Erica Stanford has promised regulatory change to address social media harm, in response to calls for a minimum age of 16 to access social media.

Last year she was tasked with exploring options for legislation and implementation of possible restrictions, and is expected to announce in the “near future” exactly what that bill would look like.

“We’re looking at a really clever, world-leading approach at how we protect our kids. And we are going to need a regulator. We are going to need a Child Protection Act. And we are going to need some form of a ban,” she said.

Netsafe chief executive Brent Carey said New Zealand’s laws that governed digital media needed updating.

“The creation and distribution of sexual deepfake imagery can cause serious harm. New Zealand is already responding in sensible ways with the Harmful Digital Communications Act,” he said.

“The answer lies in modernising our laws and expectations so they work for AI-enabled harm. Blaming users alone for content generated by a company’s own AI tool is not an adequate response.”

Carey said the Act should be updated to explicitly cover AI-generated harm.

“If you build it, you’re responsible for how it can be misused, especially when it is sexualised and young people are at risk. That’s why initiatives like Laura McClure’s deepfake bill are important – they recognise that image-based abuse and non-consensual synthetic content need clearer, faster pathways for accountability.”


– Published by EveningReport.nz and AsiaPacificReport.nz, see: MIL OSI in partnership with Radio New Zealand
