
Source: The Conversation (Au and NZ) – By Stephanie Price, Research Fellow | Sexual Violence Research and Prevention Unit, University of the Sunshine Coast

Sander van der Werf/Shutterstock

The scale of online child sexual abuse is immense: estimates suggest there are more than 300 million child victims of online sexual abuse globally.

But what is the scale of online child sexual abuse in Australia?

Answering this question with certainty is difficult because so many of these crimes go unreported and undetected.

We can estimate, though.

For example, in 2022–23 the Australian Centre to Counter Child Exploitation's Child Protection Triage Unit received 40,232 reports of online child sexual abuse material, while the Australian Federal Police charged 186 offenders with online child sexual abuse crimes. In the past financial year, the number of reports to the triage unit rose to 58,503.

The United States-based National Center for Missing and Exploited Children also received 74,919 reports of child sexual abuse material from Australia.

It is difficult to know whether these numbers reflect an increase in perpetration, or improvements in reporting or detection, or a combination. But they do highlight a significant problem that requires immediate action.

Prevention, prevention, prevention

There are many ways to address online child sexual abuse perpetration, but broadly they fall into three levels: primary, secondary and tertiary prevention.

Traditionally, the most common approach to address online child sexual abuse is tertiary prevention, which means detecting and responding to offences that have already occurred.

This can involve online “stings” or other police operations.

Then there are primary prevention initiatives, which aim to reduce the potential for risk and prevent the offence from occurring in the first place.

Examples include the Australian Federal Police's ThinkUKnow program and the Daniel Morcombe Foundation's Keeping Kids Safe, which provide education and resources to encourage healthy and acceptable online (and offline) behaviours.

Much less is known about secondary prevention, which looks to intervene early by targeting people who might be most at risk of offending, or on the cusp of it.

This approach is important because we want to stop the harm before it happens – and given the scope of the problem it just isn’t practical or sufficient to rely solely on detection and arrest.

How technology can help

So, this was the focus of our study – what digital secondary prevention interventions have been implemented to prevent online child sexual abuse?

By “digital intervention” we mean “any electronic or online technology that interferes with a course of action that would otherwise result in the perpetration of sexual abuse.”

After reviewing more than 1,100 research articles, book chapters and reports, we found just six relevant sources that described digital interventions which had been put into action worldwide.

Of these six examples, three featured pop-up warning messages, one featured a chatbot, one featured both warning messages and a chatbot, and one featured an online media campaign that included warning messages.

In most of these examples, the warning message is a pop-up triggered by a search for child sexual abuse material on a pornography website.

Some of these messages included information about the harms to the viewer or the harms to children and young people, while others included warnings that the content is illegal or that police may be able to detect the search.

Some messages also included links to support services so users could seek help for themselves.
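As a rough illustration of how such a trigger might work (this is a minimal sketch, not drawn from any deployment described in the study; the flagged-term list, function names, wording and helpline URL are all hypothetical placeholders), a platform could screen incoming search queries and show a warning instead of results:

```python
# Minimal sketch of a search-triggered warning message.
# The flagged terms, wording and helpline URL below are placeholders,
# not taken from any real deployment.

FLAGGED_TERMS = {"example_flagged_term_1", "example_flagged_term_2"}  # hypothetical

WARNING_TEXT = (
    "Warning: searching for child sexual abuse material is illegal and causes "
    "real harm to children. This activity may be detectable by police.\n"
    "Confidential help is available: https://example.org/get-help"  # placeholder URL
)

def run_normal_search(query: str) -> str:
    # Stand-in for the platform's real search pipeline.
    return f"Results for: {query}"

def handle_search(query: str) -> str:
    """Return a warning instead of results when a query matches flagged terms."""
    words = set(query.lower().split())
    if words & FLAGGED_TERMS:
        return WARNING_TEXT          # interrupt the search with a deterrence message
    return run_normal_search(query)  # otherwise proceed as usual

if __name__ == "__main__":
    print(handle_search("holiday photos"))  # an ordinary query passes through
```

Real systems are far more sophisticated (matching known terms and patterns at scale), but the basic logic is the same: intercept the attempt and replace the expected content with deterrence messaging and a path to help.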

Chatbots, on the other hand, are pop-up interactive chat windows that use artificial intelligence to simulate conversation.

In this context, a chatbot can provide warning messages and links to support services while engaging users in a “conversation” to discourage offending behaviour and/or encourage help-seeking behaviours.
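To make that concrete, a heavily simplified, rule-based exchange might look like the sketch below. The AI-driven systems described in the literature are far more capable; the rules, wording and support link here are illustrative assumptions only.

```python
# Simplified, rule-based sketch of a deterrence chatbot exchange.
# Real chatbots use AI to hold a conversation; the replies and support URL
# here are illustrative placeholders, not any deployed system's responses.

SUPPORT_LINK = "https://example.org/get-help"  # placeholder

def chatbot_reply(user_message: str) -> str:
    """Respond with deterrence messaging and signposting to support services."""
    text = user_message.lower()
    if any(word in text for word in ("help", "stop", "support")):
        # Encourage help-seeking behaviour when the user signals interest.
        return f"Confidential, anonymous support is available here: {SUPPORT_LINK}"
    # Otherwise, lead with a warning and still point to support.
    return (
        "The material you appear to be looking for is illegal and harms children. "
        f"If you are worried about your own behaviour, free help is available: {SUPPORT_LINK}"
    )

if __name__ == "__main__":
    print(chatbot_reply("example flagged query"))
    print(chatbot_reply("I want help to stop"))
```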

What are the takeaways?

Overall, our study concluded that warning messages and chatbots can be effective at stopping people from continuing to search for child sexual abuse material, and that messages which increase the perceived risk of detection can be a strong deterrent.

We also suggest these messages could more often include information about available support services.

But we need more data. We know there are other examples, like warning messages through Meta and Google, which have not yet been studied and could strengthen our findings.

So, while we’re onto something really promising, these types of responses are still relatively new and technology is ever-evolving. We do, though, expect to see digital interventions like these become increasingly common and widespread, helping keep children and young people safe from harm.

No single approach can solve this problem, but a combination of these approaches could make a world of difference in the fight against online child sexual abuse.

The author would like to acknowledge and thank research collaborators Nadine McKillop, Susan Rayment-McHugh and Lara Christensen from the Sexual Violence Research and Prevention Unit of the University of the Sunshine Coast, and Joel Scanlan and Jeremy Prichard of the University of Tasmania.

Stephanie Price does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How pop-up warnings and chatbots can be used to disrupt online child sex abusers – https://theconversation.com/how-pop-up-warnings-and-chatbots-can-be-used-to-disrupt-online-child-sex-abusers-244507
