Source: The Conversation (Au and NZ) – By Amanda Third, Co-Director, Young and Resilient Research Centre/Professorial Research Fellow, Institute for Culture and Society, Western Sydney University
Banning children under 16 from social media sounds like a seductive idea. For overwhelmed parents navigating their kids’ lives in a digital age, this move from the Australian government may seem like welcome relief.
But evidence shows a ban is highly unlikely to ease the youth mental health crisis in this country. Indeed, bans may make our children even more vulnerable online.
Children and young people go online primarily to socialise with their peers. Online spaces are one of the few avenues our overscheduled children have to interact freely with each other, which is crucial for their wellbeing.
A social media ban will close down this avenue and force children into lower-quality online environments. Children already say adults don’t understand what they do online and are underequipped to support them.
A blanket ban affirms parents “don’t get it”. Kids will find ways to get around the ban. And if their interactions turn sour on social media, the fact they were not supposed to be there will make it more difficult to reach out to adults for help.

Crucially, demands for blanket bans – challenging to implement – also force tech platforms into “compliance mode”. They divert company resources away from designing better online environments for children and into litigation.
What should we do instead of a ban?
Our children’s online safety is a collective responsibility. There are constructive steps we can take, but they need more cooperation between governments, industry, the community sector, parents, caregivers, educators, researchers, and children and young people themselves.
All children learn by taking risks and making mistakes. The focus needs to be on eliminating online harms, and equipping children and their caregivers to deal confidently with the digital world.
Tighter regulation is part of the solution. But making the internet a better place for children – not just banning them – is the very best protection we can provide.
So, what would that look like?
One way is to implement safety-by-design principles. Popularised internationally by the Australian eSafety Commissioner, safety by design is what it sounds like – baking safety features into the DNA of technological products and platforms.
Here, we should take the lead from children themselves. They are urging platforms and governments to do several things:
- give minors privacy by default
- provide standardised, easily accessible and well-explained reporting processes across diverse platforms
- use AI to detect bad actors attempting to interact with children.
Children also want to know what data is collected from them, how it is used, by whom, and for what purposes.
They’re also calling for safety-by-design features that eliminate sexual, violent and other age-inappropriate content from their feeds.
All of these steps would help to strengthen the things they already do to take care of themselves and others online – like being cautious when interacting with people they don’t know, and not sharing personal information or images online.
Not just safe, but optimal
Safety by design is not the whole solution. Building on the efforts to develop industry codes, industry and government should come together to develop a wider range of standards that deliver not just safe, but optimal digital environments for children.
How? High-quality, child-centred evidence can help major platforms develop industry-wide standards that define what kinds of content are appropriate for children of different ages.
We also need targeted education for children that builds their digital capabilities and prepares them to deal with and grow through their engagement online.
For example, rather than education that focuses on extreme harms, children are calling for online safety education in schools and elsewhere that supports them to manage the low-level, everyday risks of harm they encounter online: disagreements with friends, inappropriate content or feeling excluded.
Heed the evidence
Some authoritative, evidence-based guidance already exists. It tells us how to ensure children can mitigate potential harms and maximise the benefits of the digital environment.
Where the evidence doesn’t yet exist, we need to invest in child-centred research. It’s the best method for gaining nuanced accounts of children’s digital practices, and can guide a coherent and strategic long-term approach to policy and practice.
Drawing on lessons from the COVID pandemic, we also need to better align evidence with decision-making processes. This means accelerating high-quality, robust research, and finding ways for research to better anticipate emerging challenges and generate evidence around them. This way, governments can weigh up the benefits and drawbacks of particular policy actions.
Technology is not beyond our control. Rather, we need to decide, together, what role we want technology to play in childhood.
We need to move beyond a protectionist focus and work with children themselves to build the very best digital environments we can imagine. Nothing short of the future is at stake.
Amanda Third currently receives funding from Tech Coalition Safe Online Research Fund; UNICEF Australia; ChildFund, World Vision, Save the Children. She is a member of the Australian eSafety Commissioner’s Online Safety Advisory; Google Kids and Families Global Advisory Board; and Snapchat’s Online Safety Advisory.
– ref. Instead of banning kids from online spaces, here’s what we should offer them instead – https://theconversation.com/instead-of-banning-kids-from-online-spaces-heres-what-we-should-offer-them-instead-238798