
Source: The Conversation (Au and NZ) – By Dan Jerker B. Svantesson, Co-Director Centre for Commercial Law, Bond University

The terrorist attack in Christchurch was a horrific assault on society. We must consider every measure available to prevent anything like it from ever happening again, anywhere.

In Australia, Prime Minister Scott Morrison now wants to introduce new criminal laws for social media companies that fail to quickly remove footage like that broadcast by the gunman in the New Zealand massacre. The alleged gunman live-streamed the attack on Facebook, and the footage was republished across many platforms in the days that followed.


Read more: Morrison flags new laws to stop social media platforms being ‘weaponised’


This is an indication that Australian leaders may now be prepared to move beyond just blaming technology for its role in the Christchurch massacre.

Laws are typically grounded in social values and social duties. But penalties can only flow from violations of law – not violations of social duties – and it is governments that make law.

How is the internet regulated?

Internet platforms such as Facebook and Google are already subject to a complex web of laws stemming from around the globe.

A project at Stanford University has started mapping out this web of regulation.

The site points to several Australian laws that apply to internet platforms. Of these, the Broadcasting Services Act 1992 (Cth) is the most relevant, but its provisions giving certain protections to internet platforms that handle content posted by users remain largely untested.

Prime Minister Scott Morrison has indicated he aims to create laws that:

  • make it a criminal offence to fail to remove offending footage as soon as possible after it is reported or otherwise becomes known to the company

  • allow the government to declare that footage of an incident filmed by a perpetrator and hosted on a site is “abhorrent violent material”. It would be a crime for a social media provider not to remove the material quickly after receiving a notice to do so, with penalties escalating the longer the material remained on the platform.

These laws would not prevent violent live-streaming in the first place, but if drafted carefully they may help control its spread and impact.


Read more: Anxieties over livestreams can help us design better Facebook and YouTube content moderation


This is an important point, as there is a strong argument that banning live-streaming on the major platforms will not stop terrorists from live-streaming their acts via other outlets.

Along with Home Affairs Minister Peter Dutton, Attorney-General Christian Porter and Communications Minister Mitch Fifield, today the prime minister will meet with representatives of Google, Facebook and Twitter and telcos including Telstra, Optus and Vodafone to discuss the responsibilities of social media companies when violence is streamed online.


Read more: Four ways social media companies and security agencies can tackle terrorism


Global examples for improving regulation

Recent activity around the world shows increasing attention paid to regulating online hate and terrorist content.

In October 2018, the US Department of Justice launched a new website to improve the identification and reporting of hate crimes.

And in the European Union, work has advanced on stopping terrorists from using the internet to radicalise, recruit and incite violence. The EU proposal includes a framework for strengthened cooperation between hosting service providers, member states and Europol (the EU’s law enforcement agency). Within that framework, service providers must designate points of contact reachable 24/7 to facilitate the follow-up to removal orders and referrals.

Using the powers of the Office of Film and Literature Classification, New Zealand has banned possession and distribution of the “manifesto” said to have been written by the suspect in the Christchurch mosque attack. (This accompanies other measures, such as the stricter gun controls New Zealand announced recently.)

Australia can draw on these experiences, adopting what works and improving what does not.

International cooperation is key

Morrison has placed the misuse of social media platforms to promote violence on the G20 agenda. This is a good step. The major tech companies are established overseas, so this is an issue that can only be addressed through international cooperation.

However, the G20 is only one forum of many. Ultimately, what we need are multi-stakeholder discussions involving governments, the tech industry, civil society and academia.

A relevant example in this context is the work of the Paris-based Internet & Jurisdiction Policy Network, and more specifically its work on cross-border content take-down and blocking. That work is well advanced and includes concrete suggestions for managing globally available content in light of the diversity of local laws and norms that apply on the internet.

ref. Why new laws are vital to help us control violence and extremism online – http://theconversation.com/why-new-laws-are-vital-to-help-us-control-violence-and-extremism-online-114069
