Prime Minister Jacinda Ardern is winning praise for her campaign to clean up the internet, and in particular for her announcement of the “Christchurch Call” Summit to be held with French President Emmanuel Macron in Paris next month. If the two leaders can come up with meaningful and effective ways to deny terrorists and violent extremists the use of the internet, it will be a major accomplishment.
Regulating the internet is notoriously difficult, however. It might be one of the big issues of our time, but no one seems to have the answers for how to do it in a way that will be both effective and satisfactory. There’s a good chance the whole episode will amount to yet another talkfest of platitudes and politicking. This is certainly the view of Newstalk ZB’s Barry Soper, who forecasts an outcome of “full, frank and meaningless words” – see: Irony to New Zealand and France’s terrorism summit next month.
Not only this, Soper suggests that the motivations for the summit are opportunistic: “The idea no doubt came from the French President Emmanuel Macron who’s been haemorrhaging in the opinion polls at home… The international voice of reason and compassion Jacinda Ardern would have immediately come to mind and the pledge she’s now calling the Christchurch Call was born.”
The Herald’s political editor, Audrey Young, takes umbrage at such scepticism, declaring this type of view out of place: “They are the sort of critic who would never start anything unless success were guaranteed. The suggestion that Ardern do nothing after the murders of 50 people in New Zealand were live-streamed and shared on social media is to deny human nature and New Zealand’s own instincts” – see: Jacinda Ardern is knee-deep in planning joint initiative with France.
Young predicts real change will emerge from a difficult area of reform: “It won’t eliminate the evils that lurk within social media. But it won’t be nothing either.” She sees it as a positive sign that Ardern and Macron are being so inclusive in their approach: “Ardern’s natural instincts are to collaborate as broadly as possible… That factor alone makes it important to get co-operation from social media themselves, rather than using heavy-handed regulation or attempting to bully the corporates into participation.”
However, as with other international agreements, the more parties you bring to the table, the greater the likelihood of a watered-down outcome. This is the point made in Tom Pullar-Strecker’s article, The devil will be in the detail of the ‘Christchurch Call’, which reports Colin Gavaghan, director of the Centre for Law and Policy in Emerging Technologies at Otago University, cautioning against going too broad: “The risk, he argues, is you can end up with texts that are pitched at such a level that ‘no-one could disagree with them’ but which don’t tend to mean anything in practice.”

Pullar-Strecker also emphasises the uniqueness of this summit: normally the outcomes of such gatherings are relatively pre-determined, with a text negotiated in advance for participants to sign up to, but that won’t necessarily happen in this instance.
The success or otherwise of the initiative will be determined, it seems, by how ambitious the internet regulation campaign ends up being. Ardern herself is keen on a narrow focus for the regulations, dealing specifically with the online sharing of terrorist acts. She says: “This is not about freedom of expression. This is about preventing violence and extremism and terrorism online”.
This approach is easier than going down the route of attempting to take on “hate speech” and extremist politics in general. And that is also the advice of Paul Brislen: “There are a number of things they should be looking at. The trick will be narrowing it down to something that is achievable because there are so many things that are getting out of control with the world of social media that need a regulator to step in… Trying to stay focused is going to be critical” – see: Thomas Coughlan’s Speculation rife on value of ‘Christchurch Call’.
But even a focus just on violence and terrorism could be incredibly difficult. The same article makes this point: “Victoria University of Wellington media studies lecturer Peter Thompson said just defining what terrorism was presented difficulties. ‘It’s not a straightforward thing to decide what is and isn’t terrorism: live-streaming mass murder, well yes, but how do you decide which groups are considered terrorists or not?’ he said.”
Rick Shera from Netsafe and Internet NZ is also pleased that the Government is focused on dealing to the narrower and less contentious issue of terrorism: “I’m glad we are sticking to violent extremism and terrorism. Once you go into fake news, damage to democracy and other forms of online harm it becomes very difficult. Freedom of speech and the US position on that make it hard to make gains, so if the target is narrow it may be easier” – see: Colin Peacock’s Does social media reform have the law on its side?
In this article by Peacock, the complicating factor of the United States is brought into the debate. After all, the US tech companies are based there, and benefit from that country’s very strong ethos of political freedom and its constitutional protections. This is lamented by some participants. For example, Internet NZ’s chief executive Jordan Carter is quoted, saying “The nature of their black and white constitutional protections on free speech in the US – and the current state of their politics – don’t leave me with any confidence that they will be able to drive change in this area”.
Clearly, the strong US resistance to censorship and over-regulation of speech means that Ardern’s “Christchurch Call” could run into problems. And it’s not just the US Constitution that might stymie reform, as explained by tech expert and journalist Bill Bennett, in Peacock’s article: “The problem with the US is they have two things that stop them from acting. One is the First Amendment which is all about free speech and not censoring people. The second thing is something called Section 230 that gives social media companies an out. They are not responsible for things posted on their site”.
There are, however, some major debates going on in the US about Section 230 of the Communications Decency Act. The above article cites internet law academic Eric Goldman and suggests the outcome of that debate might be crucial: “He thinks cutbacks of Section 230’s scope do pose serious risks to free speech online. So is it the outcome of this behind-the-scenes legal argument playing out in the US right now – and not a headline-making political summit in France – which will really determine whether internet giants take responsibility for extreme content on their platforms?”
For the best discussion of these political freedom issues, see Gordon Campbell’s column, On Ardern and Macron’s campaign against violent social media content. In this, Campbell explains what might be coming after two decades of self-regulation of the internet, given the strong political appetite for serious regulation.
He worries that Ardern and co will end up going beyond just the clampdown on terrorist and extremist violence, and might produce something that impacts on general political activity: “Once you get beyond those low hanging fruit… it becomes difficult to censor online content without doing real damage to freedom of expression, and to genuine political dissent. It would be unfortunate if the best friends of the Ardern/Macron initiatives turn out to be the tyrants in countries that would (a) dearly love to see tech companies forced to hand over the keys to encryption, and (b) would readily embrace further restrictions being put on the online content their dissidents are allowed to post.”
He also believes regulation could ultimately prove unpopular, which is why Facebook and the like want it to be carried out by governments, “presumably, so that the politicians then get to wear the backlash once people realise the full implications of allowing the state to define and police the content deemed acceptable on the Net.”
Most likely, there will be simple progress made in Paris, such as a tightening up of Facebook Live. The big question will be whether online providers end up having to do more vetting of content before it is published, which would be of huge consequence, and what Campbell calls a “disastrous outcome”.
And he gives the example of his own media platform, Scoop: “Every year, Scoop also publishes close on a million New Zealand press releases issued by all and sundry. In that respect, Scoop functions as a national community noticeboard. It rejects press releases that contain libels and/or socially inflammatory hate speech. Imagine though, if Scoop was required to pre-check every one of those press releases for accuracy, balance and for whether or not they might hurt the feelings of people in public office. It would not be remotely practical or affordable for Scoop to do so – and its efforts would be gamed by those with malice in mind against the organisations issuing the press releases in question.”
Similarly, Internet NZ’s Jordan Carter suggests that relying on artificial intelligence to vet and remove content could be a problem: “Applying overly tight automated filtering would lead to very widespread overblocking. What if posting a Radio New Zealand story about the Sri Lanka attacks over the weekend on Facebook was automatically blocked? Imagine if a link to a donations site for the victims of the Christchurch attacks led to the same outcome? How about sharing a video of TV news reports on either story?”
Carter has his own list of “six thoughts” about how to make the regulation of the internet work, including keeping the scope of the exercise narrow, and striking the right balance between “preventing the spread of such abhorrent material on the one hand, and maintaining free expression on the other” – see: How to stop the ‘Christchurch Call’ on social media and terrorism falling flat.
There really will be difficulties, no matter what approach is chosen. Claire Trevett points out: “As with climate change, making the right noises and getting the desired results are two very different things. It will be something akin to Hercules wrestling the Hydra. As soon as one head is chopped off, another two will appear” – see: PM Jacinda Ardern gathers allies to wrestle the social-media Hydra.
And it’s the politicians themselves who might have the most to lose, given their increasing preference to use Facebook and the like “to bypass the filter of the traditional media and speak directly to supporters and voters. This has some pluses for those politicians – but not necessarily for democracy. Over-reliance on social media over journalistic media allows them to escape questioning on issues they may not want to face. Macron has also come in for criticism for trying to stifle the ‘Yellow Vest’ protest use of social media. Ardern herself has been known to vote with her fingers when it comes to expressing her disapproval with certain social media platforms.”
Facebook and Instagram have been key parts of Ardern’s campaigning, and Trevett points out that “in the last election, Labour spent $475,000 on advertising on Facebook – four times as much as National – as it tried to appeal to younger voters.”
Finally, for the lighter side of the debate and some apparent irregularities in social media regulation, see Hamish McNeilly’s Gone in 20 minutes: Facebook strips student nude mag cover and Andrew Gunn’s We’re taking urgent steps to address this.