Source: The Conversation (Au and NZ) – By Tauel Harper, Lecturer, Media and Communication, University of Western Australia
The federal government’s News Media and Digital Platforms Mandatory Bargaining Code, which passed the Senate today, makes strong points about the need to regulate misinformation.
In response, Google, Facebook, Microsoft, TikTok, Redbubble and Twitter have agreed to abide by a code of conduct targeting misinformation.
Suspiciously, however, the so-called Australian Code of Practice on Disinformation and Misinformation was developed by, well, these same companies. Behind it is the Digital Industry Group Inc (DIGI), an industry association they formed with several other companies.
In self-regulating, they hope to show the government they’re addressing the proliferation of misinformation (false content spread without intent to deceive) and disinformation (false content spread with intent to deceive) on their platforms.
But the only real commitment under the code would be to appear to be doing something. Since the code is voluntary, the platforms that have signed up can basically “opt in” to the measures at their own discretion.

A modest goal
The code suggests platforms might release data on trends in known misinformation, or might label known false content or content spread by seemingly unreliable sources. They might identify and restrict paid political ads that try to deceive users, or they might reveal the sources of misinformation.
These are all worthwhile actions the platforms “might” take, but the code doesn’t bind them to any of them. Rather, it will likely encourage them to police misinformation around an “issue of the day” by taking visible action on one topic, without confronting the spread of other profitable false information on their platforms.
The consequences of this could be serious. False “news” can fuel dangerous conspiracies and armed attacks. It can even influence elections, as we saw in 2019 when Facebook hosted posts claiming the Labor party would introduce a “death tax” on inheritance. Things quickly spiralled.
The government has promised tougher regulation of misinformation if it feels the voluntary code isn’t working. Still, we should be careful about allowing the powerful to regulate the powerful.
It’s unclear, for instance, whether the Morrison government would view posts about a supposed Labor “death tax” as being a real threat to democracy — even though this is misinformation.
Read more: How political parties legally harvest your data and use it to bombard you with election spam
There are better options
Regulating speech on the internet is difficult. In particular, misinformation is hard to define because the distinction between genuinely dangerous misinformation and valued myth or opinion often depends on a community’s values.
The latter is information that may not be accurate but which people still have a right to express. For instance:
Nickelback is the best band on the planet.
This is probably untrue. But the statement is relatively harmless. While it lacks “truthfulness”, its subjective nature is clear. Given this nuance, the solution is for misinformation to be policed by the community itself, not by an elite body.
Reset Australia, an independent group that targets digital threats to democracy, recently proposed a project in which interested tech platforms and members of the public could subscribe to a live list of the most popular misinformation content.
A citizen-run jury could monitor the list to help ensure public oversight. This would involve the whole public sphere in the debate about misinformation, not just the government and platforms.
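To make the proposal concrete, here is a minimal sketch of how such a live list might be wired up. Everything here is an illustrative assumption: the class names, the ranking of content by share count and the jury verdict record are not part of Reset Australia’s actual design.

```python
from dataclasses import dataclass

@dataclass
class Post:
    url: str
    claim: str
    shares: int     # popularity signal used to rank the live list
    flags: int = 0  # public reports received so far

class LiveMisinformationList:
    """Hypothetical live list of the most-shared suspect content,
    with a citizen-jury record providing public oversight."""

    def __init__(self, top_n: int = 10):
        self.top_n = top_n
        self.posts: list[Post] = []
        self.subscribers = []  # callbacks for platforms, researchers, journalists
        self.jury_verdicts: dict[str, str] = {}

    def subscribe(self, callback) -> None:
        self.subscribers.append(callback)

    def report(self, post: Post) -> None:
        """A member of the public flags a post; every subscriber
        then receives the updated top-N list."""
        post.flags += 1
        if post not in self.posts:
            self.posts.append(post)
        top = sorted(self.posts, key=lambda p: p.shares, reverse=True)[: self.top_n]
        for notify in self.subscribers:
            notify(top)

    def record_verdict(self, url: str, verdict: str) -> None:
        """The citizen jury's call ('misinformation' or 'opinion') stays on the public record."""
        self.jury_verdicts[url] = verdict
```

The point of such a design is transparency: because all subscribers see the same ranked list and jury verdicts stay on the record, no single platform or ministry quietly decides what counts.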
Once fake news is out in the open, public figures, journalists and academics can more easily expose it.
Who can you trust more?
Another effective strategy would be to create a national register of misinformation sources and content. Anyone could register what they think is misinformation with the Australian Communications and Media Authority (ACMA), helping it quickly identify malicious sources and alert the platforms.
Digital platforms already do this internally, both through moderators and by allowing the public to report posts. But they don’t show how posts are judged and don’t release the data. By creating a public register, ACMA could monitor whether platforms are self-regulating effectively.
Such a register could also keep a record of legitimate and illegitimate information sources and give each one a “reputation score”. People who accurately reported misinformation could also receive high ratings, similar to Uber’s ratings for drivers and passengers.
While this wouldn’t restrict anyone’s right to expression, it would make it easier to point to the reliability of a source of information.
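As a thought experiment, a register with reputation scores could look roughly like the sketch below. The names and the simple ±1 scoring rule are assumptions for illustration only; nothing here reflects an actual ACMA system.

```python
class MisinformationRegister:
    """Hypothetical public register in which sources and citizen
    reporters each carry an Uber-style reputation score."""

    def __init__(self):
        self.source_scores: dict[str, float] = {}    # outlet/domain -> score
        self.reporter_scores: dict[str, float] = {}  # reporter -> score
        self.pending: list[tuple[str, str]] = []     # (reporter, source) awaiting review

    def file_report(self, reporter: str, source: str) -> None:
        """Anyone can register what they believe is misinformation."""
        self.pending.append((reporter, source))

    def review(self, reporter: str, source: str, confirmed: bool) -> None:
        """After public review, a confirmed report lowers the source's
        score and raises the reporter's; a false alarm does the reverse."""
        self.pending.remove((reporter, source))
        delta = 1.0 if confirmed else -1.0
        self.reporter_scores[reporter] = self.reporter_scores.get(reporter, 0.0) + delta
        self.source_scores[source] = self.source_scores.get(source, 0.0) - delta

    def reputation(self, source: str) -> float:
        """Lets anyone point to the recorded reliability of a source."""
        return self.source_scores.get(source, 0.0)
```

Rewarding accurate reporters while penalising false alarms is what would give such a register some resistance to the trolling problem described next.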
It’s worth noting this type of community-based peer review system would be open to abuse. Movie review site Rotten Tomatoes has had serious problems with people trolling film reviews.
For example, Captain Marvel was awarded a low audience rating because toxic online communities decided they didn’t like the idea of a female superhero, so they coordinated to rate the film poorly. But the platform was able to identify this pattern of behaviour.
The site ultimately protected the film’s score by ensuring only people who had bought a ticket to see the movie could rate it. While any system is open to abuse, so is “self-regulation”, and communities have shown they can (and are willing to) solve such problems.
Wikipedia is another community-driven peer review resource and one which most people consider highly valuable. It works because there are enough people in the world who care about the truth.
Judging the accuracy of claims in public allows for a consensus that remains open to challenge. On the other hand, leaving decisions about truth to private companies or political parties could actually exacerbate the misinformation problem.
A chance to move news into the 21st century
The news media bargaining code has finally passed. Facebook is set to bring news back to Australia, as well as start making deals to pay local news publishers for content.
The agreement between the government and Facebook — which serves the interests of those parties — seems like just another echo of the past. Large media players will retain some revenue and Google and Facebook will continue to expand their immense control of the internet.
Meanwhile, users remain reliant on the benevolence of tech platforms to do just enough about misinformation to satisfy the government of the day. We should be careful about surrendering power to both platforms and governments.
This new code won’t force significant change from either, despite the pressing need for it.
Read more: Google is leading a vast, covert human experiment. You may be one of the guinea pigs
– ref. We can’t trust big tech or the government to weed out fake news, but a public-led approach just might work – https://theconversation.com/we-cant-trust-big-tech-or-the-government-to-weed-out-fake-news-but-a-public-led-approach-just-might-work-155955