Source: The Conversation (Au and NZ) – By Madeleine Hale, PhD Candidate in Free Speech, Democracy and Social Media, Deakin University
Recently, when scrolling through TikTok – purely for research purposes of course – we paused on a video “spilling the tea” (that means sharing the goss) on the hottest new social media app, BeReal.
As social media researchers and teachers of Gen Z university students, we try to stay current with the latest trends. BeReal is a refreshing change from curated feeds – however, as with most new social media platforms, free speech issues may lurk just around the corner.
A collection of mundane photos
Self-described as “not another social network” by French founders Alexis Barreyat and Kévin Perreau, BeReal is the anti-Instagram version of Snapchat, where users are encouraged to be “real” and “authentic”.
Once a day, BeReal users receive a notification with a strict deadline of two minutes in which to post an unfiltered photo of themselves, in all their pyjama-clad, Netflix-watching glory. Or engaging in whatever activity makes up most of our days – studying, working, running errands, making dinner.
The end result is a feed of mundane photos of friends (or strangers found through the Discovery feature) that is equal parts liberating and comforting. It reminds us that most people's lives are just as ordinary as our own, and offers a refreshing change from the highly curated feeds we usually see.
As a fledgling app still in its honeymoon phase, with little scrutiny to date, BeReal is yet to see significant public scandals. For now, the app may just be a fun way for young people to connect.
But, as researchers examining the interplay between social media and free speech, we can imagine free speech issues on the horizon for BeReal, whose vague terms of use give the company a high degree of discretion over content moderation.
We’ve seen this before on platforms like Facebook, which has been accused of censorship by deleting Black Lives Matter protest content and the iconic “Napalm Girl” photo.
Free speech tensions in this area are fraught. While social media companies are often accused of unwarranted censorship, they also face significant pressure to limit harmful content like hate speech on their platforms.
Just days ago, TikTok, Twitter and Instagram rushed to remove anti-Semitic content shared by Ye (formerly known as Kanye West). Now the rapper has pledged to buy so-called “free speech” app Parler as a reaction against his deplatforming.
Meanwhile, the proposed banning of TikTok in the US due to data security concerns has been met with claims of free speech violations from critics, such as free speech organisation Article 19.
Terms of use with little guidance
For the more recent BeReal, free speech may become an issue if it attracts more nefarious uses. So far, the app’s terms of use encourage users to report “illicit or inappropriate” content while shirking liability as a mere “hosting company” rather than an actual publisher of content.
The terms also require users not to post any “content of a sexual and/or child pornographic nature, [or material] calling for hatred, terrorism, violence in general or against a group of people in particular, inciting others to endanger themselves or provoking suicide”.
While this is in line with content policies on other social media platforms, problems potentially remain for free speech.
Firstly, BeReal’s terms provide little guidance on what constitutes this undesired content, leaving the platform with a high degree of discretion as to what content can be censored.
Secondly, although BeReal reserves the right to “remove or temporarily or permanently suspend access” to violating content, it is not clear whether users will receive warnings prior to content removal, what breaches will trigger which disciplinary actions, and whether there are any avenues of appeal available to users.
Censorship would be easy
Imagine, for example, that a BeReal user with many followers posts an image of police using unreasonable force to arrest a person at a protest. BeReal might delete the content for depicting violence. Yet images such as these can often constitute important political speech.
Free speech laws usually only prevent censorship by governments, not companies – though some academics have questioned this distinction given the unprecedented power of social media companies to censor speech online.
With this in mind, we should be concerned material on BeReal could be removed without explanation, warning, transparency or avenue for appeal.
Consumers should be concerned about the sweeping discretion social media companies like BeReal have over so much of our speech. In addition to independent oversight, greater regulation and a code of conduct, we should demand clearer avenues of appeal and greater transparency regarding content moderation decisions from all social media platforms.
BeReal is currently not much more than a gallery of the mundane, and as such, these may not be pressing issues. Especially if the app is a passing fad, which it may well prove to be.
But with all eyes on Meta, TikTok and Twitter, smaller companies can fly under the radar. We always need to be ready to protect speech in the next potential “marketplace of ideas”.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.
– ref. Millions of users are flocking to the BeReal app – but it may pose free speech issues – https://theconversation.com/millions-of-users-are-flocking-to-the-bereal-app-but-it-may-pose-free-speech-issues-192629