Does censoring the radical right on social media work?

Censorship leads to various responses on the radical right, such as migration to other platforms like Parler.

CREST Research / Copyright ©2017 R. Stevens / CREST (CC BY-NC-SA 4.0). creativecommons.org/licenses/by-nc-sa/4.0/


Ofra Klein, 7 December 2020

Since the recent US election, the user base of the “free-speech” platform Parler has grown. It is not the first time conservatives have flocked away from Twitter and Facebook in search of a less controlled alternative. The moderation of hateful or radical content on social media has been a central point of discussion in recent years. Yet much remains unknown about how platforms decide what content to moderate and what the consequences are for mobilization on the radical right.

The 2020 US elections showed a shift in how platforms dealt with political content. Twitter did not allow political advertisements at all, and Facebook restricted advertising in the days before the election. During the election and the vote count, Facebook and Instagram added real-time information from news sources directly to the posts of both presidential candidates. Twitter used various notices to inform users that the content of tweets was disputed, and made such tweets harder to share.

Platform interference is not a new phenomenon. Behind the screens, thousands of content moderators clean social media platforms of their most gruesome content. Child pornography or videos of beheadings are clear cases that need to be removed. Most cases of removal are not so obvious. When it comes to removing hateful tweets or radical right pages, it is often opaque why certain content is taken down. Content moderators are frequently so overwhelmed by the amount of flagged content that they lack the time to provide feedback on individual removals.

To gain insight into the practice of content removal, I tracked the Facebook networks of Western European radical right pages over time. While local Facebook pages of anti-migrant groups were abundant in Germany in 2014 and 2015, such groups have nowadays largely disappeared. Certain actors – subgroups, movements and their leaders – are less often present on mainstream social media than parties, politicians and partisan news media. Earlier studies, such as those by Jasper Muis and myself, and by Caterina Froio and Bharath Ganesh, show that such movements and subgroups did use these platforms quite extensively before 2016. This suggests that moderation has indeed become harsher over time.

much remains unknown about how content is moderated and what the consequences are for mobilization on the radical right

Tracking radical right actors over time gave me an insight into where platforms draw the line: content that is considered newsworthy or posted by ‘public figures’ is allowed, while movements and subgroups that actively mobilize people into action cross it. Actors on the right seem to be aware of platforms’ takedowns to a certain extent. Movements or subgroups that had not been removed from mainstream platforms posted noticeably less, or even fell completely silent, after similar pages were taken down. This suggests that platforms’ censorship has a chilling effect on these actors’ behavior. Lars Erik Berntzen and Manès Weisskircher show how local Pegida Facebook pages remained quiet after the pages of other Pegida subgroups were removed. Staying quiet is a strategy to remain under the radar while keeping the page alive as a place where people can come together and read up on old content.

Tracking networks might give an indication of what stays and what goes on platforms. Yet it does not yield a more fine-grained understanding of why certain content was removed, or whether it was removed by the platform or by the page owners themselves. Nor does it show when removal took place or what the direct reasons behind the decision were. Moreover, it provides no insight into less visible forms of moderation, such as ‘shadow bans’, in which content is not removed but is ranked lower algorithmically, making it less visible to viewers. Nigel Farage argued that Facebook did this with UKIP’s posts back in 2017.

Censorship leads to various responses on the radical right. Migration to other platforms, such as Parler, is a common strategy. In 2016, the removal of alt-right influencers led to the creation of the platform Gab. The Russian platform VKontakte and gaming apps such as Discord are other popular refuges. Movements also now use closed Facebook groups rather than public pages to coordinate actions. The large-scale Yellow Vests protests in France were an example of this, as was the Stop the Steal group that Facebook recently banned. Actors do not just shift their activities; they also change their strategies. Remaining quiet is one example; using coded language to make hateful speech less obvious is another.

At the moment, little is known about the consequences of censorship. Tracking the movements of groups and individuals across platforms is a tricky matter. Not only is it technically difficult to obtain comparable data from different platforms, but the growing number of ‘alternative tech’ platforms, such as Gab and Parler, not to mention messaging apps, makes it hard to follow individuals from one platform to the next.

Moreover, while studies have linked radicalization to watching online content and frequenting certain platforms, individuals who decide to use these platforms might already be more radical than those who stay away. And while social media have been linked to the success of the radical right, we do not know much about whether censorship by these platforms actually reduces support for it. Removing radical actors from mainstream platforms can, on the one hand, significantly reduce their audiences; on the other, it can contribute to increased feelings of resentment and victimhood, forming a breeding ground for even stronger discontent.