Breaking Bad: Why Facebook’s Policies Are Turning Social Media Moderators To A Life Of Crime

Blog Post

Keywords: Facebook, Social Media, Law, Radicalisation

A hard-to-read article on social media moderation, titled ‘The Trauma Floor’, recently got me thinking about the unforeseen consequences of the tech industry. To summarise, the piece describes the experiences of low-paid employees in a stressful work environment, forced to leaf through disturbing content for hours every day. It reads like a work of dark humour, which only makes it worse when you realise it is not a work of fantasy. The stressful lives of such employees are not news: taking a bird’s-eye view of work subcontracted from Silicon Valley reveals plenty of other examples, in call centres in particular and business process outsourcing in general.

Having said that, the degree to which social media moderators are affected by their work is chilling: being micromanaged to an Orwellian degree and held to extremely high expectations has led some of them to seek respite at work in offensive humour, drugs and sex. Shocking as that is, what caught my eye was the idea that moderators could be radicalised by the very exposure to questionable social media content that their job requires.

So why are Facebook and other social media giants adopting policies that could have disastrous consequences for their employees and subcontractors? This blog post takes a short look at this issue.

The Impact of Extremist Content on Radicalisation

The online availability of extremist material has long been seen to play a significant role in radicalisation. To quote a recent report from the European Commission’s Radicalisation Awareness Network:

“Although the link between extremist ideas and violence is contested and variable, exposure to extremist narratives is undeniably critical to the process of radicalisation. Extremist narratives offer cognitive closure and a quest for significance that psychologists see as fundamental motivators of human behaviour — including towards illegal violence.”

Do see the report in full, as it is quite persuasive. Social media is an obvious vector for the transmission of such material. So the primary issue to look at is how and why social media companies are involved in countering this spread.

The Noblesse Oblige of Social Media Giants

Facebook has been criticised over the past few years for radical content being posted, and spreading virally, on its platform. It has therefore acted to combat such content; for examples, see here and here. See also a recent post by Mark Zuckerberg in which he pointed to ‘harmful content’ as one of the key areas requiring regulation by governments, ostensibly so that governments can help Facebook decide what should count as harmful content. It is quite likely that the CEO of the company behind Facebook, Instagram and WhatsApp, the most significant social media sharing platforms on the planet, is taking this stance partly to shift the blame to governments and their defective pieces of legislation when further radicalisation inevitably occurs through social media content.

There is reason to believe that governments will not deal with the issue comprehensively; it has always been a challenge to draft technology laws that do not instantly become obsolete or stifle innovation, and a lot has been written on the issue of technology-agnostic regulation. It is therefore quite likely that governments will only place a general obligation on social media giants, similar in wording to Article 13 of the EU’s copyright directive, which, in summary, merely requires that ‘information society service providers’ take measures to enforce copyright claims and not host infringing content. The Directive only states that:

“Those measures, such as the use of effective content recognition technologies, shall be appropriate and proportionate.”
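To make ‘effective content recognition technologies’ a little more concrete, here is a minimal sketch of one possible approach, assuming a simple fingerprint-matching scheme: hash files already known to be infringing, then flag any upload whose fingerprint matches. The Directive does not prescribe any particular technique, and the database and function names below are purely illustrative.

```python
# Minimal sketch of fingerprint-based content recognition (illustrative only;
# the Directive does not mandate this or any other specific technique).

import hashlib

# Hypothetical database of fingerprints for content already known to infringe.
KNOWN_INFRINGING_HASHES = {
    "3f786850e387550fdab836ed7e6dc881de23001b",  # placeholder entry
}

def fingerprint(data: bytes) -> str:
    """Return a SHA-1 fingerprint of an uploaded file's bytes."""
    return hashlib.sha1(data).hexdigest()

def is_known_infringing(upload: bytes) -> bool:
    """Flag an upload whose fingerprint matches a known infringing item."""
    return fingerprint(upload) in KNOWN_INFRINGING_HASHES

if __name__ == "__main__":
    sample = b"some uploaded file bytes"
    print(is_known_infringing(sample))  # False unless its hash is in the set
```

In practice, exact hashing is trivially defeated by re-encoding or cropping a file, which is why deployed systems lean on perceptual fingerprints and machine learning instead; the point here is simply that the Directive leaves the choice of technology, and its cost, entirely to the service provider.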

While governments are taking steps to hold social media giants accountable for violent content on their platforms, the way they are going about it is quite bizarre. For example, recent Australian legislation (the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019) deals with “abhorrent violent material”, defined (in Section 474.31) as any audio, visual, or audio-visual material that ‘records or streams abhorrent violent conduct’ and that ‘reasonable people would regard as being, in all the circumstances, offensive’. The Act then defines “abhorrent violent conduct” (in Section 474.32) to include terrorist acts, murder, torture, rape, and kidnapping.

The legislation states that if social media company executives are aware that the service they provide ‘can be used to access particular material that the person has reasonable grounds to believe is abhorrent violent material’, then they could face jail time and massive fines (Section 474.33). The only way to avoid this penalty is if they ensure ‘the expeditious removal of the material from the content service’ (Section 474.34); what methods may be used for expeditious removal are left to one’s imagination, since the law certainly doesn’t elaborate. Indeed, the explanatory memorandum to the law only vaguely states that a ‘number of factors’ must be evaluated to determine whether the content was removed expeditiously, such as the ‘type and volume’ of the content and the ‘capabilities and resources’ available to the content service provider. This language was probably included to ensure that an unreasonable burden is not placed on content providers that have nowhere near the resources available to companies the size of Facebook.

None of the language in the law holds content service providers accountable for the damage they may cause to their human content moderators. Nothing in it would make service providers pause in their efforts to avoid jail time or fines and focus instead on reducing the psychological damage to their own employees and contractors. Note that anyone who views extremist content runs the risk of being radicalised. It is entirely possible that those directly responsible for moderation see hundreds of pieces of harmful content and, in doing so, end up radicalising themselves. The article by The Verge itself hints that this is a possibility, giving examples of people suffering from paranoia and extreme stress.

If similar language is used to describe Facebook’s obligation to monitor its platform for harmful content (however harmful content is defined), Facebook and other social media giants will be forced to use automated moderators to ‘expeditiously’ remove that content. However, another article from The Verge has looked at why machine learning and automation are nowhere near the point where these methods work all the time, so companies will have to keep relying on human moderators alongside the automated systems, as the sketch below illustrates.
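To illustrate why human moderators remain in the loop, here is a minimal sketch of a hybrid triage pipeline: an automated classifier handles the cases it is confident about, and everything ambiguous lands in a human review queue. The classifier, thresholds and queues below are hypothetical stand-ins, not a description of Facebook’s actual systems.

```python
# Hypothetical hybrid moderation triage: confident machine decisions are
# automated, ambiguous content is routed to human moderators.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ModerationQueues:
    auto_removed: List[str] = field(default_factory=list)
    human_review: List[str] = field(default_factory=list)
    published: List[str] = field(default_factory=list)

def classify(post: str) -> float:
    """Stand-in for an ML model: returns a probability that the post is harmful."""
    # A real system would call a trained classifier here.
    return 0.95 if "abhorrent" in post.lower() else 0.5

def triage(post: str, queues: ModerationQueues,
           remove_above: float = 0.9, publish_below: float = 0.1) -> None:
    """Auto-remove confident positives, auto-publish confident negatives,
    and send everything in between to human moderators."""
    score = classify(post)
    if score >= remove_above:
        queues.auto_removed.append(post)
    elif score <= publish_below:
        queues.published.append(post)
    else:
        # The ambiguous middle band is where the human workload accumulates.
        queues.human_review.append(post)

if __name__ == "__main__":
    queues = ModerationQueues()
    for post in ["holiday photos", "abhorrent violent clip"]:
        triage(post, queues)
    print(f"{len(queues.human_review)} post(s) need human review")
```

That ambiguous middle band is precisely where the human workload, and therefore the psychological exposure discussed above, accumulates.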

A report from two special rapporteurs mandated by the United Nations’ Human Rights Council (on the promotion and protection of the right to freedom of opinion and expression; and on the promotion and protection of human rights and fundamental freedoms while countering terrorism) has also taken a dim view of the Australian law, asking the government to withdraw it so that it can be evaluated to ‘ensure consistency with international human rights standards’. However, there is movement in other parts of the world to adopt similar legislation; the UK, for example, is contemplating legislation to make social media giants accountable for extremist content as well.

This raises interesting questions about the culpability of social media giants and their subcontractors when their drive to reduce the cost of moderation leads to poor employee protection: would they be held responsible if a self-radicalised employee goes on to commit a violent crime? Even short of ‘radicalisation’, how responsible are social media companies when their employees begin to embrace bizarre conspiracy theories? The brute-force approach taken by legislators, such as those in Australia, is undoubtedly also to blame for the stress placed on the thousands of people around the world charged with moderating social media content. I hope further research is conducted along these lines.

Ketan Modh, ESR 10, is currently a PhD Candidate at the University of Malta researching Identity Management (specifically, national ID cards) as part of the ESSENTIAL Project. Follow him on Twitter @ketansmodh. He can be contacted at ketan.modh@um.edu.mt
