How social media companies are fighting to remove graphic content after TikTok's viral suicide video

More moderators, or better artificial intelligence, are only partial solutions to the problem of graphic content

Adam Smith
Tuesday 08 September 2020 12:39 BST
Facebook CEO Mark Zuckerberg addresses Cleveland shooting

Social media companies are continually trying to find the best way to moderate their platforms for harmful content.

Recently, a video of a suicide went viral on TikTok, having previously spread on Facebook via its livestream function, Instagram, and 4chan. 

In this task, companies have two major weapons: human moderation and machine learning.  

On the machine learning and artificial intelligence side, Facebook is developing an algorithm to detect “hateful” memes alongside its existing systems, while other companies use chatbots to detect sexual harassment.

During the coronavirus pandemic, YouTube also relied more heavily on its algorithms to remove content.

The company took down more videos in the second quarter of 2020 than ever before, trading a “lower level of accuracy” for the knowledge that it was “removing as many pieces of violative content as possible".

However, artificial intelligence has its failings. In a 2019 report, Ofcom noted that users will try to subvert machine-based moderation, that these systems cannot understand context as well as humans can, and that they can undermine freedom of expression.

An infamous example of this is Facebook censoring an image of a child victim of the Vietnam war, because its systems could not distinguish the famous war photograph from images of child abuse.

In response to such issues, Facebook set up an Oversight Board to handle contentious content moderation decisions independently of the social media giant.

It is one of many companies that are using human moderation to fill the gaps in their algorithms’ knowledge.

However, this can take a severe toll on the moderators, who spend hours each day reviewing potentially infringing content uploaded to the platforms.

Moderators have reported becoming “normalised” to extreme content, finding themselves drawn to material they would never normally view, such as bestiality and incest.

Others suffer from panic attacks, take drugs to numb their experiences, and become paranoid.

Some also start to believe the content they have to moderate, including Holocaust denial or flat Earth theories.

The problem arises partly from the sheer volume of content uploaded to these platforms.

Between April and June 2019, Instagram took down 834,000 pieces of graphic content from its site – a tiny proportion of the huge number of posts uploaded each day.

Technology companies often outsource content moderation to third-party firms, and so do not see the human cost of that labour as directly.

The problem is worse in parts of Asia, where moderation work is outsourced more heavily but labour is less well protected than in the US or other Western countries.

Social media companies’ algorithms can also push users towards extreme content, because such material generates the kind of engagement the platforms are looking for.

YouTube's recommendation algorithm has been condemned for directing users to videos promoting extremist ideologies, while Instagram’s was denounced for pushing young girls down a rabbit-hole of self-harm images.

Facebook’s algorithm was found to be pushing Holocaust denial, and the company reportedly shelved research that would have made its platform less divisive because the changes would have been “antigrowth” and required “a moral stance.”

In light of this, the UK government has proposed Online Harms regulation, which would give large social media companies a “duty of care” for users’ wellbeing, with penalties from Ofcom for those that fail to comply, although some have said the law itself could stifle free speech.
