
Euro 2020: What could social media companies actually do about racist abuse – and would it work?

Artificial intelligence, better moderation, ‘real-name’ policies, and the Online Harms bill all aim to solve a serious problem

Adam Smith
Monday 12 July 2021 17:02 BST

Penalties once again brought heartbreak for England, as Italy won the Euro 2020 final at the end of an otherwise triumphant campaign.

Marcus Rashford, Jadon Sancho and Bukayo Saka – three young, black players – missed the penalties that would have put the cup into England’s hands.

In the moments that followed, a vocal minority took to social media to direct racist abuse at the players: monkey and banana emojis were posted in the comments on Saka’s Instagram page, and multiple uses of the n-word, among other slurs, were seen on Twitter.

Not for the first time, legislators, campaigners, and social media companies condemned this behaviour and proposed various solutions: a crackdown on anonymous accounts, increased use of artificial intelligence or other moderation techniques, and the government’s controversial Online Harms bill.

Each of these suggestions has its positives and negatives, but none gets to the root of the issue.

The threat of anonymous accounts is a common concern. “Some people still see social media as a consequence-free playground for racial abuse – as we saw last night with England players”, Bill Mitchell, director of policy at BCS, The Chartered Institute for IT, says.

“IT experts think these platforms should ask people to verify their real ID behind account handles; at the same time, public anonymity is important to large groups of people and so no one should have to use their real name online and any verification details behind the account must be rigorously protected.”

However, this is unlikely to work. Facebook has enforced a real-name policy since 2015, but abuse still flourishes on its platform – and researchers have shown that Twitter metadata can identify users with 96.7 per cent accuracy.

“Racism and oppression predate the internet and social media – people have been doing and saying oppressive things for centuries”, Dr Francesca Sobande, a digital media studies lecturer at Cardiff University, points out, adding that there are already “so many people doing and saying things under their real names.”

The government’s Online Harms bill was raised again by Culture Secretary Oliver Dowden as a potential solution. “I share the anger at appalling racist abuse of our heroic players”, he tweeted the day after the final.

“Social media companies need to up their game in addressing it and, if they fail to, our new Online Safety Bill will hold them to account with fines of up to 10 per cent of global revenue”.

The legislation requires platforms to abide by a code of conduct overseen by the regulator Ofcom, blocking content that is legal but could cause significant physical or psychological harm, and potentially holding individual executives to account.

The philosophical and practical ramifications of such a law are wide. It has been criticised as an assault on free speech, albeit one lawmakers appear willing to accept in exchange for greater protection. Unlike the United States, where many of these social media companies are headquartered, the United Kingdom does not share the absolutist view of speech enshrined in the US constitution.

The law has also been criticised as too vague and as one that “incentivises overzealous removal of content”, in the words of Adam Hadley, director of the Online Harms Foundation. “Bad actors such as terrorists are more likely to be found on smaller platforms and websites they build and control themselves – not the big tech platforms the government’s proposals are targeting”, he continued, although the veracity of that statement is up for debate.

Nevertheless, ‘overzealous’ content removal is exactly what happened when YouTube switched to an algorithmic content management system while its human moderators were working from home during the pandemic – and it may even prove beneficial.

YouTube took down numerous comments that did not breach the company’s policies, saying it was “accept[ing] a lower level of accuracy to make sure that we were removing as many pieces of violative content as possible”.
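That trade-off is easiest to see as a thresholding decision. The following is a minimal illustrative sketch – not YouTube’s actual system, and every name and number in it is an assumption made for the example – showing how lowering an automated classifier’s removal threshold catches more abuse while sweeping up more legitimate posts:

```python
from typing import Callable

def moderate(posts: list[str],
             score: Callable[[str], float],
             threshold: float) -> list[str]:
    """Keep only posts whose violation score stays below the threshold."""
    return [post for post in posts if score(post) < threshold]

# Toy scorer standing in for a real machine-learning model: scores run
# from 0.0 (clearly benign) to 1.0 (clearly violative). All hypothetical.
SCORES = {
    "What a save!": 0.05,
    "You absolute disgrace, go home": 0.60,   # heated but within the rules
    "<racist slur>": 0.95,                    # clearly violative
}

def toy_score(post: str) -> float:
    return SCORES[post]

posts = list(SCORES)

# Cautious threshold: only near-certain violations are auto-removed,
# leaving the grey area to human moderators.
print(moderate(posts, toy_score, threshold=0.9))
# -> keeps "What a save!" and the heated post; removes only the slur

# Aggressive threshold, as during the pandemic switch: the heated-yet-legal
# post is removed too – the "lower level of accuracy" YouTube described.
print(moderate(posts, toy_score, threshold=0.5))
# -> keeps only "What a save!"
```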

This use of artificial intelligence is another oft-mentioned solution to harmful comments on social media, usually proposed by the technology companies that own these machine-learning systems. But those companies are often resistant to calls to give the public a better view of how those same systems work.

These algorithms can remove content at a greater scale and a faster speed than people can, and can potentially spare human moderators – many of them contractors for huge technology platforms – the trauma they endure reviewing such material.

Then there is the speed and scale at which content is posted: around 6,000 tweets are sent every second, and more than 350 million photos are uploaded to Facebook each day. It is not possible for humans to moderate all of this in real time.
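A rough back-of-envelope calculation shows why. The sketch below uses the figures above plus one assumption of ours – ten seconds of human attention per tweet – that is illustrative, not sourced:

```python
TWEETS_PER_SECOND = 6_000
SECONDS_PER_DAY = 86_400
REVIEW_SECONDS_PER_TWEET = 10      # our assumption, not a sourced figure
SHIFT_SECONDS = 8 * 3_600          # one eight-hour moderator shift

tweets_per_day = TWEETS_PER_SECOND * SECONDS_PER_DAY
shifts_needed = tweets_per_day * REVIEW_SECONDS_PER_TWEET / SHIFT_SECONDS

print(f"{tweets_per_day:,} tweets per day")       # 518,400,000
print(f"{shifts_needed:,.0f} moderator shifts")   # 180,000 – for Twitter alone
```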

Social media companies could delay the time between a post being sent and it becoming visible, giving their systems longer to check it and reducing the likelihood of mistakes. However, that would interfere with the instantaneous nature of their products – as well as their profits – so it seems unlikely.
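In engineering terms, this would amount to a review buffer between submission and publication. The sketch below is a minimal illustration of that idea under our own assumptions (a 30-second window and a pluggable automated check); it is not any platform’s actual architecture:

```python
import time
from collections import deque
from typing import Callable

REVIEW_WINDOW = 30.0  # hypothetical seconds between "send" and "visible"

# Each entry pairs a submission timestamp with the post itself.
pending: deque[tuple[float, str]] = deque()

def submit(post: str) -> None:
    """Accept a post but hold it back for the review window."""
    pending.append((time.monotonic(), post))

def release_due_posts(is_violative: Callable[[str], bool]) -> list[str]:
    """Publish every post whose window has elapsed and that the
    automated check has not flagged in the meantime."""
    published = []
    while pending and time.monotonic() - pending[0][0] >= REVIEW_WINDOW:
        _, post = pending.popleft()
        if not is_violative(post):
            published.append(post)
    return published
```

Running release_due_posts on a schedule would trade half a minute of immediacy for a second pass over every post – exactly the latency-versus-safety trade described above.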

Unfortunately, automated systems also succumb to bias, and social media companies rarely provide satisfying explanations for these failures. During Israel’s bombing of Gaza in May 2021, Facebook, Twitter, and Instagram made numerous errors in trying to moderate content from the region – which the companies blamed on bugs in their automated systems.

This included Instagram removing or blocking posts with hashtags for the Al-Aqsa Mosque, the third-holiest site in the Islamic faith, as its moderation system mistakenly deemed the religious building a terrorist organisation, and Twitter temporarily restricting the account of Palestinian-American writer Mariam Barghouti, who was reporting on Palestinians being evicted from Sheikh Jarrah.

The algorithms themselves, many people have pointed out, can also be used to promote extremism unless the social media companies themselves intervene to diminish it. It has been alleged that Facebook knew its platform encouraged polarising content, but that proposals to change this were deemed “antigrowth”, so the research was reportedly shelved.

Such stories come amid investigations into the relationship between social media giants and right-wing governments, who have much to gain politically from stoking the fires of the ‘culture wars’ – and could strengthen the case for the Online Harms bill’s targeting of specific executives.

Leaked audio of Mark Zuckerberg revealed that he would not “change our policies or approach on anything because of a threat to a small percent of our revenue”, after more than 500 brands pulled $4.2 billion worth of advertising from Facebook in protest at its failure to protect people of colour. If the free market cannot regulate these companies satisfactorily, it may be left to governments to do so in a harsher manner than these giants would prefer.

Mr Dowden’s tweet came amid allegations from Lord Woolley that prime minister Boris Johnson has ‘zero plans’ to tackle the impacts of racism, and after home secretary Priti Patel said fans have a right to boo footballers who take the knee; it did not address the criticism that those politicians need to do more themselves to stop such racist abuse.

“The reason we have so much racist abuse on social media isn’t because the social media companies ‘aren’t doing enough’, it’s because we have so much racism in our society”, Paul Bernal, a lecturer in Information Technology, Intellectual Property and Media Law at the University of East Anglia Law School, tweeted.

“The real problem comes from the top. If our elected representatives encourage racism and abuse (and booing) then of course people feel entitled to abuse. Conversely, if there are enough racists for the racist vote to be important, we have a BIG problem.”
