A Japanese phone shows the Facebook logo

Facebook reveals hate speech prevalence for the first time

An average of roughly 10m instances of hate speech per month was recorded across Facebook and Instagram

Anthony Cuthbertson
Thursday 19 November 2020 18:29

Facebook has published figures on the prevalence of hate speech on its platforms for the first time, revealing tens of millions of instances of harmful content in the space of just three months.

The Community Standards Enforcement Report detailed 22.1 million pieces of hate speech content on Facebook and 6.5 million instances of hate speech on Instagram between July and September of this year.

The tech giant also identified more than 13 million pieces of child nudity and sexual exploitation content, and more than a million pieces of suicide and self-injury content, across its platforms.

In a call with reporters prior to the release of the report, Facebook’s Arcadiy Kantor said the prevalence of hateful content is measured by the number of times it is seen by users, rather than by the number of posts containing it.

“Based on this methodology, we estimated the prevalence of hate speech from July 2020 to September 2020 was 0.10 per cent to 0.11 per cent,” he said.

"In other words, out of every 10,000 views of content on Facebook, 10 to 11 of them include hate speech… Our goal is to remove hate speech any time we become aware of it, but we know we still have progress to make."

Facebook also provided information about its attempts to crack down on misinformation relating to coronavirus, as well as its approach to conspiracy theories about the Holocaust and QAnon.

Between March and October, the company removed more than 12 million pieces of content from Facebook and Instagram for containing misinformation that could lead to “imminent physical harm, such as content relating to fake preventative measures or exaggerated cures”.

A further 167 million pieces of content relating to the Covid-19 pandemic were labelled with misinformation warnings by its fact-checking partners.

The crackdown on content that violates its policies has been fuelled by improvements in its artificial intelligence systems, Facebook said.

Mike Schroepfer, the firm’s chief technology officer, said AI now detects 94.7 per cent of the hate speech that is removed, up from just 24 per cent in 2017.

“A central focus of Facebook’s AI efforts is deploying cutting-edge machine learning technology to protect people from harmful content. With billions of people using our platforms, we rely on AI to scale our content review work and automate decisions when possible,” he said. 

“Our goal is to spot hate speech, misinformation, and other forms of policy-violating content quickly and accurately, for every form of content, and for every language and community around the world.”
