Facebook reveals full scale of problematic content as it says new technology is helping it take down posts

Andrew Griffin
Thursday 11 February 2021 18:12 GMT
A 3D-printed Facebook logo is seen placed on a keyboard in this illustration (Reuters)

Facebook has revealed the scale of problematic content on its main platform and on Instagram.

The company has published its latest "Community Standards Enforcement Report", in which it details how it is enforcing its policies on what content is allowed on Instagram and Facebook.

It also said that it would be taking new steps to be more transparent about how those decisions are made, amid ongoing controversy about what content is available on its platforms.

The latest report covers the final quarter of 2020, between October and December. As such, it covers the US election and other votes around the world, but not the insurrection in the Capitol last month, or the ban of Donald Trump that followed it.

Despite the problems of false information and other issues that blighted the US election, Facebook said that the amount of hate speech on Facebook dropped over the quarter.

The company measures the "prevalence" of such content, meaning how often it is actually seen by users, arguing that this is the best measure of the impact such posts are having. It does not share how many of those posts there are.

It said that the prevalence of hate speech on Facebook dropped from as much as 0.11 per cent in the previous quarter to 0.08 per cent in the most recent one. The prevalence of violent and graphic content also dropped, as well as the prevalence of posts including adult nudity, it said.
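
Prevalence is expressed as a share of content views, so 0.08 per cent corresponds to roughly eight views of hate speech in every 10,000 views of content. The calculation itself is simple, as the illustrative sketch below shows; the sampling and labelling behind Facebook's real estimate is not public, and the numbers and function here are only examples.

    def prevalence(violating_views, total_views):
        """Share of sampled content views that contained violating content."""
        return violating_views / total_views

    # Illustrative numbers only: 8 violating views in a 10,000-view sample
    # corresponds to the reported 0.08 per cent figure.
    print(f"{prevalence(8, 10_000):.2%}")   # prints 0.08%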

Facebook suggested that the reduction in such content was not the result of less of it being posted, but of changes to the news feed that meant such posts were better flagged and less likely to be recommended to users.

"Each post is ranked by processes that take into account a combination of integrity signals, such as how likely a piece of content is to violate our policies, as well as signals we receive from people, such as from surveys or actions they take on our platform like hiding or reporting posts," Guy Rosen, Facebook's vice president of integrity, wrote in a blog post.

"Improving how we use these signals helps tailor News Feed to each individual’s preferences, and also reduces the number of times we display posts that later may be determined to violate our policies."

Facebook also said that it had been able to be more proactive about such content in some specific areas, spotting it and taking action before users report it. It highlighted bullying and harassment, where its "proactive rate" had increased from 25 per cent to 49 per cent.
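
The proactive rate is the share of actioned content that Facebook found and flagged before any user reported it. A simple illustrative calculation, with invented figures, shows how a 49 per cent rate would be derived:

    def proactive_rate(found_before_report, total_actioned):
        """Share of actioned posts found before any user reported them."""
        return found_before_report / total_actioned

    # Illustrative figures: 490 of 1,000 actioned bullying posts found
    # proactively would match the reported 49 per cent rate.
    print(f"{proactive_rate(490, 1_000):.0%}")   # prints 49%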

It said that improvement was driven by advances in its artificial intelligence systems, which are able to spot such posts and remove them.

The company did admit that it was still working with a diminished number of people because of the coronavirus outbreak. "We anticipate our ability to review content will be impacted by COVID-19 until a vaccine is widely available," Mr Rosen said, though he said the company was "slowly continuing to regain our content review workforce globally".

Without a full workforce, Facebook claims that it is being forced to "prioritize the most harmful content for our teams to review, such as suicide and self-injury content".
