Facebook blames coronavirus for failure to remove child nudity from Instagram

Tech giant says it was able to remove 7 million pieces of content containing harmful Covid-19 misinformation

Anthony Cuthbertson
Wednesday 12 August 2020 16:27 BST

Facebook failed to remove child nudity and sexual exploitation content from Instagram due to the impact of the coronavirus pandemic on its content moderation system, the tech giant has claimed.

The company sent content reviewers home in March as part of Covid-19 containment measures, without adequate work-from-home systems in place, it revealed in its quarterly Community Standards Enforcement Report.

This forced Facebook to prioritise reviewing certain types of harmful content over others on both its main social network and Instagram between April and June.

Automated systems are also in place to detect harmful content, but human reviewers are still needed to assess and remove content flagged by users.

"We rely heavily on people to review suicide and self-injury and child exploitative content, and help improve the technology that proactively finds and removes identical or near-identical content that violates these policies," Guy Rosen, Facebook's Vice President of Integrity, wrote in a blog post.

"With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram."

Facebook said it had since brought "many reviewers" back online from home, while also bringing "a smaller number" back into the office where it is safe to do so.

Despite the impact of the coronavirus pandemic, Facebook said it had made improvements in certain areas such as terrorism and hate speech.

This was due to technological improvements to its moderation systems, which allowed Facebook to take action on 8.7 million pieces of terrorist-related content in the latest quarter, up from 6.3 million in the previous quarter.

Facebook also cracked down on misinformation relating to Covid-19, such as conspiracy theories linking the virus to the roll-out of 5G networks.

"In the past months, we've prioritised work around harmful content related to Covid-19 that could put people at risk," Mr Rosen told reporters during a press call on Tuesday.

"If some misinformation poses imminent harm, we remove it. So from April through June, we removed over 7 million pieces of harmful Covid-19 misinformation from Facebook and Instagram."

Warning labels were also placed on certain posts that contained misleading or false information about coronavirus.

Facebook has recently joined other tech platforms in flagging posts shared by US President Donald Trump for breaking its rules on misinformation, though those actions fell outside the period covered by the report.

"As the Covid-19 pandemic evolves, we'll continue adapting our content review process and working to improve our technology and bring more reviewers back online," the report concluded.
