
Facebook reportedly said no problem even as internal memos flagged polarising content and hate speech in India

Facebook held that there was ‘comparatively low prevalence of problem content’ on the platform

Vishwam Sankaran
Wednesday 10 November 2021 10:07 GMT
Indian Prime Minister Narendra Modi (L) and Facebook CEO Mark Zuckerberg shake hands after a town hall meeting at Facebook headquarters (AFP via Getty Images)


Facebook brushed aside persistent problems in its operations in India even as internal memos flagged the prevalence of hate speech and polarising content on the platform, according to a new report.

Three internal memos exchanged by Facebook staff between 2018 and 2020 pointed out numerous red flags in the platform’s operations in India, including a “constant barrage of polarising nationalistic content,” misinformation, and content denigrating minority communities in the country, according to the report published on Wednesday in the Indian Express newspaper.

However, despite these red flags, raised by staff mandated to undertake oversight functions, Facebook held that such instances of hate speech and problematic content were of relatively low prevalence on the platform.

Redacted versions of Facebook internal documents leaked to the US Securities and Exchange Commission by former Facebook product manager and whistleblower Frances Haugen revealed that two reports within the company flagged hate speech and “problem content” in January-February 2019, ahead of parliamentary elections.

In another report presented in August 2020, staff reportedly mentioned that Facebook’s artificial intelligence tools failed to pick up problematic content and hate speech as they were unable to “identify vernacular languages.”

In a 2019 internal review where these problems were flagged, Chris Cox, then vice president of Facebook, said: “Survey tells us that people generally feel safe. Experts tell us that the country is relatively stable,” according to minutes of the meeting included in the leaked documents.

Mr Cox, who quit the company in March 2019 and later rejoined as chief product officer, nevertheless noted that problems in sub-regions within India may be lost at the country level.

The company did not respond to the Indian Express’s requests for comment on Mr Cox’s meeting and the internal memos.

An earlier report on the documents leaked by Ms Haugen also noted that Facebook saw India as one of the most “at-risk countries” in the world.

It identified Hindi and Bengali as priority languages for “automation on violating hostile speech,” adding that the company did not have enough local-language moderators or content-flagging personnel to stop misinformation that spilled over into real-world violence.

In 2019, a Facebook researcher who set up a “test account” to assess the videos and groups recommended by Facebook’s algorithm found that the account’s feed was inundated with hate speech, misinformation, and posts glorifying violence, according to the New York Times.

“Following this test user’s news feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the researcher wrote.

In one of the internal memos, part of a discussion between Facebook staff and executives, employees questioned how the platform did not have “even basic key word detection set up to catch” hate speech.

They also raised questions about how the company planned to “earn back” the trust of its employees from minority communities, especially after a senior Indian Facebook executive had shared on her personal profile a post that many felt “denigrated” Muslims.

In a statement, Facebook reportedly said it had “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali,” which it said had “reduced the amount of hate speech that people see by half” in 2021.

“…we are improving enforcement and are committed to updating our policies as hate speech evolves online,” a company spokesperson said.

The Independent has reached out to Facebook for comment.
