Trump misinformation is among Facebook’s most popular posts despite fact-checking labels

‘The fact that we refuse to hold accounts with millions of followers to higher standards to everyone else (and often they get lower standards) is one of the most upsetting things about working here,’ one employee said

Adam Smith
Tuesday 17 November 2020 14:58 GMT

Labels placed on President Donald Trump’s posts on Facebook have done little to stop their spread, according to internal Facebook data.

The president has used his platform on Facebook to share misinformation about the recent US presidential election, which was won by Democrat Joe Biden.

On 5 November, while election votes were still being tallied, Mr Trump shared unfounded claims about the integrity of the election, specifically mail-in ballots, which he incorrectly called “rigged” and “fraudulent”.

Two posts the president made performed nearly nine times better than his average post.

Facebook attempted to steer users towards more reputable sources of information. The company labels posts that contain fraudulent claims about the election, but the labels do not say the claims are inaccurate.

“Final results may be different from the initial vote counts, as ballot counting will continue for days or weeks after polls close,” an information box on Facebook reads.

However, internal data posted on the company’s internal discussion boards, as reported by BuzzFeed, suggests these measures have been inadequate.

“We have evidence that applying these informs to posts decreases their reshares by [approximately] 8 per cent,” one of the company’s data scientists said.

“However given that Trump has SO many shares on any given post, the decrease is not going to change shares by orders of magnitude.”

The data scientist added that the labels were not expected to limit the scale of the post, but rather “to provide factual information in context to the post.”

The decision has drawn criticism of Facebook from both employees and users.

“Is there any indication that the ‘this post might not be true’ flags have continued to be effective at all in slowing misinformation spread?” asked a Facebook employee on the company’s internal message board.

“I have a feeling people have quickly learned to ignore these flags at this point. Are we limiting reach of these posts at all or just hoping that people will do it organically?”

Another employee pointed to the high number of people sharing Mr Trump’s posts claiming he had won the election.

“It doesn’t feel like people are being deterred all that much by our mild dosage of context,” they said.

“The fact that we refuse to hold accounts with millions of followers to higher standards to everyone else (and often they get lower standards) is one of the most upsetting things about working here,” another said.

“Ahead of this election we developed informational labels, which we applied to candidate posts with a goal of connecting people with reliable sources about the election,” Facebook said in a statement.

"This was just one piece of our larger election integrity efforts, which also included creating a one-stop-shop Voting Information Center, helping over 4 million people register to vote, building the largest global fact-checking network of any platform, removing over 265,000 pieces of content for violating our voter suppression policies and removing dozens of networks engaged in coordinated inauthentic behavior." 

This is not the only instance where Facebook employees have spoken out against the company’s practices.

In June, the social media giant reportedly fired an employee who had criticised Mark Zuckerberg’s decision not to take action against inflammatory posts by Donald Trump.

In August, another senior Facebook engineer who had collected evidence of the company giving preferential treatment to right-wing pages was reportedly fired for breaking its “respectful communication policy”.

Facebook has long maintained a policy of not fact-checking politicians. CEO Mark Zuckerberg criticised Twitter for fact-checking the president, saying in May 2020 that social media platforms should not be the “arbiter[s] of truth”.

However, US senators have recently taken issue with Facebook’s attempts to redirect users to factual information, incorrectly calling them a form of censorship.

“When I use the word 'censor' here, I'm meaning blocked content, fact check, or labeled content, or demonetised websites of conservative, Republican, or pro-life individuals or groups or companies,” Republican Senator Mike Lee said in a Senate hearing. 

Yet under a Biden administration, Facebook may face repercussions for its decisions.

Bill Russo, an adviser to the president-elect’s press team, has said Facebook has been too slow to delete debunked claims about voter fraud.

“If you thought disinformation on Facebook was a problem during our election, just wait until you see how it is shredding the fabric of our democracy in the days after,” Mr Russo tweeted on Monday night.
