Facebook unveils tool to let users see if they interacted with Russian trolls

More than 120 million people were exposed to posts by Russian-linked actors

Jeremy B. White
San Francisco
Saturday 23 December 2017 02:05 GMT
Facebook and other tech giants have faced questions about how their platforms enabled the spread of disinformation, in the wake of Kremlin efforts to disrupt the presidential election (Reuters)

Did you “like” a Russian troll? Facebook has rolled out a new tool to let you find out.

The social media titan unveiled a mechanism for users to see if they interacted with pages created by the Internet Research Agency, identified by American intelligence as a Kremlin-linked troll farm.

The move towards transparency comes after Facebook revealed that some 126 million Americans were exposed to posts promulgated by Russian-linked actors, who disseminated divisive and false posts as part of an effort to sow discord ahead of the 2016 presidential election.

Examples released by Congress show Russian-linked posts promoting Donald Trump and Bernie Sanders, assailing Hillary Clinton and weighing in on various sides of issues like immigration and gun control.

Many pages were designed to resemble grassroots organisations that did not exist.

Senator Pat Leahy shows a fake social media post as representatives of Twitter, Facebook and Google testify before Congress (Reuters/Jonathan Ernst)

After American intelligence officials concluded that the Kremlin mounted a concerted effort to disrupt the 2016 presidential election, Facebook and other tech giants have faced questions about how their platforms enabled the spread of disinformation.

Since disclosing the vast reach of the propaganda effort, Facebook has faced a barrage of questions about how thoroughly it vets content and whether it adequately shields users from misleading content.

Among the changes it has instituted, the site has pledged to start releasing more information about who pays for political content.

It also announced that it would hire an additional 10,000 people to monitor content, an effort to “identify and remove content violations and fake accounts”, and moved to block content flagged as false by fact-checking organisations.
