
Facebook unveils tool to let users see if they interacted with Russian trolls

More than 120 million people were exposed to posts by Russian-linked actors

Jeremy B. White
San Francisco
Saturday 23 December 2017 02:05
Facebook and other tech giants have faced questions about how their platforms enabled the spread of disinformation, in the wake of Kremlin efforts to disrupt the presidential election

Did you “like” a Russian troll? Facebook has rolled out a new tool to let you find out.

The social media titan unveiled a mechanism for users to see if they interacted with pages created by the Internet Research Agency, identified by American intelligence as a Kremlin-linked troll farm.

The move towards transparency comes after Facebook revealed that some 126 million Americans were exposed to posts promulgated by Russian-linked actors, who disseminated divisive and false posts as part of an effort to sow discord ahead of the 2016 presidential election.

Examples released by Congress show Russian-linked posts promoting Donald Trump and Bernie Sanders, assailing Hillary Clinton and weighing in on various sides of issues like immigration and gun control.

Many pages were designed to resemble grassroots organisations that did not exist.

Senator Pat Leahy shows a fake social media post as representatives of Twitter, Facebook and Google testify before Congress (Reuters)

After American intelligence officials concluded that the Kremlin mounted a concerted effort to disrupt the 2016 presidential election, Facebook and other tech giants have faced questions about how their platforms enabled the spread of disinformation.

Since disclosing the vast reach of the propaganda effort, Facebook has faced a barrage of questions about how thoroughly it vets content and whether it adequately shields users from misleading content.

Among changes the site has instituted, it has pledged to start releasing more information about who pays for political content.

It also announced that it would hire an additional 10,000 people to monitor content, an effort to “identify and remove content violations and fake accounts”, and moved to block content flagged as false by fact-checking organisations.
