Social media is failing miserably at battling the spread of coronavirus misinformation

Health professionals are developing warning systems to flag Russian bot hashtags and extremist pseudoscience. But perhaps the true responsibility for purging these evils lies elsewhere

Rachel Thomas
Tuesday 05 May 2020 19:35 BST

A Google search for “coronavirus”, unlike your average query on the engine, returns official sources: governments, international bodies such as the World Health Organisation (WHO) and the mainstream media. You are not bombarded with advertisements or product placements, as you would be after a typical search, and you would be hard-pressed to find a link to an unreliable source of information on the pandemic.

Social media sites are similarly pushing respectable, reliable content from equally official sources. On opening the app, Instagram directs UK users to the NHS website and US users to the CDC (Centers for Disease Control and Prevention).

On Pinterest, remarkably, the only memes on “Covid-19” are those created by internationally renowned health organisations such as the WHO. Facebook’s “Information Centre” similarly points users towards official sources, and the Facebook-owned app WhatsApp has imposed strict new limits on message forwarding in an attempt to reduce the spread of misinformation.

It seems that search engines and social media platform owners alike are in a race to become the most trusted source of information on the virus, yet ironically, the whole architecture of these sites has historically been designed to spread popular, rather than true, information.

Social media has previously been exactly the place for extremist pseudoscience to thrive, precisely because it rewards a different method from rigorous science. What’s more, the scientific community has regularly seen its research systematically and deliberately abused on these platforms, with automated “Russian bots” spreading unscientific information and far-right extremists deploying pseudoscience unchecked by platform owners.

Deliberate misinformation around vaccines, for example, has historically been used by international political organisations to play on people’s deepest fears and to sow social division. Using hashtags such as “vaxxed” and “learn the risk”, automated social media accounts have periodically posted polarising messages across platforms to gradually lure people towards more extreme anti-vaccination content. While this remains a fairly serious problem in the UK, the US and Italy have been particularly susceptible to the sharing of pseudoscience such as anti-vax messages on social media, which is tragic given that they are two of the places worst affected by coronavirus.

Another example of the damage done by the misuse of scientific research can be seen in the Christchurch shootings in New Zealand in March 2019, carried out by a killer who published a rambling manifesto online referencing a conspiracy theory about the threat of multiculturalism. It is astonishing how far this idea, which originated on the fringes in France, has grown online. Mass killers in Germany and the US alike have cited this “great replacement” theory and treated it as academically reliable. Built on distorted demographic statistics and misleading figures on migrant crime, an idea that originated 10 years ago was referenced 1.5 million times online between April 2012 and 2019.

Popularity and profitability, rather than “truth”, have historically driven the algorithms that determine much of the content we see on social media. Dr Robert Elliot Smith, an expert in evolutionary algorithms, senior research fellow in computer science at University College London and author of Rage Inside the Machine (2019), points out that online platforms have been designed with profit in mind.

He claims that this often leads to what he calls “informational segregation” (stereotyping over the internet) and says it can be exploited in various fields to reinforce existing biases (such as fear around vaccinations). For this reason, Smith argues, non-scientific ideas have been encoded deep within our technological infrastructure, challenging the idea that technology is an apolitical and amoral force. He reminds us that “technology including, and perhaps especially, computation doesn’t exist in a vacuum”.

Carl Miller, research director of the Centre for Analysis of Social Media at the think tank Demos, similarly refers to the phenomenon of “Gangnam Style content”, which describes YouTube’s previous tendency to serve up popular content linked to what a user has viewed, until pretty much everything that was ever watched on the platform ended up looping back to the popular “Gangnam Style” music video.

To counteract this, YouTube made the seemingly trivial decision to serve up more niche content related to a given search, which ultimately “pushed enormous amounts of attention to all these niche voices which before were not getting much attention in mainstream press”.

“Suddenly [they] were then getting millions and millions of clicks,” Miller said during a BBC podcast on the misinformation virus back in January – including potentially harmful content about eating disorders, diet pills and the like.

If platforms have historically not intervened to stop misinformation, why now?

The short answer is, in part, that platforms feel they can be far more publicly aggressive in fighting coronavirus misinformation than they have been with political misinformation. Claire Wardle, for example, of the non-profit organisation First Draft, says: “There are no two sides with coronavirus, so they don’t have people on the other side saying ‘we want this’, the way you do with anti-vaxxers or political misinformation.” It is also relatively straightforward for platforms to select trusted sources such as the WHO without appearing partisan.

But the long answer is that they haven’t managed to curb the misinformation virus at all. Research shows that the vast majority of false information about the virus appears online. In a recent study, Oxford’s Reuters Institute found that 88 per cent of false or misleading claims about coronavirus appeared on social media platforms, compared with just 9 per cent on television and 8 per cent in news outlets. It is no surprise, then, that the Pew Research Center found that nearly 30 per cent of US adults believe Covid-19 was developed in a lab, despite there being no evidence for this theory. Conspiracy theories falsely connecting 5G with the spread of the pandemic have also led to threats, harassment and even petrol bomb attacks.

Perhaps, then, we are giving too much credit to social media platforms and their quest for truth during this pandemic, when their whole ecosystem depends on engagement and virality, with no value placed on accuracy. Now that we are in the throes of a global emergency, they are rushing to be the most reliable sources, but we cannot hide from the reality that they are the main reason misinformation spreads at all.

So what can we do to restore faith in platforms, and to salvage the broken relationship between the scientific community and Silicon Valley?

Currently, health professionals are developing warning systems to flag hashtags linked to Russian bots and to identify clusters of negativity around extremist pseudoscience which they can then engage with. Yet perhaps the responsibility lies not with the social media platforms, governments (though I am cautiously optimistic that they will help with the worst of it) or health professionals, but with the users themselves, whose data feeds the algorithms.

Hannah Fry, author of Hello World: How to Be Human in the Age of the Machine, cautions: “Whenever we use an algorithm, especially a free one, we need to ask ourselves about the hidden incentives … What is this algorithm really doing? Is this a trade I’m comfortable with?”

Perhaps the best answer is to see the relationship as a partnership between algorithms and human judges, with regulatory boards overseeing the data industry much as the US Food and Drug Administration regulates pharmaceuticals.

As Carl Miller says: “Platform engineering is a history of unintended consequences.” Pandemics, too, teach us lessons we did not foresee, and I can already sense that a desire for more scientifically accurate content will be paramount moving forward.

Rachel Thomas is a researcher in ethics and technology
