The White House announced this week it was launching a sort of hotline for people to report their experiences of censorship on social media. Americans are invited to share their stories through an online questionnaire if they “suspect political bias” in enforcement actions taken against them by companies like Facebook, Google and Twitter.
Setting aside the enormous implications this has for the privacy of individual users (what better way for a government to identify all the people plotting its downfall than to get them to fill in a form revealing political leanings?), this intervention by Donald Trump demonstrates just how significant the actions of social media platforms are now considered in controlling the speech of users.
The White House move follows a decision by Facebook at the start of this month to ban several high-profile right-wingers, including conspiracy theorist Alex Jones, from its platforms. Others barred included Nation of Islam leader Louis Farrakhan, who has repeatedly made antisemitic statements.
The immediate cry from those who were banned was that they had been censored. Their supporters argue that the platforms, staffed as they see it by left-leaning liberals, are simply pursuing a political agenda to silence the legal speech of those they oppose.
Silencing those with whom you disagree is, without doubt, censorship. It is of most concern when enacted by those in government – as we see in countries such as Saudi Arabia where journalists and government critics are jailed, and even killed, for their work.
But governments are not the only ones to exert power. Social media platforms have enormous influence over what we see and how we see it. Therefore we should be concerned about the unilateral actions taken by the platforms to limit legal speech and approach with extreme caution any solutions that suggest it’s somehow easy to eliminate only “bad” speech.
Those applauding the banning of Jones et al might want to pause to consider that it is not only far-right drum-bangers who lose access to their accounts.
Last month, an article in USA Today highlighted the example of US high school teacher and activist Carolyn Wysinger, whose post in response to actor Liam Neeson saying he’d roamed the streets hunting for black men to harm was deleted by Facebook for violating its community standards. “White men are so fragile,” the post read, “and the mere presence of a black person challenges every single thing in them.”
Other activists have had posts deleted for highlighting racist messages they receive, while lesbians have been banned for using the word “dyke” in their posts.
In the UK, gender critical feminists who have quoted academic research on sex and gender identity have had their Twitter accounts suspended for breaching the organisation’s hateful conduct policy, while threats of violence towards women often go unpunished.
Ultimately it is up to the platforms, which are private companies, to set their terms and conditions. And they operate according to the profit motive. If it turns off advertisers, a social media account might be banned, just like a TV show might drop a star who has been called out for misogyny or racism.
But the ubiquity of social media offers a challenge. That private space feels like a public space, which is why some claim Twitter and Facebook should be treated like a utility, open to all. That puts a huge responsibility on the companies involved.
If we are to ensure that all our speech is protected, including speech that calls out others for engaging in hateful conduct, then their policies and procedures need to be clear, accountable and non-partisan. Any decisions to limit content should be taken by, and tested by, human beings. Algorithms simply cannot parse context and nuance well enough to distinguish, say, racist speech from anti-racist speech.
We need to tread carefully. While an individual who incites violence towards others should not (and does not) enjoy the protection of the law, nor a platform in any kind of media, the problem of those who advocate hate cannot be solved simply by banning them.
I told a parliamentary committee examining free speech and democracy earlier this week that we already have many laws that prohibit speech. In our drive to stem the tide of hateful speech online, we should not be creating new laws, nor should we rush to welcome an ever-widening definition of speech that is banned by social media.
The risk is that by ushering in new tools to tackle the problem, we end up eliminating much of the positive. Many of the mechanisms that help generate, for example, social media “pile-ons” are also the mechanisms that enable activists to convene around an urgent issue – as happened with the Twitter support generated for a young woman from Saudi Arabia who turned to the Twitter community for help when seeking asylum.
We do need to do something to tackle hateful attitudes on social media, but banning more and more speech is not the answer. And the response from Trump’s White House is a signal that the inconsistent silencing of those who are unwelcome on these platforms will only add to polarisation.
Jodie Ginsberg is the chief executive of UK-based freedom of expression organisation Index on Censorship.