Far-right Twitter and Facebook users make secret code to avoid censorship

Communities aim to use apparently innocent words like Google, Skittle and Skype as substitutes for racist slurs

Andrew Griffin@_andrew_griffin
Monday 03 October 2016 17:59

People are using a secret code to discuss the far-right without being censored by social networks.

An entire new vocabulary has developed online to facilitate racist discussions that go unnoticed by the automated tools usually used to block them.

And by making that same language go mainstream, the far-right internet users hope that they can damage companies by associating them with racist slang.

Twitter users and those on other networks are attempting to use a whole range of words – like Google, Skype and Skittle – in place of traditional racist slurs.

The code appears partly intended to hide the messages from the networks' automated monitoring. Since the words used are so apparently innocent and so commonly used, it would be next to impossible for any network to filter them out.

Some of the words appear to be connected to previous racist discourse – the use of "skittle" to mean someone who is Muslim or Arab appears to be a reference to the idea, invoked in a recent Donald Trump Jr tweet, that refugees from predominantly Muslim countries can be compared to sweets.

In fact, many of the users appear to reference Mr Trump in recent tweets, though none of the terms have actually been used or endorsed by the campaign.

“Google” doesn’t appear to have begun as a codeword so much as the opposite: a deliberate move by 4chan users to associate the word with racism. That emerged during what users called “Operation Google” – by using the company's name as if it were a slang word for black people, they hoped to force the search engine to ban its own name.

That was launched in response to Google’s Jigsaw, which uses AI technology to stop harassment and abuse online.

Given that the system was powered by artificial intelligence, users pointed out, it could be tricked into making false associations so long as words were used in the right context.
