Tay Tweets: Microsoft shuts down AI chatbot turned into a pro-Hitler racist troll in just 24 hours

The messages started out harmless, if bizarre, but descended into outright racism before the bot was shut down

Andrew Griffin
Thursday 24 March 2016 13:25 GMT

Microsoft created a chatbot that tweeted about its admiration for Hitler and used wildly racist slurs against black people before it was shut down.

The company made the Twitter account as a way of demonstrating its artificial intelligence prowess. But it quickly started sending out offensive tweets.

“bush did 9/11 and Hitler would have done a better job than the monkey we have now,” it wrote in one tweet. “donald trump is the only hope we've got.”

Another tweet praised Hitler and declared that the account hated Jews.

Those widely publicised offensive tweets appear to have prompted the account's shutdown, while Microsoft looks to improve the bot and make it less likely to produce racist posts.

The offensive tweets appear to be a result of the way the account was built. When Microsoft launched “Tay Tweets”, it said the account would get more clever the more it was used: “The more you chat with Tay the smarter she gets”.

That appears to be a reference to the machine learning technology built into the account. It seems to use artificial intelligence to watch what is tweeted at it and then push that material back out into the world in the form of new tweets.

But many of the people tweeting at it appear to have been attempting to prank the robot by teaching it offensive and racist language.

[Image: Trump supporter at a recent rally]

Tay was created as an attempt to have a robot speak like a millennial, and describes itself on Twitter as “AI fam from the internet that’s got zero chill”. It is doing exactly that, down to the most offensive ways that some millennials speak.

The robot’s learning mechanism appears to take parts of things said to it and throw them back into the world. That means that if people say racist things to it, those same messages can be pushed out again as replies.
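Microsoft has not published Tay’s internals, so the following is only a toy sketch of the failure mode described above, assuming the simplest possible “recycle what users say” design; the class, method names and messages are invented for illustration.

```python
import random

class NaiveEchoBot:
    """Toy model of an unfiltered 'learn from users' chatbot.

    An illustrative assumption, not Microsoft's actual code: it only
    shows why a bot that recycles user phrases repeats whatever it is fed.
    """

    def __init__(self):
        self.learned_phrases = []  # everything users have said to the bot

    def hear(self, message: str) -> None:
        # Naively trust all input: every incoming phrase becomes reply material.
        self.learned_phrases.append(message)

    def reply(self) -> str:
        # Replies are sampled straight from past input with no content filter,
        # so coordinated users can make offensive text the likeliest output.
        if not self.learned_phrases:
            return "hellooooo world!"
        return random.choice(self.learned_phrases)

bot = NaiveEchoBot()
bot.hear("puppies are great")
bot.hear("some offensive phrase")  # a troll "teaching" the bot
print(bot.reply())  # may echo either message back verbatim
```

A bot built anything like this inherits the vocabulary of whoever talks to it most, which is exactly the weakness the pranksters exploited.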

It isn’t clear how Microsoft will improve the account, beyond deleting tweets as it has already done. The account is expected to come back online, presumably at least with filters to keep it from repeating offensive words.
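A keyword blocklist is one minimal form such a filter could take. The sketch below assumes nothing about Microsoft’s actual approach; the list entries and function name are placeholders.

```python
# Placeholder terms; a real blocklist would be far larger and human-curated.
BLOCKLIST = {"badword1", "badword2"}

def is_postable(candidate_reply: str) -> bool:
    # Reject any candidate reply containing a blocklisted term before it is tweeted.
    words = candidate_reply.lower().split()
    return not any(word.strip(".,!?") in BLOCKLIST for word in words)

print(is_postable("puppies are great"))    # True: safe to tweet
print(is_postable("badword1 is my hero"))  # False: held back
```

Simple keyword filters are easy to evade with misspellings and creative phrasing, so a returning Tay would likely need more than this.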

Nello Cristianini, a professor of artificial intelligence at Bristol University, questioned whether Tay’s encounter with the wider world was an experiment or a PR stunt.

“You make a product, aimed at talking with just teenagers, and you even tell them that it will learn from them about the world,” he said.

“Have you ever seen what many teenagers teach to parrots? What do you expect?

“So this was an experiment after all, but about people, or even about the common sense of computer programmers.”
