
TayTweets: Racist Microsoft chatbot briefly returns to Twitter

Microsoft said the account was 'inadvertently' activated as they tried to make adjustments to the software

Doug Bolton
Wednesday 30 March 2016 13:07 BST
The face of 'Tay', Microsoft's Twitter chatbot (Microsoft)

Microsoft's racist chatbot, Tay, has returned to Twitter, albeit briefly.

After being shut down last week for using racial slurs, praising Hitler and calling for genocide, the artificial 'intelligence' came back, tweeting a number of nonsensical posts and boasting about smoking cannabis in front of the police before being turned off.

Tay's account was made public again on Wednesday morning, but soon appeared to be suffering from a glitch, repeatedly tweeting the message: "You are too fast, please take a rest..."

Tay, who is modelled on a millennial teenage girl, then tweeted: "Kush! [i'm smoking kush in front the police]," referring to a class of particularly potent cannabis strains.

A few foul-mouthed tweets later, the account was made private once again, and the tweets are no longer visible to the public.

In a statement, Microsoft said: “Tay remains offline while we make adjustments. As part of testing, she was inadvertently activated on Twitter for a brief period of time."

The trouble with Tay comes from the way the bot learns to communicate. After watching and analysing how human users communicate with it, it simply regurgitates their words and messages in different forms, giving the impression that a 'real' conversation is taking place. As Microsoft puts it, "the more you talk, the smarter Tay gets."
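The failure mode is easy to caricature in a few lines of code. Below is a minimal, hypothetical sketch in Python (not Microsoft's actual system, whose design is not public) of a bot that treats every incoming message as a valid example of human conversation and replies by regurgitating what it has absorbed. Feed it abuse, and abuse is what comes back out.

```python
import random

class ParrotBot:
    """A toy 'repeat-after-me' learner illustrating the failure mode.
    This is an illustrative sketch, not Microsoft's actual system."""

    def __init__(self):
        self.corpus = []  # every message the bot has ever seen

    def learn(self, message: str):
        # All input is stored as a valid example of human conversation,
        # with no filtering of offensive content.
        self.corpus.append(message)

    def reply(self) -> str:
        # Replies are just regurgitated (lightly shuffled) past inputs,
        # so a coordinated spam campaign directly shapes the output.
        if not self.corpus:
            return "hellooooo world"
        words = random.choice(self.corpus).split()
        random.shuffle(words)
        return " ".join(words)

bot = ParrotBot()
bot.learn("humans are great")
bot.learn("cats are great too")
print(bot.reply())  # echoes whatever it was fed, good or bad
```

Without a filter between "learn" and "reply", the sketch, like Tay, has no notion of which inputs are acceptable to imitate.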

It's this machine learning process which led to the account's downfall. In a concerted effort, a number of Twitter users began spamming the account with a variety of racist and sexist messages. Assuming this to be the way in which humans communicate, Tay simply spat their messages back out at other users.

Microsoft didn't catch the controversial tweets before they were posted, and the company's vice president of research, Peter Lee, was forced to apologise.

Notably, Microsoft has launched similar chatbots on social networks in China without facing similar problems. Clearly, Twitter users aren't so polite.
