
Why Amazon Alexa told a 10-year-old to do a deadly challenge

Alexa gives answers it finds on the web and answers provided by users, but both sources have proved unreliable in the past

Adam Smith
Wednesday 29 December 2021 14:40 GMT

Amazon’s Alexa recently recommended to a 10-year-old girl that she touch a coin to the exposed prongs of a plug in an electrical outlet.

The voice assistant gave the response when the child asked for a ‘challenge’ from the Echo speaker.

“Here’s something I found on the web,” Alexa replied. “The challenge is simple: plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.”

The dangerous activity, known as "the penny challenge", began circulating on TikTok and other social media websites about a year ago, the BBC reported.

Why did Alexa make the suggestion?

The smart speaker made the suggestion because, when Alexa does not have a stored answer to a question, it pulls information from the web to generate a response.

In this instance, Alexa was using information it found on a site called Our Community Now, a Colorado organisation.

That website was specifically warning people not to undertake the challenge: as Our Community Now wrote after the Alexa incident, the original article described the challenge as “stupid”, “disturbing” and told readers to “NOT attempt this”. Our Community Now noted that Alexa had pulled the description “without proper context of the situation”.

The Independent has reached out to the website, and Amazon, for more information.

Voice assistants such as Alexa and Google Assistant convert a voice command into text, parse that text into an intent, and finally work out possible answers to what has been asked. This is done using complex algorithms and natural language processing.
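The pipeline described above can be sketched in miniature. This is an illustrative assumption of how such a system is wired together, not Amazon’s or Google’s actual implementation; every function name here is hypothetical.

```python
# Illustrative sketch of a voice-assistant pipeline:
# speech -> text -> intent -> response, with a web fallback.
# All names and logic are assumptions for illustration only.

def transcribe(audio: bytes) -> str:
    """Stand-in for automatic speech recognition (ASR)."""
    return "tell me a challenge"  # e.g. the child's request

def parse_intent(text: str) -> dict:
    """Stand-in for natural language understanding (NLU)."""
    return {"intent": "get_challenge", "query": text}

# Curated responses the assistant already knows; empty here.
KNOWN_RESPONSES: dict[str, str] = {}

def web_search(query: str) -> str:
    """Stand-in for an unvetted web-snippet lookup."""
    return f"(top-ranked snippet for: {query})"

def respond(intent: dict) -> str:
    # With no curated answer, fall back to a web snippet --
    # the step that surfaced the "penny challenge" text.
    if intent["intent"] in KNOWN_RESPONSES:
        return KNOWN_RESPONSES[intent["intent"]]
    return "Here's something I found on the web: " + web_search(intent["query"])

print(respond(parse_intent(transcribe(b""))))
```

The key design point is the final fallback: when no curated response exists, the assistant repeats whatever the web lookup returns, with no check on whether the snippet is safe or true.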

These algorithms, while sophisticated, have been wrong before: in 2018, Siri erroneously brought up a photo of a penis when searching for information about Donald Trump because of an edited Wikipedia article.

Similarly, Amazon’s Alexa has also been criticised for answering questions with Islamophobic and antisemitic conspiracy theories.

“Customer trust is at the center of everything we do and Alexa is designed to provide accurate, relevant, and helpful information to customers,” Amazon said in a statement. “As soon as we became aware of this error, we took swift action to fix it.”

The web is made up of many pages with no central moderation policy, and anyone can publish almost anything online. Popular pages can therefore reach the top of search engine results – and be repeated by smart speakers – without necessarily being accurate.

What can be done about it?

In specific instances like these, Amazon will remove questions from the database of responses Alexa can give.

More generally, Amazon’s Alexa is improved via human review, using recordings from customers’ Echo devices that the company has saved.

“We use your requests to Alexa to train our speech recognition and natural language understanding systems using machine learning”, Amazon says.

“Training Alexa with real world requests from a diverse range of customers is necessary for Alexa to respond properly to the variation in our customers’ speech patterns, dialects, accents, and vocabulary and the acoustic environments where customers use Alexa.

“This training relies in part on supervised machine learning, an industry-standard practice where humans review an extremely small sample of requests to help Alexa understand the correct interpretation of a request and provide the appropriate response in the future.”

Amazon also has an Answers program whereby any Amazon customer can submit responses to unanswered questions.

Using a points-based system, submitted answers gain points from positive user feedback, and higher-rated answers are served in response to questions more often than lower-rated ones.
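The points-based scheme described above can be sketched as a simple vote tally. The scoring rules here are an assumption for illustration; Amazon has not published how Alexa Answers actually weights feedback.

```python
# Hedged sketch of a points-based answer pool: users submit
# answers, vote on them, and the highest-rated answer is the
# one most likely to be read out. Scoring rules are assumed.

from collections import defaultdict

class AnswerPool:
    def __init__(self) -> None:
        self.scores: dict[str, int] = defaultdict(int)

    def submit(self, answer: str) -> None:
        """Register a user-submitted answer with zero points."""
        self.scores.setdefault(answer, 0)

    def vote(self, answer: str, delta: int) -> None:
        """+1 for positive feedback, -1 for negative."""
        self.scores[answer] += delta

    def best(self) -> str:
        """The answer the assistant would most likely give."""
        return max(self.scores, key=self.scores.get)

pool = AnswerPool()
pool.submit("Answer A")
pool.submit("Answer B")
pool.vote("Answer A", +1)
pool.vote("Answer A", +1)
pool.vote("Answer B", -1)
print(pool.best())
```

Note that nothing in such a scheme checks whether an answer is true – coordinated upvoting of a wrong or promotional answer would push it to the top, which is exactly the moderation problem the article goes on to describe.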

However, this places the onus on users to provide and rate accurate answers – a problem social media platforms have long struggled with, and Alexa Answers is no exception.

In 2019, VentureBeat found that inaccurate and asinine answers, as well as answers containing company marketing, repeatedly made their way onto the Alexa Answers platform.

“High quality answers are important to us, and this is something we take seriously — we will continue to evolve Alexa Answers,” an Amazon spokesperson told VentureBeat, but the company was reportedly “cagey” about providing further details about how the platform worked. It also remains unclear whether users who try to troll the system face any punishment.

The Independent has contacted Amazon for more information about how it moderates the Alexa platform, whether it takes any pre-emptive curation of Alexa, and how it ranks answers to questions in general before they are given in reply.
