
Brain-computer interface breakthrough sees thoughts translated into speech in scientific first

Neuroengineers from Columbia University reconstruct intelligible speech from a person's brain activity using artificial intelligence

Anthony Cuthbertson
Tuesday 29 January 2019 21:15 GMT
Researchers translated thoughts directly into speech in scientific first (iStock)

Clear, intelligible speech has been produced from computer processing of human brain activity by scientists for the first time.

Researchers at the Zuckerman Institute at Columbia University were able to reconstruct the words a person heard by monitoring their brain activity.

The breakthrough is an important step towards creating a brain-computer interface capable of reading the thoughts of people who are unable to communicate verbally.

"Our voices help connect us to our friends, family and the world around us, which is why losing the power of one's voice due to injury or disease is so devastatting," said Professor Nima Mesgarani, a principal investigator at Columbia University who led the study.

He added: "We have a potential way to restore that power. We've shown that, with the right technology, these people's thoughts could be decoded and understood by any listener."

Prof Mesgarani and his team used artificial intelligence to recognise the patterns of activity that appear in someone's brain when they listen to someone speak.

Using a computer algorithm similar to those found in smart assistants such as Amazon's Alexa and Apple's Siri, the neuroengineers were able to synthesise speech from these brain patterns in a robotic voice.

The algorithm, called a vocoder, was taught using epilepsy patients treated by Dr Ashesh Dinesh Mehta at the Northwell Health Physician Partners Neuroscience Institute.

"Working with Dr Mehta, we asked epilepsy patients already undergoing brain surgery to listen to sentences spoken by different people, while we measured patterns of brain activity. These neural patterns trained the vocoder," said Prof Mesgarani.

When the technology was tested, listeners were able to understand the reconstructed speech around 75 per cent of the time.

"The sensitive vocoder and powerful neural networks represented the sounds the patients had originally listened to with surprising accuracy," said Prof Mesgarani.

The technology could ultimately lead to a wearable brain-computer interface that translates an individual's thoughts, such as 'I need a glass of water', directly into synthesised speech or text.

"This would be a game changer," said Prof Mesgarani. "It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them."
