Decoding brain activity into speech


01 May 2019





A recent paper in Nature reports on a new technology created by UC San Francisco neuroscientists that translates neural activity into speech. Although the technology was trialled on participants with intact speech, the hope is that it could be transformative in the future for people who are unable to communicate as a result of neurological impairments.

The researchers asked five volunteers being treated at the UCSF Epilepsy Center, who had electrodes temporarily implanted in their brains, to read several hundred sentences aloud while their brain activity was recorded.

Based on the audio recordings of participants’ voices, the researchers used linguistic principles to reverse engineer the vocal tract movements needed to produce those sounds: pressing the lips together, tightening the vocal cords, shifting the tip of the tongue to the roof of the mouth, then relaxing it, and so on.

This detailed mapping of sound to anatomy allowed the scientists to create a realistic virtual vocal tract for each participant that could be controlled by their brain activity. The system comprised two neural networks: a decoder that transforms brain activity patterns produced during speech into movements of the virtual vocal tract, and a synthesizer that converts these vocal tract movements into a synthetic approximation of the participant’s voice.
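To make the two-stage design concrete, here is a minimal sketch of such a decode-then-synthesize pipeline in PyTorch. The layer types, electrode count, and feature dimensions below are illustrative assumptions for this example, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Stage 1: maps recorded neural activity to articulatory
    (virtual vocal tract) movement features."""
    def __init__(self, n_electrodes=256, n_articulatory=32, hidden=100):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, hidden,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulatory)

    def forward(self, neural):           # (batch, time, n_electrodes)
        h, _ = self.rnn(neural)
        return self.out(h)               # (batch, time, n_articulatory)

class Synthesizer(nn.Module):
    """Stage 2: maps articulatory movements to acoustic features,
    from which audio can be rendered."""
    def __init__(self, n_articulatory=32, n_acoustic=32, hidden=100):
        super().__init__()
        self.rnn = nn.LSTM(n_articulatory, hidden,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, kinematics):       # (batch, time, n_articulatory)
        h, _ = self.rnn(kinematics)
        return self.out(h)               # (batch, time, n_acoustic)

# Usage: chain the two stages to go from brain activity to acoustic features.
decoder, synthesizer = Decoder(), Synthesizer()
neural = torch.randn(1, 500, 256)        # 500 time steps of simulated recordings
acoustic = synthesizer(decoder(neural))  # (1, 500, 32) acoustic feature frames
```

Splitting the problem in two means the intermediate articulatory representation can be learned and evaluated on its own before any audio is synthesized from it.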

A video of the resulting brain-to-speech synthesis can be found below.

This news highlight is based on a press release from UC San Francisco.

Reference
Anumanchipalli, G. K., Chartier, J., & Chang, E. F. (2019). Speech synthesis from neural decoding of spoken sentences. Nature, 568(7753), 493–498.



