AIhub.org
 

Decoding brain activity into speech

01 May 2019





A recent paper in Nature reports on a new technology created by UC San Francisco neuroscientists that translates neural activity into speech. Although the technology was trialled on participants with intact speech, the hope is that it could be transformative in the future for people who are unable to communicate as a result of neurological impairments.

The researchers asked five volunteers being treated at the UCSF Epilepsy Center, with electrodes temporarily implanted in their brains, to read several hundred sentences aloud while their brain activity was recorded.

Based on the audio recordings of participants’ voices, the researchers used linguistic principles to reverse engineer the vocal tract movements needed to produce those sounds: pressing the lips together, tightening vocal cords, shifting the tip of the tongue to the roof of the mouth, then relaxing it, and so on.

This detailed mapping of sound to anatomy allowed the scientists to create a realistic virtual vocal tract for each participant that could be controlled by their brain activity. The system comprised two neural networks: a decoder that transforms brain activity patterns produced during speech into movements of the virtual vocal tract, and a synthesizer that converts these vocal tract movements into a synthetic approximation of the participant's voice.
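The two-stage structure can be sketched in code. Note this is a minimal illustrative sketch only: the dimensions, the linear/tanh maps, and all names below are placeholder assumptions, not the recurrent network architecture actually used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, for illustration only (not taken from the paper).
N_ELECTRODES = 256   # ECoG recording channels
N_ARTIC = 33         # articulatory features (lips, tongue, jaw, larynx, ...)
N_AUDIO = 32         # acoustic features (e.g. spectrogram bins)
T = 100              # time steps

def decoder(neural, w_dec):
    """Stage 1: map neural activity to virtual vocal-tract movements."""
    return np.tanh(neural @ w_dec)

def synthesizer(kinematics, w_syn):
    """Stage 2: map vocal-tract movements to acoustic features."""
    return kinematics @ w_syn

# Randomly initialised stand-ins for the trained networks.
w_dec = rng.normal(size=(N_ELECTRODES, N_ARTIC)) * 0.01
w_syn = rng.normal(size=(N_ARTIC, N_AUDIO)) * 0.1

ecog = rng.normal(size=(T, N_ELECTRODES))      # recorded brain activity
movements = decoder(ecog, w_dec)               # virtual vocal tract over time
audio_features = synthesizer(movements, w_syn) # synthetic speech features

print(movements.shape)       # (100, 33)
print(audio_features.shape)  # (100, 32)
```

The key design point the sketch preserves is the intermediate articulatory representation: rather than decoding brain activity directly into sound, the pipeline first recovers vocal-tract movements and only then renders audio from them.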

A video of the resulting brain-to-speech synthesis can be found below.

You can read the UC San Francisco press release on which this news highlight is based here.

Reference
Anumanchipalli, G. K., Chartier, J., & Chang, E. F. (2019). Speech synthesis from neural decoding of spoken sentences. Nature, 568(7753), 493.




AIhub is dedicated to free high-quality information about AI.











©2021 - Association for the Understanding of Artificial Intelligence


 











