[Research] Speech synthesis from brain signals

Welcome to the Coping With Epilepsy Forums


Bernard

... scientists are reporting that they have developed a virtual prosthetic voice, a system that decodes the brain’s vocal intentions and translates them into mostly understandable speech, with no need to move a muscle, even those in the mouth.
...
For the new trial, scientists at the University of California, San Francisco, and U.C. Berkeley recruited five people who were in the hospital being evaluated for epilepsy surgery.

Many people with epilepsy do poorly on medication and opt to undergo brain surgery. Before operating, doctors must first locate the “hot spot” in each person’s brain where the seizures originate; this is done with electrodes that are placed in the brain, or on its surface, and listen for telltale electrical storms.

Pinpointing this location can take weeks. In the interim, patients go through their days with electrodes implanted in or near brain regions that are involved in movement and auditory signaling. These patients often consent to additional experiments that piggyback on those implants.

Five such patients at U.C.S.F. agreed to test the virtual voice generator. Each had been implanted with one or two electrode arrays: stamp-size pads, containing hundreds of tiny electrodes, that were placed on the surface of the brain.

As each participant recited hundreds of sentences, the electrodes recorded the firing patterns of neurons in the motor cortex. The researchers associated those patterns with the subtle movements of the patient’s lips, tongue, larynx and jaw that occur during natural speech. The team then translated those movements into spoken sentences.
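The two-stage idea described above (neural activity → articulator movements → sound) can be sketched in a few lines of Python. This is a toy illustration on synthetic data using plain linear least-squares fits, not the study's actual model (the paper used recurrent neural networks); all array shapes and variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 time steps of 64-channel neural features,
# 12 articulator kinematic traces (lips, tongue, larynx, jaw), and
# 32 acoustic spectral features. All made up for illustration.
T, n_neural, n_kin, n_ac = 200, 64, 12, 32
neural = rng.normal(size=(T, n_neural))
kinematics = neural @ rng.normal(size=(n_neural, n_kin))  # fake ground truth
acoustics = kinematics @ rng.normal(size=(n_kin, n_ac))

# Stage 1: map neural activity to articulator kinematics.
# Stage 2: map kinematics to acoustic features.
W1, *_ = np.linalg.lstsq(neural, kinematics, rcond=None)
W2, *_ = np.linalg.lstsq(kinematics, acoustics, rcond=None)

def decode(x):
    """Decode acoustic features from neural features via the kinematic stage."""
    return (x @ W1) @ W2

pred = decode(neural)
err = np.linalg.norm(pred - acoustics) / np.linalg.norm(acoustics)
print(err)  # near zero on this noiseless synthetic data
```

The point of the intermediate kinematic stage, per the quote, is that articulator movements are a more natural target for motor-cortex signals than raw sound is.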

Native English speakers were asked to listen to the sentences to test the fluency of the virtual voices. As much as 70 percent of what was spoken by the virtual system was intelligible, the study found.
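One simple way a "percent intelligible" figure like the one above could be computed is the fraction of reference words a listener managed to transcribe. This is a hypothetical scoring sketch, not the study's actual evaluation protocol (which used word-pool transcription tasks):

```python
def word_accuracy(reference, transcript):
    """Fraction of reference words that appear in a listener's transcript.
    A deliberately crude intelligibility proxy for illustration."""
    ref = reference.lower().split()
    heard = set(transcript.lower().split())
    hits = sum(1 for w in ref if w in heard)
    return hits / len(ref)

# Listener heard three of the four words:
print(word_accuracy("the quick brown fox", "the brown fox ran"))  # 0.75
```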

“We showed, by decoding the brain activity guiding articulation, we could simulate speech that is more accurate and natural sounding than synthesized speech based on extracting sound representations from the brain,” said Dr. Edward Chang, a professor of neurosurgery at U.C.S.F. and an author of the new study. His colleagues were Gopala K. Anumanchipalli, also of U.C.S.F., and Josh Chartier, who is affiliated with both U.C.S.F. and Berkeley.
...

https://www.nytimes.com/2019/04/24/health/artificial-speech-brain-injury.html

Study published here: https://www.nature.com/articles/s41586-019-1119-1
 
These patients often consent to additional experiments that piggyback on those implants.
I wonder if those patients get a discount for consenting. It would be nice. :)
 