Revolutionary speech loss treatment translates brain signals into written text




Patients who have lost the ability to speak due to paralysis could benefit from a new technology developed by researchers at the University of California, San Francisco (UCSF) that translates speech-related brain signals into written sentences.

Operating in real time, the technology is the first to extract the intention to say specific words from brain activity fast enough to keep pace with natural conversation.

The software can currently recognize only the set of sentences on which it was trained, but the research team believes the breakthrough could serve as a springboard to a more powerful speech prosthesis in the future.

Eddie Chang, professor of neurosurgery at UCSF and senior author of the study, said: "Currently, patients with speech loss due to paralysis are limited to spelling words out very slowly using residual eye movements or muscle twitches to control a computer interface. But in many cases, the information needed to produce fluent speech is still there in their brains. We just need the technology to allow them to express it."

The Facebook-funded research was made possible by three volunteers at the UCSF Epilepsy Center who were already undergoing neurosurgery for their condition.

The patients, all of whom retained normal speech, had a small patch of recording electrodes temporarily placed on the surface of their brains prior to surgery in order to pinpoint the origin of their seizures. Known as electrocorticography (ECoG), this technique provides much richer and more detailed data on brain activity than noninvasive technologies such as electroencephalography (EEG) or functional magnetic resonance imaging (fMRI).
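For readers curious how such recordings are typically turned into decoder inputs, the sketch below extracts the high-gamma amplitude envelope, a signal band widely used in speech decoding work. The sampling rate, channel count, and filter band are illustrative assumptions, not the study's exact parameters.

    # Minimal sketch of ECoG feature extraction: band-pass to the
    # high-gamma range and take the analytic amplitude. All parameters
    # here (sampling rate, band edges, channel count) are assumptions.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    FS = 1000                  # assumed sampling rate in Hz
    LOW, HIGH = 70.0, 150.0    # high-gamma band edges in Hz (a common choice)

    def high_gamma_features(ecog: np.ndarray) -> np.ndarray:
        """Return the z-scored high-gamma envelope for each channel.

        ecog has shape (n_channels, n_samples).
        """
        b, a = butter(4, [LOW / (FS / 2), HIGH / (FS / 2)], btype="band")
        filtered = filtfilt(b, a, ecog, axis=1)       # zero-phase band-pass
        envelope = np.abs(hilbert(filtered, axis=1))  # analytic amplitude
        # z-score per channel so electrodes are on a comparable scale
        mean = envelope.mean(axis=1, keepdims=True)
        std = envelope.std(axis=1, keepdims=True)
        return (envelope - mean) / std

    # Random data standing in for a 64-channel, 10-second recording
    features = high_gamma_features(np.random.randn(64, 10 * FS))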

The patients' brain activity and speech were recorded while they were asked a series of nine questions, which they answered from a list of 24 predetermined responses. The research team then fed the electrode data and audio recordings into a machine learning algorithm that learned to match specific speech sounds with the corresponding brain activity.
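As an illustration of this closed-set decoding idea, the sketch below trains a simple classifier to pick one of 24 predetermined answers from fixed-length neural feature vectors. It uses synthetic data and an off-the-shelf linear model; the study's actual decoder was more sophisticated, so treat this as a toy version of the same concept.

    # Toy closed-set utterance classification with synthetic data.
    # Real feature vectors would come from ECoG recordings; here random
    # vectors stand in, so accuracy will hover around chance.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    N_CHANNELS, N_ANSWERS, TRIALS_PER_ANSWER = 64, 24, 10

    rng = np.random.default_rng(0)
    X = rng.normal(size=(N_ANSWERS * TRIALS_PER_ANSWER, N_CHANNELS))
    y = np.repeat(np.arange(N_ANSWERS), TRIALS_PER_ANSWER)  # 10 trials per answer

    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5)  # stratified 5-fold evaluation
    print(f"cross-validated accuracy: {scores.mean():.2f}")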

The algorithm identified the questions the patients heard with 76% accuracy and the answers they gave with 61% accuracy.

Chang said: "Most previous approaches have focused on decoding speech alone, but here we show the value of decoding both sides of a conversation: both the questions someone hears and what they say in response.

"This reinforces our intuition that speech is not something that happens in a vacuum and that any attempt to decode what speech-language patients are trying to say will be improved by taking into account the fact The whole context in which they attempt to communicate. "

The researchers are now working to improve the software so that it can translate more varied speech signals. They are also exploring how to make the technology work for patients who have already lost the ability to speak and whose brains can no longer send speech signals to their vocal systems.
