The researchers say that technology harnessing brain activity to produce synthesized speech could benefit people who have lost the ability to speak due to a stroke or other medical conditions.
Known as a "brain decoder," the technology is designed to read neural activity and turn intended speech into audible words, a tool that could one day help patients who cannot speak communicate with physicians and others.
Scientists at the University of California, San Francisco (UCSF) implanted electrodes into the brains of volunteers and decoded signals from the brain's speech centers to guide a computer-simulated version of each person's vocal tract (lips, jaw, tongue and larynx), which in turn generated speech through a synthesizer.
The resulting speech was mostly intelligible, although the researchers noted it sounded somewhat garbled.
"We were shocked when we heard the results for the first time. We could not believe our ears," said Josh Chartier, a PhD student at UCSF. "It was incredibly exciting that so many aspects of real speech were present in the synthesizer's output."
The results have raised the researchers' hopes that, with further improvements, a clinically viable device for patients suffering from speech loss could be developed in the coming years.
"Clearly, there is still a lot of work to be done to make this more natural and intelligible," said Chartier, "but we were very impressed by how much can be decoded from brain activity."
Strokes, diseases such as cerebral palsy, amyotrophic lateral sclerosis (ALS), Parkinson's disease and multiple sclerosis, brain injuries and cancer can all deprive a person of speech.
Such conditions leave some people relying on devices that track eye movements or residual facial muscle movements to spell words letter by letter. These methods are slow, however, generally producing no more than 10 words per minute, compared with 100 to 150 words per minute in natural speech.
The five volunteers who participated in the study were all patients with epilepsy. Although they were all able to speak, they had the opportunity to participate because they had already planned to have electrodes temporarily implanted in their brains to map the source of their seizures before neurosurgery. Future studies will test the technology on people unable to speak.
The volunteers read sentences aloud while the researchers monitored activity in areas of the brain involved in language production. From this activity the researchers inferred the vocal tract movements needed to produce the speech and created a "virtual vocal tract" for each participant, which could be controlled by that participant's brain activity to produce synthesized speech.
"Very few of us really have an idea of what's going on in our mouths when we talk," said neurosurgeon Edward Chang. "The brain translates those thoughts of what you mean into vocal tract movements, and that's what we're trying to decode."
The researchers found that they were more successful at synthesizing sustained speech sounds such as "sh" and less successful with abrupt sounds such as "b" and "p".
In addition, the technology did not work as well when the researchers tried to decode brain activity directly into speech, without using the virtual vocal tract as an intermediate step.
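The contrast between the two approaches can be illustrated with a toy sketch. Everything below is hypothetical (synthetic data, linear maps, made-up dimensions); the study itself used recurrent neural networks, not simple regressions. The point is only the pipeline structure: a two-stage decoder maps neural activity to articulator movements and then movements to acoustics, while a direct decoder maps neural activity straight to acoustics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: 200 time steps of 64-channel neural
# recordings, 12 articulator kinematic features, 32 acoustic features.
n_steps = 200
neural = rng.normal(size=(n_steps, 64))
articulators = neural @ rng.normal(size=(64, 12))    # latent vocal tract movements
acoustics = articulators @ rng.normal(size=(12, 32)) # speech spectral features

split = 150  # train on the first 150 steps, evaluate on the rest

def fit_linear(X, Y):
    """Least-squares linear map W such that X @ W approximates Y."""
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

# Two-stage decoder: neural -> articulator movements -> acoustics.
W_neural_to_art = fit_linear(neural[:split], articulators[:split])
W_art_to_acoustic = fit_linear(articulators[:split], acoustics[:split])
pred_two_stage = neural[split:] @ W_neural_to_art @ W_art_to_acoustic

# Direct decoder: neural -> acoustics in a single step.
W_direct = fit_linear(neural[:split], acoustics[:split])
pred_direct = neural[split:] @ W_direct

err_two_stage = np.mean((pred_two_stage - acoustics[split:]) ** 2)
err_direct = np.mean((pred_direct - acoustics[split:]) ** 2)
print(f"two-stage MSE: {err_two_stage:.6f}, direct MSE: {err_direct:.6f}")
```

On this noiseless linear toy, both decoders fit almost perfectly; the study's finding was that on real neural data, routing the decoding through the intermediate vocal-tract representation worked better than the direct route.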
"We are still working to make the synthesized speech clearer and less garbled," said Chartier. "This is partly a consequence of the algorithms we are using, and we believe we can achieve better results as we improve the technology."
"We hope that these findings will give hope to people whose conditions prevent them from expressing themselves: that one day we will be able to restore the ability to communicate, a fundamental part of our human identity," he added.
The study was published in the journal Nature.
In November 2018, a neurotechnology platform that uses artificial intelligence to translate brainwaves into control signals was named the first winner of a new innovation-of-the-year award supported by E&T at the 2018 IET Innovation Awards, held in central London.