Researchers at UC San Francisco have successfully developed a “speech neuroprosthesis” that enabled a severely paralyzed man to communicate in sentences, translating signals from his brain to his vocal tract directly into words that appear as text on a screen.
This achievement builds on more than a decade of efforts by UCSF neurosurgeon Edward Chang to develop technology that enables people with paralysis to communicate even though they are unable to speak on their own.
“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of a person who is paralyzed and unable to speak,” said Chang, senior author of the study. “This has great promise for restoring communication by harnessing the brain’s natural speech machinery.”
Every year, thousands of people lose the ability to speak due to stroke, accident or illness. With further development, the approach described in this study may one day enable these people to communicate fully.
Translating brain signals into speech
Previously, work in the field of communication neuroprosthetics focused on restoring communication through spelling-based approaches, typing out letters one by one to form text.
Chang’s study differs from these efforts in a critical way: his team translates signals intended to control the muscles of the vocal system to pronounce words, rather than signals to move the arm or hand to enable typing.
Chang said this approach harnesses the natural and fluid aspects of speech and promises faster, more organic communication.
READ: Breakthrough for Spinal Cord Injury and Dementia as Proteins Build ‘Striking’ Repairs
“With speech, we normally communicate information at a very high rate, up to 150 or 200 words per minute,” he said, noting that spelling-based approaches using typing, handwriting, and controlling a cursor are considerably slower and more laborious. “Going straight to words, like we’re doing here, has great advantages because it’s closer to how we normally speak.”
Over the past decade, Chang’s progress toward this goal was facilitated by patients at the UCSF Epilepsy Center who underwent neurosurgery to pinpoint the origins of their seizures using electrode arrays placed on the surface of their brains.
These patients, all of whom had normal speech, volunteered to have their brain recordings analyzed for speech-related activity. The first successes with these volunteer patients paved the way for the current trial in people with paralysis.
Previously, Chang and his colleagues at the UCSF Weill Institute for Neurosciences mapped the patterns of cortical activity associated with the movements of the vocal tract that produce each consonant and vowel.
To translate these findings into full-word speech recognition, David Moses, PhD, a postdoctoral engineer in Chang’s lab, developed new methods for real-time decoding of those patterns, along with statistical language models to improve accuracy.
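To make “real-time decoding” concrete, here is a minimal sketch of the general idea, assuming a trained classifier that maps a window of neural features to probabilities over a fixed vocabulary; every name, shape, and threshold below is an illustrative assumption, not code from the study.

```python
# Minimal sketch of streaming word decoding (illustrative assumptions only).
import numpy as np

WINDOW = 200                          # assumed samples per decoding window
VOCAB = ["water", "family", "good"]   # stand-in for the 50-word vocabulary

def decode_stream(feature_stream, word_classifier, threshold=0.5):
    """Yield decoded words as neural feature samples arrive."""
    buffer = []
    for sample in feature_stream:             # sample: (n_electrodes,) array
        buffer.append(sample)
        if len(buffer) < WINDOW:
            continue
        window = np.stack(buffer[-WINDOW:])   # (WINDOW, n_electrodes)
        probs = word_classifier(window)       # assumed: (len(VOCAB),) probabilities
        if probs.max() > threshold:           # emit only confident predictions
            yield VOCAB[int(probs.argmax())]
            buffer.clear()                    # reset after emitting a word
```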
But their success in decoding speech in participants who were able to speak did not guarantee that the technology would work in someone whose vocal tract is paralyzed. “Our models needed to learn the mapping between complex patterns of brain activity and intended speech,” said Moses. “That poses a major challenge when the participant cannot speak.”
Additionally, the team was unsure whether the brain signals controlling the vocal tract would still be intact in people who have not been able to move their vocal muscles for many years. “The best way to find out if it might work was to try it out,” said Moses.
The first 50 words
To investigate the potential of this technology in patients with paralysis, Chang teamed up with colleague Karunesh Ganguly, an associate professor of neurology, to launch a study known as “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice).
The first participant in the trial is a man in his thirties who suffered a devastating stroke more than 15 years ago that severely damaged the connection between his brain and his vocal tract and limbs.
Since his injury, he has had extremely limited movement of his head, neck, and limbs, and communicates by using a pointer attached to a baseball cap to poke letters on a screen.
CHECK OUT: Yale Scientists Have Successfully Repaired Spinal Cord Injury Using Patients’ Own Stem Cells
The participant, who asked to be called BRAVO1, worked with the researchers to create a 50-word vocabulary that Chang’s team could recognize from brain activity using advanced computer algorithms. The vocabulary, which includes words such as “water,” “family,” and “good,” was sufficient to create hundreds of sentences expressing concepts applicable to BRAVO1’s daily life.
For the study, Chang surgically implanted a high-density electrode array over BRAVO1’s speech motor cortex. After the participant’s full recovery, his team recorded 22 hours of neural activity in this brain region over 48 sessions spanning several months. In each session, BRAVO1 attempted to say each of the 50 vocabulary words many times while the electrodes recorded brain signals from his speech motor cortex.
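As a rough illustration of how such sessions could be turned into training data (the structure, shapes, and names below are assumptions, not details from the paper), each attempt becomes one labeled example pairing a window of neural activity with the word BRAVO1 tried to say:

```python
# Illustrative sketch: converting recorded word attempts into labeled
# training examples. Session structure and shapes are assumptions.
import numpy as np

VOCAB = ["water", "family", "good"]   # stand-in for the full 50-word list

def build_dataset(sessions):
    """sessions: iterable of trial lists, each trial a (window, word) pair."""
    features, labels = [], []
    for trials in sessions:
        for neural_window, word in trials:    # window: (time, channels) array
            features.append(neural_window)
            labels.append(VOCAB.index(word))  # map each word to a class index
    return np.stack(features), np.array(labels)
```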
Translating attempted speech into text
To translate the recorded patterns of neural activity into specific words, the study’s other two lead authors used custom neural network models, a form of artificial intelligence. When the participant attempted to speak, these networks distinguished subtle patterns of brain activity to detect speech attempts and identify the words he was trying to say.
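The study’s actual models are not reproduced here, but as a loose analogy, a small recurrent network with one output for detecting a speech attempt and another for classifying which of the 50 words was attempted captures the general shape of the problem; the architecture and sizes below are assumptions for illustration.

```python
# Toy stand-in for a network that detects speech attempts and classifies
# attempted words from brain activity; all layers and sizes are assumed.
import torch
import torch.nn as nn

class WordDecoder(nn.Module):
    def __init__(self, n_channels=128, hidden=256, n_words=50):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True,
                          bidirectional=True)
        self.detect = nn.Linear(2 * hidden, 1)          # attempt happening?
        self.classify = nn.Linear(2 * hidden, n_words)  # which word?

    def forward(self, x):                    # x: (batch, time, channels)
        _, h = self.rnn(x)                   # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)  # concatenate both directions
        return torch.sigmoid(self.detect(h)), self.classify(h)
```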
RELATED: Northwestern Scientists Repair and Reverse Damage to ALS Neurons in Lab Using New, Non-Toxic Compound
To test their approach, the team first presented BRAVO1 with short sentences constructed from the 50 vocabulary words and asked him to try saying them several times. As he made his attempts, the words were decoded from his brain activity, one by one, on a screen.
Then the team moved on to asking him questions such as “How are you today?” and “Do you want some water?” As before, BRAVO1’s attempted speech appeared on the screen: “I’m doing very well,” and “No, I’m not thirsty.”
The team found that the system was able to decode words from brain activity at a rate of up to 18 words per minute with up to 93 percent accuracy (75 percent median).
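To make those numbers concrete, here is a back-of-the-envelope way to score a decoded sentence against its prompt; the position-wise accuracy below is a simplified stand-in for the word-error-rate-style scoring used in speech decoding research, not the study’s evaluation code.

```python
# Simplified scoring sketch (illustrative only).
def word_accuracy(decoded, target):
    """Fraction of word slots decoded correctly, position by position."""
    hits = sum(d == t for d, t in zip(decoded, target))
    return hits / len(target)

def words_per_minute(n_words, seconds):
    return 60.0 * n_words / seconds

decoded = "no i am not thirsty".split()
target = "no i am not thirsty".split()
print(word_accuracy(decoded, target))      # 1.0
print(words_per_minute(len(decoded), 20))  # 15.0
```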
Contributing to this success was a language model Moses applied that implemented an “auto-correct” function, similar to those used by consumer texting and speech recognition software.
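As a generic illustration of how a language model can act as an auto-correct (this is not Moses’s implementation), per-slot word probabilities from a classifier can be combined with word-to-word transition probabilities, and the best-scoring sentence chosen with a Viterbi-style search; all probabilities below are made up.

```python
# Generic language-model "auto-correct" sketch (not the study's code).
import math

def rescore(slot_probs, bigram):
    """slot_probs: list of {word: P(word | neural activity)} per word slot.
    bigram: {(prev_word, word): P(word | prev_word)}."""
    best = {w: (math.log(p), [w]) for w, p in slot_probs[0].items()}
    for probs in slot_probs[1:]:
        nxt = {}
        for w, p in probs.items():
            score, path = max(
                (s + math.log(bigram.get((prev, w), 1e-6)) + math.log(p), path)
                for prev, (s, path) in best.items())
            nxt[w] = (score, path + [w])
        best = nxt
    return max(best.values())[1]

# The classifier slightly prefers "and", but the bigram model corrects it.
slots = [{"i": 0.9, "am": 0.1}, {"and": 0.6, "am": 0.4}]
grams = {("i", "am"): 0.5, ("i", "and"): 0.02,
         ("am", "am"): 0.001, ("am", "and"): 0.1}
print(rescore(slots, grams))   # ['i', 'am']
```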
Moses characterized the early trial results, which appear in the New England Journal of Medicine, as a proof of principle. “We were thrilled to see the accurate decoding of a variety of meaningful sentences,” he said. “We’ve shown that it is actually possible to facilitate communication in this way, and that it has potential for use in conversational settings.”
Looking ahead, Chang and Moses said they would expand the trial to include more participants with severe paralysis and communication deficits. The team is currently working to increase the number of words in the available vocabulary, as well as to improve the rate of speech.
Both said that although the study focused on a single participant and a limited vocabulary, these limitations do not diminish the achievement. “This is an important technological milestone for a person who cannot communicate naturally,” said Moses, “and it demonstrates the potential for this approach to give a voice to people with severe paralysis and loss of speech.”
(WATCH the video on this research below.)
Source: University of California, San Francisco