People like the late Stephen Hawking can think about what they want to say but are unable to speak because their muscles are paralyzed. To communicate, they can use devices that track eye or cheek movements to spell out words one letter at a time. However, this process is slow and unnatural.
Scientists want to help these completely paralyzed, or "locked-in," individuals communicate more intuitively by developing a brain-machine interface to decode the commands the brain sends to the tongue, palate, lips and larynx (the articulators).
The person would simply try to say words, and the brain-machine interface (BMI) would translate them into speech.
New research from Northwestern Medicine and the Weinberg College of Arts and Sciences has moved science closer to creating speech brain-machine interfaces by unlocking new information about how the brain encodes speech.
Scientists have discovered that the brain controls speech production in a manner similar to how it controls the production of arm and hand movements. To do this, researchers recorded signals from two parts of the brain and decoded what those signals represented. They found that the brain represents both the goals of what we are trying to say (speech sounds such as "pa" and "ba") and the individual movements we use to achieve those goals (how we move our lips, palate, tongue and larynx). These different representations occur in two different parts of the brain.
"This can help us build better voice decoders for BMIs, which will bring us closer to our goal of helping to talk to people locked up again," said Dr. Marc Slutzky, Associate Professor of Neurology and Physiology at the University of Toronto. Northwestern University. Feinberg School of Medicine and a neurologist in Northwestern medicine.
The study will be published on 26 September in the Journal of Neuroscience.
The discovery could also help people with other speech disorders, such as apraxia of speech, which is seen in children and in adults after stroke. In apraxia of speech, an individual has difficulty translating speech messages from the brain into spoken language.
How words are translated from your brain into speech
Speech is composed of individual sounds, called phonemes, produced by coordinated movements of the lips, tongue, palate and larynx. However, scientists did not know exactly how the brain plans these movements, called articulatory gestures. In particular, it was not fully understood how the cerebral cortex controls speech production, and no representation of gestures in the cortex had been demonstrated.
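To make the phoneme/gesture distinction concrete, here is a purely illustrative sketch, not taken from the study: every phoneme-to-gesture mapping below is a simplified assumption chosen only to show how a speech-sound goal can be described as a bundle of articulator movements.

```python
# Illustrative only: a hypothetical, simplified mapping from phonemes
# (speech-sound goals) to the articulatory gestures that realize them.
PHONEME_TO_GESTURES = {
    "p": ["lip closure", "velum raised", "larynx voiceless"],
    "b": ["lip closure", "velum raised", "larynx voiced"],
    "a": ["jaw open", "tongue low", "larynx voiced"],
}

def gestures_for_syllable(syllable: str) -> list[str]:
    """Concatenate the gesture bundles for each phoneme in a syllable."""
    gestures = []
    for phoneme in syllable:
        gestures.extend(PHONEME_TO_GESTURES.get(phoneme, []))
    return gestures

print(gestures_for_syllable("pa"))  # gestures for the syllable "pa"
print(gestures_for_syllable("ba"))  # differs from "pa" only in larynx voicing
```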
"We hypothesized that the motor regions of the brain would have an organization similar to that of the brain motor regions," Slutzky said. "The pre-central cortex would represent movements (gestures) of the lips, tongue, palate and larynx, and the higher cortical areas would represent more the phonemes."
That's exactly what they found.
"We studied two parts of the brain that help produce speech," said Slutzky. "The pre-central cortex represented more important gestures than the phonemes.The lower frontal cortex, which is a higher-level speech zone, represented both phonemes and gestures."
Talking with brain surgery patients to decode their brain signals
Northwestern scientists recorded brain signals from the cortical surface using electrodes placed on patients undergoing brain surgery to remove brain tumors. The patients had to be awake during their surgery, so the researchers asked them to read words from a screen.
After the surgery, scientists marked the times at which patients produced phonemes and gestures. Then they used the brain signals recorded from each cortical area to decode which phonemes and gestures had been produced, and measured the decoding accuracy. The brain signals in the precentral cortex were more accurate at decoding gestures than phonemes, while those in the inferior frontal cortex were equally good at decoding both phonemes and gestures. This finding helped support linguistic models of speech production. It will also help guide engineers in designing brain-machine interfaces to decode speech from these brain areas.
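For readers curious about what "decoding" means in practice, below is a minimal sketch of a cross-validated classification analysis of the kind described above. It is not the authors' actual pipeline: the synthetic data, the feature count and the choice of classifier (logistic regression from scikit-learn) are all stand-ins assumed for illustration.

```python
# Minimal sketch of a decoding analysis: given neural features recorded
# around each speech event, predict which phoneme (or gesture) occurred,
# and report cross-validated accuracy. Synthetic data stands in for the
# real cortical recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_events = 200    # number of labeled speech events (assumed)
n_features = 64   # e.g., signal power on 64 electrodes (assumed)

X = rng.normal(size=(n_events, n_features))  # neural features per event
y = rng.integers(0, 4, size=n_events)        # 4 hypothetical phoneme classes

decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, X, y, cv=5)  # 5-fold cross-validation

print(f"Mean decoding accuracy: {scores.mean():.2f}")
# With random features this hovers near chance (0.25 for 4 classes);
# real cortical signals are judged against that chance level.
```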
The next step in the research is to develop an algorithm for brain-machine interfaces that not only decodes gestures but also combines those decoded gestures to form words.
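One hypothetical way to go from decoded gestures to words, offered only as an assumption-laden sketch and not as the algorithm under development, is to compare the decoded gesture sequence against a small vocabulary of expected gesture sequences and pick the closest match:

```python
from difflib import SequenceMatcher

# Hypothetical vocabulary: each word mapped to its expected gesture sequence.
VOCAB = {
    "pa": ["lip closure", "jaw open"],
    "ba": ["lip closure", "larynx voiced", "jaw open"],
    "ma": ["lip closure", "velum lowered", "jaw open"],
}

def best_word(decoded_gestures: list[str]) -> str:
    """Return the vocabulary word whose gesture sequence best matches the
    decoded sequence, scored by difflib's similarity ratio."""
    return max(
        VOCAB,
        key=lambda word: SequenceMatcher(None, VOCAB[word], decoded_gestures).ratio(),
    )

print(best_word(["lip closure", "velum lowered", "jaw open"]))  # -> "ma"
```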
It was an interdisciplinary, cross-campus collaboration. The authors included a neurosurgeon, a neurologist, a computer scientist, a linguist and biomedical engineers. In addition to Slutzky, the Northwestern authors are Emily M. Mugler, Matthew C. Tate (neurological surgery), Jessica W. Templer (neurology) and Matthew A. Goldrick (linguistics).
The article is titled "Differential Representation of Articulatory Gestures and Phonemes in Precentral and Inferior Frontal Gyri."
This work was funded in part by the Doris Duke Charitable Foundation and the Dixon Translational Research Grants Initiative of the Northwestern Memorial Foundation (including partial funding from the National Center for Advancing Translational Sciences, UL1TR000150 and UL1TR001422).