Paralyzed Man’s Brain Waves Turned Into Computer Sentences in Medical First | Science




In a medical first, researchers tapped the brain waves of a paralyzed man unable to speak and turned what he intended to say into sentences on a computer screen.

It will take years of additional research, but the study, published Wednesday, marks an important step towards restoring more natural communication for people who cannot speak due to injury or illness.

“Most of us take for granted the ease with which we communicate through speech,” said Dr. Edward Chang, a neurosurgeon at the University of California, San Francisco, who led the work. “It’s exciting to think that we are at the very beginning of a new chapter, a new area” of research to ease the devastation for patients who have lost this ability.

Today, people who cannot speak or write due to paralysis have very limited means of communication. For example, the man in the experiment, who has not been identified to protect his privacy, uses a pointer attached to a baseball cap that allows him to move his head to touch words or letters on a screen. Other devices can pick up patients’ eye movements. But these are frustrating and limited substitutes for speech.

In recent years, experiments with mind-controlled prostheses have allowed paralyzed people to shake hands or take a drink using a robotic arm: they imagine themselves moving, and those brain signals are relayed through a computer to the artificial limb.

Chang’s team built on this work to develop a “speech neuroprosthesis” — a device that decodes the brain waves that normally control the vocal tract, the tiny muscle movements of the lips, jaw, tongue, and larynx that form each consonant and vowel.

The man who volunteered to test the device was in his 30s. Fifteen years ago, he suffered a stroke that caused widespread paralysis and robbed him of speech. The researchers implanted electrodes on the surface of his brain, over the area that controls speech.

A computer analyzed the patterns when he attempted to say common words such as “water” or “good,” eventually learning to differentiate between 50 words that could generate more than 1,000 sentences.

Prompted with questions like “How are you today?” or “Are you thirsty?”, the device enabled the man to answer “I am very good” or “No, I am not thirsty” — not by speaking the words aloud, but by translating them into text on the screen, the team reported in the New England Journal of Medicine.

It takes about three to four seconds for a word to appear on the screen after the man tries to say it, said lead author David Moses, an engineer in Chang’s lab. That is not as fast as speaking, but quicker than tapping out a response.

In an accompanying editorial, Harvard neurologists Leigh Hochberg and Sydney Cash called the work a “pioneering demonstration.”

They suggested improvements but said that if the technology pans out, it could eventually help people with injuries, strokes, or illnesses like Lou Gehrig’s disease, in which “the brain prepares messages for delivery but those messages are trapped.”

Chang’s lab has spent years mapping the brain activity that leads to speech. First, the researchers temporarily placed electrodes in the brains of volunteers undergoing epilepsy surgery, so they could match brain activity to spoken words.

Only then was it time to experiment with someone unable to speak. How did they know the device was interpreting the volunteer’s words correctly? They started by having him try to say specific sentences such as “Bring my glasses” rather than answering open-ended questions, until the machine translated them accurately most of the time.

The next steps are to improve the speed, accuracy, and vocabulary size of the device, and perhaps one day allow users to communicate with a computer-generated voice rather than text on a screen.
