A neural implant can translate brain activity into sentences




To communicate, people who are unable to speak often rely on small eye movements to spell out words, a laboriously slow process. Now, using signals captured by a brain implant, scientists have extracted entire sentences from the brain.

Some of these reconstructed words, spoken aloud by a virtual vocal tract, are a bit garbled. But overall the sentences are understandable, researchers at the University of California, San Francisco report April 25 in Nature.

Creating the audible synthetic sentences took years of analysis after the brain signals were recorded, and the technique is not ready for use outside a laboratory. Still, the work shows "that by simply using the brain, it is possible to decode speech," says Gopala Anumanchipalli, a speech scientist at UCSF.

The technology described in the new study holds promise for restoring people's ability to speak fluently, says Frank Guenther, a speech neuroscientist at Boston University. "It's hard to overstate the importance of this for these people…. It's incredibly isolating and almost nightmarish not to be able to communicate needs or connect socially."

Existing speech aids that rely on spelling out words are tedious, often producing about 10 words per minute. Previous studies have used brain signals to decode smaller pieces of speech, such as vowels or words, but with more limited vocabularies than the current work.

In collaboration with neurosurgeon Edward Chang and bioengineer Josh Chartier, Anumanchipalli studied five people who had grids of electrodes temporarily implanted in their brains as part of epilepsy treatment. Because these people could talk, the researchers could record brain activity as the participants spoke sentences. The team then mapped the brain signals that control the lips, tongue, jaw and larynx onto the actual movements of the vocal tract as the participants spoke. That allowed the scientists to create a unique virtual vocal tract for each person.

Speech decoding

Scientists transformed brain signals, captured by this grid of electrodes designed to record brain activity, into synthesized sentences. The technique may one day help people who cannot speak to communicate.

Chang Lab/UCSF Department of Neurosurgery

The researchers then translated the movements of the participants' virtual vocal tracts into sounds. Using this virtual tool "optimized the speech and made it sound more natural," Chartier says. About 70 percent of these reconstructed words were understandable to listeners, who were asked to pick the words they heard from a list of possibilities. For example, when the synthesized voice said, "Ask a calico cat to keep the rodents at bay," a listener heard, "The calico cat intended to keep the rabbits away." Some speech sounds came through clearly; others, like "buh" and "puh," sounded mushier.
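The study's actual models aren't reproduced here, but the two-stage pipeline the article describes, brain signals decoded into vocal tract movements and those movements then rendered as sound, can be sketched in code. Below is a minimal, hypothetical Python/PyTorch illustration; the recurrent architecture, layer sizes and feature dimensions (256 electrode channels, 33 articulatory features, 32 acoustic features) are assumptions for illustration, not the study's actual implementation.

```python
# Hypothetical sketch of the two-stage decoder described in the article.
# All dimensions and architectural choices are illustrative assumptions.
import torch
import torch.nn as nn

class BrainToArticulation(nn.Module):
    """Stage 1: map recorded neural activity to vocal tract movements."""
    def __init__(self, n_electrodes=256, n_articulators=33):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, 128, batch_first=True,
                           bidirectional=True)
        self.out = nn.Linear(2 * 128, n_articulators)

    def forward(self, neural):            # (batch, time, electrodes)
        h, _ = self.rnn(neural)
        return self.out(h)                # (batch, time, articulators)

class ArticulationToSound(nn.Module):
    """Stage 2: map vocal tract movements to acoustic features."""
    def __init__(self, n_articulators=33, n_acoustic=32):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, 128, batch_first=True,
                           bidirectional=True)
        self.out = nn.Linear(2 * 128, n_acoustic)

    def forward(self, movements):
        h, _ = self.rnn(movements)
        return self.out(h)                # acoustic features for a vocoder

# Decoding chains the two stages; a vocoder (not shown) would turn the
# acoustic features into an audible waveform.
stage1, stage2 = BrainToArticulation(), ArticulationToSound()
neural = torch.randn(1, 500, 256)         # placeholder recording, 500 steps
acoustic_features = stage2(stage1(neural))
```

Splitting the decoding into two stages mirrors the article's account: the intermediate articulatory representation is what makes the second stage potentially shareable across people.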

For the technique to work, the researchers needed to know how a person moves his or her vocal tract. But those movements are missing in many people who cannot speak, such as those who have had a stroke, suffered vocal tract damage or have Lou Gehrig's disease.

"By far the biggest hurdle is how to build a decoder while you have no examples of speech." Says Marc Slutzky, neurologist and neural engineer at the Feinberg School of Northwestern University Medicine in Chicago.

In some tests, the researchers found that the algorithms used in the second stage of the process – translating the movements of the virtual vocal tract into sounds – were similar enough from one person to the next to be reused across people, perhaps even by those who cannot talk.
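In code terms, that reusability might look like keeping one person's trained second-stage model fixed and fitting only a new person's first-stage mapping. This is again a hypothetical sketch building on the classes above; note that the training targets shown here would only exist for someone who can still speak, which is precisely the limitation described in the next paragraph.

```python
# Hypothetical: reuse a stage-2 model trained on one speaker for a new
# speaker, training only the new speaker's stage-1 mapping.
stage2_shared = ArticulationToSound()     # in practice: trained weights
for p in stage2_shared.parameters():
    p.requires_grad = False               # keep the shared stage fixed

stage1_new = BrainToArticulation()        # fit per person
opt = torch.optim.Adam(stage1_new.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder data standing in for the new person's neural recordings
# and the acoustic targets derived from their speech.
neural = torch.randn(8, 500, 256)
target_acoustics = torch.randn(8, 500, 32)

opt.zero_grad()
loss = loss_fn(stage2_shared(stage1_new(neural)), target_acoustics)
loss.backward()                           # gradients reach only stage 1
opt.step()
```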

But so far, the first stage of the process – mapping brain activity onto the movements of a person's vocal tract – appears to be more idiosyncratic. Finding a way to link these personalized brain signals to the desired vocal tract movements of people who cannot move will be a challenge, the scientists say.
