Neuroscientists translate brainwaves from heard speech into intelligible words

Using brain-analysis technology, artificial intelligence, and speech synthesizers, scientists have converted brain patterns into intelligible verbal speech. However, instead of capturing an individual's inner thoughts to reconstruct speech, the new research, published this week in Scientific Reports, uses the brain patterns produced by listening to spoken words.

To build such a neuroprosthetic, neuroscientist Nima Mesgarani and his colleagues combined recent advances in deep learning with speech synthesis technologies. The resulting brain-computer interface, while still rudimentary, captured brain patterns directly from the auditory cortex, which were then decoded by an AI-powered vocoder, or speech synthesizer.

This made it possible to produce intelligible speech. The output sounded very robotic, but roughly three out of four listeners were able to discern its content. It is an encouraging step toward technology that could help people who have lost the ability to speak.

To be clear, Mesgarani's neuroprosthetic device does not translate the speech inside an individual's head (that is, their thoughts, also known as imagined speech) directly into words. Unfortunately, the science is not there yet.

Instead, the system captured an individual's distinct cognitive responses while they listened to recordings of people speaking. A deep neural network then decoded, or translated, these patterns, allowing the system to reconstruct speech.
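
The article does not spell out the study's exact architecture, but the basic idea, a deep network regressing from windows of recorded neural activity to speech-synthesizer parameters, can be sketched roughly as below. This is a minimal illustration only; the layer sizes, names (`n_electrodes`, `n_vocoder_params`), and random stand-in data are hypothetical and do not come from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: electrode count and window length in,
# vocoder parameters out. None of these values are from the study.
n_electrodes, window, n_vocoder_params = 128, 10, 32

# A minimal feed-forward regressor: one window of neural features
# maps to one frame of vocoder parameters.
decoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(n_electrodes * window, 256),
    nn.ReLU(),
    nn.Linear(256, n_vocoder_params),
)

optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in training pairs: (neural window, target vocoder frame).
neural = torch.randn(64, n_electrodes, window)   # fake ECoG features
target = torch.randn(64, n_vocoder_params)       # fake vocoder parameters

for _ in range(100):  # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(decoder(neural), target)
    loss.backward()
    optimizer.step()

# At inference time, the predicted vocoder frames would drive a speech
# synthesizer to produce an audible waveform.
```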

"This study follows a recent trend in applying in-depth learning techniques to decode neural signals" Andrew Jackson told Gizmodo. Jackson is a professor of neural interfaces at Newcastle University and did not participate in this study. "In this case, the neural signals are recorded on the surface of the human brain during epilepsy surgery.The participants listen to different words and expressions read by the actors.Neurone networks are trained to learn the relationship between brain signals and sounds, and as a result, they can reconstruct intelligible reproductions of words and expressions based solely on cerebral signals. "

Patients with epilepsy were chosen for the study because they often have to undergo brain surgery. With the help of Ashesh Dinesh Mehta, a neurosurgeon at the Northwell Health Physician Partners Neuroscience Institute and co-author of the new study, Mesgarani recruited five volunteers for the experiment. The team used invasive electrocorticography (ECoG) to measure neural activity as the patients listened to continuous speech sounds; the patients listened, for example, to speakers reciting digits from zero to nine. Their brain patterns were then fed into the AI-equipped vocoder, which produced synthesized speech.

The results sounded very robotic, but were fairly intelligible. In tests, listeners were able to correctly identify the spoken digits about 75 percent of the time. They could even tell whether the speaker was male or female. Not bad, and a result that "was a surprise" to Mesgarani, as he told Gizmodo in an email.
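
For context, that 75 percent figure is a plain identification rate: listeners heard reconstructed digits and reported which digit they thought was spoken. A toy scoring of such a test, with entirely made-up responses, would look like this:

```python
# Hypothetical listening-test responses: the digit actually played versus
# the digit each listener reported hearing (illustrative, not study data).
played   = [3, 7, 1, 0, 9, 4, 2, 8, 5, 6, 3, 7]
reported = [3, 7, 1, 0, 9, 4, 2, 8, 1, 6, 5, 7]

correct = sum(p == r for p, r in zip(played, reported))
accuracy = correct / len(played)
print(f"identification accuracy: {accuracy:.0%}")  # 10/12 correct -> 83%
```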

You can listen to the speech synthesizer recordings here; the researchers tested various techniques, but the best results came from combining deep neural networks with the vocoder.

The use of a speech synthesizer in this context, as opposed to a system that can merely match and recite pre-recorded words, was important to Mesgarani. As he explained to Gizmodo, speech involves more than just putting the right words together.

"The purpose of this article is to restore communication with those who have lost the ability to speak, we try to learn to directly map the signal of the brain to speech, its own language," he told Gizmodo. "You can also decode phonemes [unidades distintas de som] or words. However, the speech contains much more information than the content – like the speaker [com sua voz e estilo distintos] the intonation, the emotional tone, etc. "

In the future, Mesgarani would like to synthesize more complex words and sentences, and to collect brain signals from people who are thinking or imagining the act of speaking.

Jackson was impressed by the new study, but said it is not yet clear whether this approach will apply directly to brain-computer interfaces.

"In the article, the decoded signals reflect real words heard by the brain. To be useful, a communication device should decode the words imagined by the user, "Jackson told Gizmodo." Although there is often overlap between the areas of the brain involved in it. hearing, the actual word and the imaginary word, we still do not know how similar the cerebral signals will be. "

William Tatum, a neurologist at the Mayo Clinic who was not involved in the new study, said the research is important because it is the first to use artificial intelligence to reconstruct speech from the brainwaves generated in response to known acoustic stimuli. Its significance is notable "because it advances the application of deep learning in the next generation of speech-producing systems," he told Gizmodo. That said, he felt the number of participants was too small, and that the use of data extracted directly from the human brain during surgery was not ideal.

Another limitation of the study is that, for the neural networks to do more than reproduce the words zero through nine, they would have to be trained on a large number of brain signals from each participant. The system is also patient-specific, since we all produce different brain patterns when we listen to speech.
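
As a rough sketch of what that patient-specific constraint means in practice, one decoder would be trained per participant rather than one shared model. Everything below (subject names, shapes, random data) is hypothetical, not the study's actual pipeline:

```python
import torch
import torch.nn as nn

def make_decoder(n_inputs: int, n_outputs: int) -> nn.Module:
    """Build a fresh decoder; every participant gets their own copy."""
    return nn.Sequential(nn.Linear(n_inputs, 256), nn.ReLU(),
                         nn.Linear(256, n_outputs))

# Hypothetical per-participant recordings: {subject: (neural features, targets)}.
datasets = {
    "subject_1": (torch.randn(500, 128), torch.randn(500, 32)),
    "subject_2": (torch.randn(500, 128), torch.randn(500, 32)),
}

decoders = {}
for subject, (neural, target) in datasets.items():
    model = make_decoder(neural.shape[1], target.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(50):  # toy epoch count
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(neural), target)
        loss.backward()
        opt.step()
    decoders[subject] = model  # this decoder only fits this subject's brain
```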

"It will be interesting to see how, in the future, decoders trained for one person would be generalized to other people," Jackson said. "It's a little early speech recognition systems, which had to be individually trained by the user, unlike current technologies such as Siri and Alexa, which can understand anyone's voice, also using neural networks. . Only time will tell if these technologies will one day be able to do the same thing for cerebral signals. "

There is undeniably a lot of work still to be done, but the new paper is an encouraging step toward neuroprosthetic speech implants.

[Scientific Reports]
