In the first experiment of its kind, scientists were able to translate brain signals directly into intelligible words. This may sound like wild science fiction at first, but the feat might actually help some people dealing with speech problems.
And yes, we could also get futuristic computer interfaces.
The key to the system is an artificial intelligence algorithm that matches the sounds a subject hears with patterns of electrical activity in the brain, then turns those patterns into words that make sense to a listener.
Previous research has shown that when we talk (or even imagine talking), distinct patterns of activity appear in the brain's neural networks. Here the system decodes the brain's responses to heard speech rather than actual thoughts, but with enough development it has the potential to do that too.
"Our voices allow us to keep in touch with our friends, our family and the world around us.This is why losing the power of speech as a result of an injury or d?" an illness is so devastating, "said one of the team members, Nima Mesgarani of Columbia University in New York. .
"With today's study, we have a potential way of restoring this power.We have shown that with the right technology, the thoughts of these people could be decoded and understood by any listener. "
The algorithm used is called a vocoder: the same type of algorithm that can synthesize speech after being trained on human conversation. Whenever you get a response from Siri or Amazon Alexa, a vocoder is being deployed.
In other words, Amazon and Apple do not have to program every word into their devices; they use the vocoder to create a realistic voice from the text to be spoken.
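For readers who want a feel for what a vocoder does, here is a minimal sketch in Python of the classic source-filter idea behind speech synthesis: a playable waveform is built from a handful of numeric parameters (pitch, formant frequencies) rather than from any stored recording. This is a deliberately simple stand-in, not the production vocoder behind Siri or Alexa, and all the parameter values are invented for illustration.

```python
import numpy as np
from scipy.signal import lfilter

# Toy source-filter "vocoder": synthesize a vowel-like sound from a
# handful of parameters instead of any stored recording. The numbers
# below (pitch, formants) are invented for illustration.
sr = 16000                      # sample rate in Hz
f0 = 120                        # pitch of the voice source
formants = [700, 1200, 2600]    # rough formant frequencies of an /a/ vowel

# Source: a glottal-like impulse train at the pitch frequency.
n = sr // 2                     # half a second of audio
source = np.zeros(n)
source[:: sr // f0] = 1.0

# Filter: one resonator per formant, applied in cascade.
signal = source
for f in formants:
    bw = 80.0                                   # formant bandwidth in Hz
    r = np.exp(-np.pi * bw / sr)                # pole radius
    theta = 2 * np.pi * f / sr                  # pole angle
    a = [1.0, -2 * r * np.cos(theta), r * r]    # resonator denominator
    signal = lfilter([1.0], a, signal)

signal /= np.max(np.abs(signal))  # normalize; `signal` is now playable audio
```

Writing `signal` to a WAV file and playing it back produces a buzzy, robotic vowel: a crude taste of how a few spectral parameters can become audible speech.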
Here, the vocoder was trained not on human speech but on neural activity from the auditory cortex, measured in patients undergoing brain surgery while they listened to sentences spoken aloud.
With that database to draw on, brain signals recorded while the patients listened to the digits 0 to 9 being read aloud were fed through the vocoder and cleaned up with further AI analysis. The results were found to closely match the sounds that had been heard, even if the final voice is still pretty robotic.
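The study's actual decoder is a deep neural network trained on intracranial recordings, which are not reproduced here. As a rough illustration of the pipeline's shape, the sketch below uses simulated "neural" features and plain ridge regression from scikit-learn in place of the real data and model; every array, dimension, and name is hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy stand-ins: each row is one time frame of "neural" activity
# (e.g. power on 100 electrodes) paired with the matching frame of
# the spectral representation a vocoder would need (32 bands).
n_frames, n_electrodes, n_bands = 2000, 100, 32
true_map = rng.normal(size=(n_electrodes, n_bands))
neural = rng.normal(size=(n_frames, n_electrodes))
spectra = neural @ true_map + 0.1 * rng.normal(size=(n_frames, n_bands))

# Fit the decoder on most frames; hold the rest out as "new" brain data.
split = 1500
decoder = Ridge(alpha=1.0).fit(neural[:split], spectra[:split])

# Decoding step: predict spectral frames for unseen neural activity.
# In the real system, predictions like these would be handed to the
# vocoder to produce audible speech.
predicted = decoder.predict(neural[split:])
score = decoder.score(neural[split:], spectra[split:])
print(f"held-out R^2 of the toy decoder: {score:.2f}")
```

The real pipeline swaps the linear model for a deep network and the simulated arrays for auditory-cortex recordings, but the train-then-decode structure is the same.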
The technique proved much more effective than previous attempts that used simpler computer models on spectrogram images, which are visual representations of sound frequencies over time.
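For reference, a spectrogram of the kind those earlier models predicted can be computed in a couple of lines; the example below uses SciPy on a stand-in waveform.

```python
import numpy as np
from scipy.signal import spectrogram

sr = 16000
t = np.arange(sr) / sr                 # one second of time stamps
y = np.sin(2 * np.pi * 440 * t)        # stand-in waveform: a 440 Hz tone

# Rows are frequency bins, columns are time frames, values are power:
# this is the "image" that the earlier, simpler models tried to predict.
freqs, times, Sxx = spectrogram(y, fs=sr, nperseg=512)
print(Sxx.shape)  # (frequency bins, time frames)
```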
"We found that people could understand and repeat sounds about 75% of the time, which is well beyond previous attempts," Mesgarani said.
"The sensitive vocoder and powerful neural networks represented the sounds that patients initially listened to with surprising accuracy."
There is still a lot of work to be done, but the potential is huge. Again, it should be emphasized that the system does not yet turn actual mental thoughts into spoken words, but it may be able to in time; that is the next challenge the researchers want to take on.
Further down the line, you may even be able to bring your emails up on screen or turn on your smart lights simply by issuing a mental command.
It will take time, though, not least because all of our brains work in slightly different ways: a large amount of training data from each person would be needed to interpret our thoughts accurately.
In the not-too-distant future, this technology could give a voice back to people who cannot speak, whether because they have locked-in syndrome, are recovering from a stroke, or (as in the case of the late Stephen Hawking) live with amyotrophic lateral sclerosis (ALS).
"In this scenario, if the wearer thinks" I need a glbad of water ", our system could use the cerebral signals generated by this thought and turn them into synthesized verbal speech," explains Mesgarani.
"This would change the game.This would give anyone who has lost the ability to speak, that it is an injury or an illness, the renewed chance to connect to the world." who surrounds it. "
The research has been published in Scientific Reports.