An experimental brain-controlled hearing aid can pick out voices in a crowd




The brain is unmatched in its ability to pick out juicy tidbits and attention-grabbing voices from a cacophony of background noise. Hearing aids, however, are notoriously poor at this "cocktail party effect": rather than amplifying the particular voice a listener is attending to, they amplify every sound equally.

On Wednesday, researchers unveiled a possible solution: an experimental hearing aid that reads thoughts. It uses artificial intelligence to separate the sounds of different speakers, detects the brain activity that singles out one of those voices from the others, and amplifies only that voice before delivering the sound to the listener, they explained in Science Advances.


If the technology proves practical – and for that, it will likely have to work without electrodes implanted on the surface of the brain, as the current version requires – it could serve as the basis for a brain-controlled hearing aid that helps people with hearing loss function better in social settings and in a noisy world.

The project, led by electrical engineer Nima Mesgarani of the Zuckerman Mind Brain Behavior Institute at Columbia University, is one of many efforts to bring hearing aids closer to normal hearing. The $500 Bose Hearphones, which pair with a smartphone app, have directional microphones so that users can hear one person better than another, along with controls to dampen, for example, the sound of traffic. But no current device can amplify a chosen conversation among many sources in a crowd, the way the normally hearing brain can.

"Even the most advanced digital hearing aids do not know which voices they should suppress and which voices they should amplify," said Mesgarani.

If they did, it would make a big difference for the hard of hearing, said Roger Miller, who heads the neural prosthesis program at the National Institute on Deafness and Other Communication Disorders, which funded the study. "There is real gold to mine in this hill," he said.

Mesgarani started on the brain side. In 2012, with his graduate advisor, he discovered that when people are in conversation, the listener's brain waves echo the acoustic characteristics of the speaker's voice, boosting its perceived volume and filtering out extraneous voices.

This ability comes from the brain's secondary auditory cortices, one behind each ear. They amplify one voice over the others simply by means of attention, through a process called top-down control. ("Top" refers to an executive function such as conscious attention; "down" to a sensory function, in this case hearing.) The sound of a familiar voice, a familiar word (one's name), an emotionally resonant word (divorce), a particular tone, or some other attention-grabber causes this region to boost the perceived volume of whatever caught its attention.

The brain-controlled hearing aid first separates the audio signals of the different speakers. It then determines each one's spectrogram, or vocal imprint – that is, how the volume and frequency of a voice vary over time. Next, it detects brain waves in the listener's auditory cortex (via an array of implanted electrodes) that indicate which voice the listener is paying attention to. Finally, the system finds that particular voice and amplifies it, and only it. When the listener's attention shifts to another voice, the system quiets the first and dials up the volume on the new one.
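The paper does not publish the system as code, but the steps described above map naturally onto a small signal-processing sketch. The version below is a minimal illustration only: the electrode count, the pre-trained linear decoder, the gain values, and the correlation-based attention test are assumptions made for the example, not details from the study, and the separated voices and neural data are synthetic stand-ins.

```python
# Minimal sketch of the decode-and-amplify pipeline described above.
# Assumed, not from the paper: voices are already separated, a linear
# decoder maps electrode signals to a spectrogram, and attention is
# decided by which voice's spectrogram best matches the reconstruction.
import numpy as np
from scipy.signal import stft

FS = 16_000          # audio sample rate (Hz), assumed
N_ELECTRODES = 64    # hypothetical electrode count
WIN = 512            # STFT window length

def spectrogram(audio):
    """Log-magnitude spectrogram ('vocal imprint') of one separated voice."""
    _, _, z = stft(audio, fs=FS, nperseg=WIN)
    return np.log1p(np.abs(z))             # shape: (freq_bins, time_frames)

def decode_attention(neural, decoder, voices):
    """Pick the voice whose spectrogram best matches the one
    reconstructed from the listener's auditory-cortex activity."""
    reconstructed = decoder @ neural        # (freq_bins, time_frames)
    scores = []
    for v in voices:
        s = spectrogram(v)
        t = min(s.shape[1], reconstructed.shape[1])
        scores.append(np.corrcoef(s[:, :t].ravel(),
                                  reconstructed[:, :t].ravel())[0, 1])
    return int(np.argmax(scores))           # index of the attended voice

def remix(voices, attended, gain_up=4.0, gain_down=0.25):
    """Amplify the attended voice and quiet the others."""
    return sum(gain_up * v if i == attended else gain_down * v
               for i, v in enumerate(voices))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic 'voices' and fake neural data, just to exercise the code.
    voices = [rng.standard_normal(FS), rng.standard_normal(FS)]
    n_freq = WIN // 2 + 1
    n_frames = spectrogram(voices[0]).shape[1]
    neural = rng.standard_normal((N_ELECTRODES, n_frames))
    decoder = rng.standard_normal((n_freq, N_ELECTRODES))  # stands in for a trained decoder
    attended = decode_attention(neural, decoder, voices)
    print("attended speaker:", attended)
    output = remix(voices, attended)        # the remixed audio sent to the ear
```

In a real device this loop would run continuously, so that when the correlation winner changes – that is, when the listener's attention shifts – the gains swap from one voice to the other.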

Three patients with epilepsy who were undergoing brain surgery volunteered to let Dr. Ashesh Mehta of the Institute of Neurology and Neurosurgery at Northwell Health on Long Island, New York, implant the electrode array in their brains. The electrodes recorded brain activity as the participants listened to two speakers talking at the same time, focusing first on one and then on the other, as the scientists instructed. The scientists then identified the distinct brain activity corresponding to attending to each voice.

"The brainwaves of the listeners followed only the voice of the speaker on which they focus," Mesgarani said.

This research joins a growing list of efforts to use brain activity to produce a result the body cannot otherwise manage, such as a paralyzed person moving a mechanical arm or a person with ALS turning thoughts into words.

To find widespread use, the mind-reading hearing aid would have to work via electrodes on the scalp. The Columbia team is working on a scalp version, as well as one with electrodes placed around the ear.

An earlier version of their mind-reading hearing aid worked only on voices it had been trained to recognize, such as those of family members. It could detect and amplify those voices but not unfamiliar ones. The new-generation device "can recognize and decode a voice – any voice – from the start," said Mesgarani.
