Doctors have turned brain signals for speech into written sentences as part of a research project aimed at transforming the way severely disabled patients communicate in the future.
This breakthrough is the first to demonstrate how the intention to say specific words can be extracted from brain activity and converted to text fast enough to keep pace with natural conversation.
In its current form, the brain-reading software works only for certain sentences on which it has been trained, but scientists believe it is a stepping stone towards a more powerful system that can decode, in real time, the words a person intends to say.
Doctors at the University of California, San Francisco, took up the challenge in the hope of creating a product that allows paralysed people to communicate more fluidly than with existing devices, which track eye movements and muscle twitches to control a virtual keyboard.
"To date, there is no voice prosthetic system that allows users to interact in the time scale of a human conversation," said Edward Chang, neurosurgeon and lead researcher of the study published in the journal Nature.
The work, funded by Facebook, was made possible by three patients with epilepsy who were about to undergo neurosurgery. Before their operations, all three had electrodes placed directly on the brain for at least a week to map the origins of their seizures.
During their stay in hospital, the patients, who could all speak normally, agreed to take part in Chang's research. He used the electrodes to record brain activity while each patient was asked nine questions and read aloud from a list of 24 potential answers.
With the recordings in hand, Chang and his team built computer models that learned to match particular patterns of brain activity to the questions the patients heard and the answers they gave. Once trained, the software could identify almost instantly, and from brain signals alone, which question a patient had heard and which answer was given, with 76% and 61% accuracy respectively.
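To give a flavour of the kind of decoding problem the team's models tackle, the sketch below trains a simple classifier to label simulated neural feature vectors as one of a small, fixed set of utterances. It is an illustrative toy only: the feature size, trial counts, noise level and choice of classifier are assumptions, not the study's actual methods or data.

```python
# Toy sketch: classify simulated neural feature vectors into a fixed set of
# utterances. Everything here (feature size, trial counts, classifier) is an
# illustrative assumption, not the UCSF study's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_utterances = 24          # e.g. a fixed list of 24 candidate answers
n_trials_per_class = 30    # hypothetical number of recorded repetitions
n_features = 128           # hypothetical size of a per-trial neural feature vector

# Simulate brain activity: each utterance has its own mean activity pattern,
# and individual trials are noisy samples around that pattern.
class_means = rng.normal(size=(n_utterances, n_features))
X = np.vstack([
    mean + 0.8 * rng.normal(size=(n_trials_per_class, n_features))
    for mean in class_means
])
y = np.repeat(np.arange(n_utterances), n_trials_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# A plain multinomial classifier stands in for the study's decoding models.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print(f"Held-out decoding accuracy: {clf.score(X_test, y_test):.2f}")
```

In this framing, decoding is a closed-set classification task: the system only ever has to pick from phrases it has already seen during training, which is why the researchers describe it as a stepping stone rather than open-ended speech decoding.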
"It's the first time this approach is used to identify words and spoken sentences," said David Moses, a researcher on the team. "It is important to keep in mind that we have achieved this with a very limited vocabulary, but we hope, in the future, to increase the flexibility and accuracy of what we can translate."
Although rudimentary, the system allowed patients to answer questions about the music they liked; how well they were feeling; whether their room was too hot or too cold, or too bright or too dark; and when they would like to be checked on again.
Despite the breakthrough, there are still obstacles to overcome. One challenge is to improve the software so that it can translate brain signals into more varied speech on the fly. This will require algorithms trained on very large amounts of spoken language and corresponding brain-signal data, which may vary from patient to patient.
Another goal is to decode "imagined speech", or sentences spoken only in the mind. For the moment, the system detects brain signals that are sent to move the lips, tongue, jaw and larynx, in other words, the machinery of speech. But for some patients with injuries or neurodegenerative disease, these signals may not be sufficient, and more sophisticated ways of reading sentences in the brain will be needed.
While the work is still in its infancy, Winston Chiong, a UCSF neuroethicist who was not involved in the latest study, said it was important to debate the ethical issues such systems might raise in the future. For example, could a speech neuroprosthesis unintentionally reveal people's most private thoughts?
Chang said it was already hard enough to decode what someone was openly trying to say, and that extracting their inner thoughts was virtually impossible. "I have no interest in developing technology to find out what people are thinking, even if it were possible," he said.
"But if someone wants to communicate and can not do it, I think we, scientists and clinicians, have a responsibility to restore that fundamental human capacity."