Brain Implant Turns Thoughts Into Words To Help Paralyzed Man “Speak” Again





UCSF’s brain-computer interface is surgically implanted directly on a patient’s motor cortex to enable communication.

Ken Probst, UCSF

Facebook’s work on neural input technology for augmented and virtual reality appears to be moving in a more wrist-focused direction, but the company continues to fund research into implanted brain-computer interfaces. The latest phase of a multi-year, Facebook-funded UCSF study called Project Steno translates the attempted speech of a paralyzed patient with a speech impairment into words on a screen.

“This is the first time that someone naturally trying to say words could be decoded into words solely from brain activity,” said Dr. David Moses, lead author of a study published Wednesday in the New England Journal of Medicine. “Hopefully, this is the proof of principle for direct control of a communication device using intended speech attempts as the control signal, by someone who cannot speak, who is paralyzed.”

Brain-computer interfaces (BCIs) have been the source of a number of promising recent breakthroughs, including Stanford research that could turn imagined handwriting into projected text. The UCSF study takes a different approach, analyzing actual attempts at speech and acting almost like a translator.

The study, led by UCSF neurosurgeon Dr. Edward Chang, involved implanting a “neuroprosthesis” of electrodes in a paralyzed man who had suffered a stroke at the age of 20. With the electrode patch implanted over the area of the brain associated with controlling the vocal tract, the man attempted to answer questions displayed on a screen. UCSF’s machine learning algorithms can recognize 50 words and convert them into sentences in real time. For example, if the patient saw a prompt asking, “How are you today?” the response “I am doing very well” appeared on the screen, word by word.
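Conceptually, the pipeline maps each window of recorded neural activity to the most likely word from a small fixed vocabulary, then emits the sentence word by word. The toy sketch below illustrates that idea only; the vocabulary, the 16-channel feature vectors, and the nearest-centroid classifier are all invented stand-ins, not UCSF’s actual model (which uses deep learning plus a language model over its 50-word vocabulary).

```python
import numpy as np

# Hypothetical 9-word vocabulary standing in for the study's 50 words.
VOCAB = ["I", "am", "doing", "very", "well", "how", "are", "you", "today"]
rng = np.random.default_rng(0)

# Pretend each word evokes a characteristic 16-channel activity pattern.
CENTROIDS = {word: rng.normal(size=16) for word in VOCAB}

def decode_word(activity: np.ndarray) -> str:
    """Return the vocabulary word whose template best matches the activity."""
    return min(CENTROIDS, key=lambda w: np.linalg.norm(activity - CENTROIDS[w]))

def decode_sentence(windows: list) -> str:
    """Decode a sequence of activity windows into text, word by word."""
    return " ".join(decode_word(w) for w in windows)

# Simulate a noisy attempt to say "I am doing very well".
attempt = [CENTROIDS[w] + rng.normal(scale=0.1, size=16)
           for w in ["I", "am", "doing", "very", "well"]]
print(decode_sentence(attempt))
```

Constraining the output to a small closed vocabulary is what makes real-time decoding tractable: the classifier only has to separate a few dozen activity patterns rather than transcribe arbitrary speech.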

Moses clarified that the work aims to continue beyond Facebook’s funding phase and that the research still has a long way to go. At this time, it is still unclear to what extent the speech recognition draws on recorded patterns of brain activity, vocal utterances, or a combination of the two.

Moses is quick to point out that the study, like other BCI work, is not mind reading: it relies on detecting the brain activity that occurs specifically when the patient tries to engage in a certain behavior, such as talking. Moses also says that the UCSF team’s work does not yet translate to non-invasive neural interfaces. Elon Musk’s Neuralink promises wireless transmission of data from electrodes implanted in the brain for future research and assistive use, but so far that technology has only been demonstrated in a monkey.


Facebook Reality Labs’ head-mounted BCI research prototype, which uses no implanted electrodes, is now open source.

Facebook

Meanwhile, Facebook Reality Labs Research has moved away from brain-computer interfaces for future VR/AR headsets, focusing in the near term on wrist-worn devices based on technology acquired from CTRL-Labs. Facebook Reality Labs had its own prototype non-invasive research headsets for studying brain activity, and the company has announced plans to make them available to open-source research projects as it shifts its focus away from head-mounted neural hardware. (UCSF receives funding from Facebook, but no hardware.)

“Aspects of the head-mounted optical work will be applicable to our wrist EMG research. We will continue to use optical BCI as a research tool to build better wrist-based sensor models and algorithms. While we will continue to take advantage of these prototypes in our research, we are no longer developing a head-mounted optical BCI device to sense speech production. That is one of the reasons we will be sharing our head-mounted hardware prototypes with other researchers, who can apply our innovation to other use cases,” a Facebook representative confirmed by email.

However, neural input technology for consumers is still in its infancy. While there are consumer devices that use non-invasive head- or wrist-worn sensors, they are currently far less accurate than implanted electrodes.
