Scientists synthesize speech directly from the brain – TechCrunch




In a feat that could eventually restore the ability to speak to people with serious illnesses, scientists have succeeded in recreating the words of healthy subjects directly from their brain activity. The technology is still a long way off, but the science is real and the promise is there.

Edward Chang, a neurosurgeon at UC San Francisco and co-author of the paper published today in Nature, explained the impact of the team's work in a press release: "For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual's brain activity. This is an exhilarating proof of principle that, with technology that is already within our reach, we should be able to build a device that is clinically viable in patients with speech loss."

To be perfectly clear, this is not a magic machine you sit down in that translates your thoughts into words. It is a complex and invasive process that decodes not exactly what the subject was thinking, but what they were actually saying.

Led by speech scientist Gopala Anumanchipalli, the experiment was performed on subjects who already had large electrode arrays implanted in their brains for a different medical procedure. The researchers had these lucky volunteers read several hundred sentences aloud while recording the signals detected by the electrodes.

The electrode array in question

You see, it happens that researchers know a certain pattern of brain activity occurs after words are thought of and arranged (in cortical regions like Wernicke's and Broca's areas) and before the final signals are sent from the motor cortex to the muscles of the tongue and mouth. There is a sort of intermediate signal between those, which Anumanchipalli and his co-author, graduate student Josh Chartier, had previously characterized, and which they believed could serve for the purpose of reconstructing speech.

Analyzing the audio directly let the team determine which muscles and movements would be involved at which moments (this is fairly well-established science), and from that they built a kind of virtual model of the person's vocal system.

They then mapped the brain activity detected during the session onto this virtual model using a machine learning system, essentially allowing a recording of a brain to control a recording of a mouth. It is important to understand that this is not about transforming abstract thoughts into words, but about understanding the concrete instructions the brain sends to the muscles of the face, and determining from those which words the movements would form. It's brain reading, but it isn't mind reading.
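The two-stage pipeline described above can be sketched in miniature. The study itself used recurrent neural networks; in this illustrative example, simple ridge regression stands in for each learned mapping, and all array shapes, feature counts, and names are assumptions for the sake of a runnable toy, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, Y, alpha=1.0):
    """Learn a linear map X -> Y with L2 regularization (closed form)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Toy stand-ins: 200 time steps of simulated recordings.
neural = rng.normal(size=(200, 64))                        # electrode features
kinematics = neural @ rng.normal(size=(64, 12)) * 0.1      # articulator movements
acoustics = kinematics @ rng.normal(size=(12, 32)) * 0.1   # speech spectral features

# Stage 1: brain activity -> vocal-tract movements (the intermediate signal).
W1 = fit_ridge(neural, kinematics)
# Stage 2: movements -> speech acoustics (the virtual vocal tract).
W2 = fit_ridge(kinematics, acoustics)

# Decoding runs neural data through both stages in sequence.
decoded = (neural @ W1) @ W2
err = np.mean((decoded - acoustics) ** 2)
print(f"reconstruction MSE: {err:.4f}")
```

The design point the sketch captures is the article's key distinction: the model never maps thoughts to words directly; it only chains "brain to muscle commands" with "muscle commands to sound."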

The resulting synthetic speech, while not perfectly clear, is certainly intelligible. And, properly configured, it could be capable of producing 150 words per minute from a person who would otherwise be unable to speak.

"We still have some way to go to perfectly imitate spoken language," said Chartier. "Nevertheless, the precision levels we have produced here would be an incredible improvement in real-time communication compared to what is currently available."

For comparison, a person so affected, for example by a degenerative muscular disease, often has to speak by spelling out words letter by letter with their gaze. Picture 5 to 10 words per minute, and even slower with other methods for people with more severe disabilities. It's a miracle they can communicate at all, but this time-consuming and unnatural method is a far cry from the speed and expressiveness of real speech.

If a person were able to use this method, they would be far closer to ordinary speech, though perhaps at the cost of some accuracy. But it is not a miracle solution.

The problem with this method is that it requires a large amount of carefully collected data from a healthy speech system, from the brain to the tip of the tongue. For many people, it is no longer possible to collect this data, and for others the invasive method of collection will make it impossible for a doctor to recommend. And conditions that have prevented a person from ever speaking prevent this method from working as well.

The good news is that this is a start, and there are plenty of conditions for which it would, theoretically, work. And collecting that critical brain and speech recording data could be done preventively in cases where a stroke or degeneration is judged to be a risk.
