When you decide to continue reading this article, you may change your mind several times. While your final choice will be obvious to an observer – you’ll keep scrolling and read on, or click over to another article – any internal deliberation along the way will likely be impenetrable to anyone but you. That covert vacillation is at the center of research, published Jan. 20 in Nature, by researchers at Stanford University who are studying how cognitive deliberation is reflected in neural activity.
These scientists and engineers developed a system that read and decoded the activity of monkey brain cells while the animals judged whether an animation of dots was drifting slightly to the left or to the right. The system successfully revealed the monkeys’ ongoing decision-making process in real time, including the ebb and flow of indecision along the way.
“I was just looking at the decoded activity trace on the screen, not knowing which direction the dots were moving or what the monkey was doing, and I could tell Sania [Fong], the lab manager, ‘He’s going to choose right,’ a few seconds before the monkey initiated the movement to report that same choice,” recalled Diogo Peixoto, a former postdoctoral fellow in neurobiology and co-lead author of the paper. “I was right 80 to 90 percent of the time, and that really proved this was working.”
In later experiments, the researchers were even able to influence the monkeys’ final decisions through subliminal manipulations of dot movement.
“Basically, much of our cognition is due to ongoing neural activity that is not overtly reflected in behavior, so what’s exciting about this research is that we’ve shown that we can now identify and interpret some of these covert internal neural states,” said the study’s senior author, William Newsome, the Harman Family Provostial Professor in the Department of Neurobiology at the Stanford University School of Medicine.
“We are opening a window onto a world of cognition that has been opaque to science until now,” added Newsome, who is also the Vincent V.C. Woo Director of the Wu Tsai Neurosciences Institute.
One decision at a time
Neuroscience studies of decision making have typically involved estimating the average activity of populations of brain cells across hundreds of trials. But that approach overlooks the intricacies of a single decision and the fact that every instance of decision making is slightly different: the myriad factors that influence whether you choose to read this article today will differ from those that would sway you if you faced the same decision tomorrow.
“Cognition is really complex, and when you average over many trials, you miss important details about how we come to our perceptions and how we make our choices,” said Jessica Verhein, an MD/PhD student in neuroscience and co-lead author of the paper.
For these experiments, the monkeys were fitted with a neural implant about the size of a pinky fingernail that reported the activity of 100 to 200 individual neurons every 10 milliseconds while the animals watched digital dots drifting across a screen. The researchers placed this implant in the dorsal premotor cortex and the primary motor cortex because, in previous research, they had found that neural signals from these brain areas convey the animals’ decisions and their confidence in those decisions.
Each moving-dots video was unique and lasted less than two seconds, and the monkeys reported their decisions about whether the dots moved right or left only when prompted – a correct answer given at the right moment earned a juice reward. The monkeys indicated their choices clearly, by pressing a right or left button on the screen.
In the monkeys’ brains, however, the decision process was less obvious. Neurons communicate in rapid bursts of electrical signals that occur amid a flurry of other activity in the brain. But Peixoto was able to predict the monkeys’ choices easily, in part because the activity measurements he saw had first been fed through a signal processing and decoding pipeline based on years of work by the lab of Krishna Shenoy, the Hong Seh and Vivian W. M. Lim Professor in the School of Engineering, professor, by courtesy, of neurobiology and of bioengineering, and a Howard Hughes Medical Institute investigator.
Shenoy’s team had used their real-time neural decoding technique for other purposes. “We always try to help people with paralysis by reading out their intentions. For example, they can think about how they want to move their arms, and then that intention is run through the decoder to move a computer cursor on a screen and type out messages,” said Shenoy, a co-author of the paper. “So we are constantly measuring neural activity, decoding it millisecond by millisecond, and then acting on that information quickly.”
In this study, instead of predicting an immediate arm movement, the researchers wanted to predict the intention behind a future choice that would later be reported by an arm movement – which required a new algorithm. Inspired by the work of Roozbeh Kiani, a former postdoctoral researcher in the Newsome lab, Peixoto and his colleagues developed an algorithm that takes the noisy signals from groups of neurons in the dorsal premotor cortex and primary motor cortex and reinterprets them as a “decision variable.” This variable describes the activity that occurs in the brain before a decision to move.
“With this algorithm, we can decode the monkey’s ultimate decision before it moves its finger, let alone its arm,” Peixoto said.
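To picture what such a readout might look like, here is a minimal Python sketch, assuming a simple linear mapping from binned spike counts to a scalar decision variable; the neuron count is taken from the range in the article, but the weights, smoothing, and simulated data are invented placeholders, not the decoding pipeline described in the paper.

```python
import numpy as np

# Hypothetical sketch: a linear readout that turns binned spike counts from a
# recorded neural population into a scalar "decision variable". Positive values
# stand for evidence toward "right", negative toward "left". Weights, smoothing,
# and the simulated data are placeholders, not the study's actual pipeline.

N_NEURONS = 150           # within the 100-200 neurons reported in the article
BIN_MS = 10               # activity was reported every 10 milliseconds

rng = np.random.default_rng(0)
weights = rng.normal(size=N_NEURONS)   # in practice, fit on training trials

def update_decision_variable(dv_prev, spike_counts, alpha=0.2):
    """Blend the previous decision variable with the current bin's readout."""
    instantaneous = float(weights @ spike_counts)
    return (1 - alpha) * dv_prev + alpha * instantaneous

# Simulated stream of 10-ms spike-count bins standing in for the implant.
dv = 0.0
for _ in range(200):                    # roughly 2 seconds of bins
    counts = rng.poisson(lam=1.0, size=N_NEURONS)
    dv = update_decision_variable(dv, counts)
    # In a real-time setting, dv would be inspected here, every 10 ms.
```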
Three experiments
The researchers hypothesized that more positive values of the decision variable indicated greater confidence from the monkey that the dots were moving to the right, while more negative values indicated confidence that the dots were moving to the left. To test this hypothesis, they ran two experiments: one in which they stopped the trial as soon as the decision variable hit a certain threshold, and another in which they stopped it when the variable appeared to indicate a sharp reversal of the monkey’s decision.
In the first experiment, the researchers stopped trials at five randomly selected decision-variable levels, and at the highest positive or negative levels, the variable predicted the monkey’s final decision with about 98 percent accuracy. The predictions in the second experiment, in which the monkey had likely changed its mind, were almost as accurate.
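The two stopping rules can be pictured with a short sketch like the one below, assuming access to the streaming decision variable from the earlier snippet; the threshold levels and the change-of-mind criterion are hypothetical, chosen only to illustrate the logic, not taken from the experimental protocol.

```python
# Hypothetical illustration of the two stopping rules, not the actual protocol.

LEVELS = [0.5, 1.0, 1.5, 2.0, 2.5]      # invented magnitudes for experiment 1

def stop_at_threshold(dv, level):
    """Experiment-1-style rule: stop once |DV| reaches the selected level."""
    return abs(dv) >= level

def stop_on_reversal(dv_trace, min_swing=1.0):
    """Experiment-2-style rule: stop when the DV flips sign with a large swing,
    taken here as a stand-in for a sharp change of mind."""
    if len(dv_trace) < 2:
        return False
    flipped = dv_trace[-1] * dv_trace[-2] < 0
    return flipped and abs(dv_trace[-1] - dv_trace[-2]) > min_swing

def predicted_choice(dv):
    """Read the predicted report off the sign of the decision variable."""
    return "right" if dv > 0 else "left"
```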
Before the third experiment, the researchers checked how many dots they could add during a trial before the monkey noticed the change in the stimulus. Then, in the experiment itself, they added dots below that perceptible threshold to see whether they would influence the monkey’s decision subliminally. And, although the new dots were very subtle, they sometimes biased the monkey’s choice toward the direction in which they moved. The influence of the new dots was stronger when they were added early in the trial and whenever the monkey’s decision variable was low – indicating a low level of certainty.
“This last experiment, led by Jessie [Verhein], really allowed us to rule out some of the common models of decision making,” Newsome said. One such model holds that people and animals make decisions based on the cumulative amount of evidence gathered over a trial. But if that were true, the bias the researchers introduced with the new dots should have had the same effect regardless of when it was introduced. Instead, the results seemed to support an alternative model, which holds that if a subject is confident enough in a decision forming in their mind, or has spent too long deliberating, they are less inclined to consider new evidence.
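A toy contrast between the two classes of models makes the logic concrete; the values below are invented, and the sketch only illustrates why a late, subtle evidence pulse shifts the outcome in a pure accumulation model but not in one where the subject commits once a confidence bound is reached.

```python
# Toy comparison of the two model classes discussed above (values invented).

def accumulate(evidence, bound=None):
    """Sum momentary evidence; with a bound, stop integrating once committed."""
    total = 0.0
    for sample in evidence:
        if bound is not None and abs(total) >= bound:
            break                        # committed: later evidence is ignored
        total += sample
    return total

# Weak rightward evidence followed by a late, subtle leftward pulse.
stream = [0.2] * 10 + [-0.5, -0.5]

print(accumulate(stream))               # pure accumulation: the pulse shifts the sum
print(accumulate(stream, bound=1.5))    # bounded commitment: the pulse arrives too late
```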
New questions, new opportunities
Already, Shenoy’s lab is repeating these experiments with human participants who have neural dysfunction and use these same neural implants. Because of the differences between human and non-human primate brains, the results could be surprising.
Potential applications of this system beyond the study of decision making include investigations of visual attention, working memory and emotion. The researchers believe their key technological advance – monitoring and interpreting covert cognitive states through real-time neural recordings – should prove useful for cognitive neuroscience in general, and they are excited to see how other researchers build on their work.
“The hope is that this research will capture the interest of some undergraduates or new graduate students, who will get involved in these questions and carry the ball forward for the next 40 years,” Shenoy said.
###
Stanford co-authors include former postdoctoral fellows Roozbeh Kiani (now at New York University), Jonathan C. Kao (now at the University of California, Los Angeles) and Chandramouli Chandrasekaran (now at Boston University); Paul Nuyujukian, assistant professor of bioengineering and of neurosurgery; former lab manager Sania Fong and researcher Julian Brown (now at UCSF); and Stephen I. Ryu, assistant professor of electrical engineering (also head of neurosurgery at the Palo Alto Medical Foundation). Newsome, Nuyujukian and Shenoy are also members of Stanford Bio-X and the Wu Tsai Neurosciences Institute.
This research was funded by the Champalimaud Foundation, Portugal; the Howard Hughes Medical Institute; the National Institutes of Health through the Stanford Medical Scientist Training Program; the Simons Foundation Collaboration on the Global Brain; a Pew Fellowship in the Biomedical Sciences; the National Institutes of Health (including an NIH Director’s Pioneer Award); a McKnight Scholars Award; the National Science Foundation; the National Institute on Deafness and Other Communication Disorders; the National Institute of Neurological Disorders and Stroke; the Defense Advanced Research Projects Agency Biological Technologies Office (NeuroFAST award); and the Office of Naval Research.