A team of researchers from MIT and Massachusetts General Hospital recently published a study linking social awareness to individual neural activity. To our knowledge, this is the first time that evidence for “theory of mind” has been identified on this scale.
Measuring the activity of large groups of neurons is the bread and butter of neuroscience. Even a simple MRI scan can highlight brain regions and give scientists an indication of what they are used for and, in many cases, what kind of thoughts are occurring. But understanding what is going on at the level of a single neuron is an entirely different feat.
According to the paper:
Here, using recordings of single cells in the human dorsomedial prefrontal cortex, we identify neurons that reliably encode information about the beliefs of others across a wide variety of scenarios and that distinguish representations related to oneself from those related to others' beliefs … These results reveal a detailed cellular process in the human dorsomedial prefrontal cortex for representing the beliefs of others and identify candidate neurons that could support theory of mind.
In other words: the researchers believe they’ve observed individual neurons forming the patterns that cause us to think about what others might be feeling and thinking. They’ve identified empathy in action.
This could have a huge impact on brain research, particularly in the area of mental illness and social anxiety disorders or in the development of personalized treatments for people with autism spectrum disorders.
However, perhaps the most interesting thing about the work is what this kind of neuronal teamwork could teach us about consciousness.
The researchers asked 15 patients who were scheduled to undergo a specific type of brain surgery (unrelated to the study) to answer a few questions and undergo a simple behavioral test. According to a press release from Massachusetts General Hospital:
Microelectrodes inserted into the dorsomedial prefrontal cortex recorded the behavior of individual neurons as patients listened to short stories and answered questions about them. For example, participants were presented with this scenario to assess how they viewed other people’s beliefs about reality: “You and Tom see a pot on the table. After Tom leaves, you move the pot to a cabinet. Where does Tom think the pot is?”
Participants had to make inferences about the beliefs of others after hearing each story. The experiment did not alter the planned surgical approach or affect clinical care.
The experiment essentially took a big concept (brain activity) and narrowed it down as far as possible, to the level of individual neurons. By adding this layer of knowledge to our collective understanding of how individual neurons communicate and work together to produce something as complex as a theory of other minds, it may become possible to identify and quantify other neural systems in action using similar experimental techniques.
It would, of course, be impossible for human scientists to stimulate, observe, and label 100 billion neurons by hand, if for no other reason than that it would take thousands of years just to count them, let alone watch each one respond to stimulation.
Fortunately, we’ve entered the age of artificial intelligence, and if there’s one thing that artificial intelligence is good at, it’s doing really monotonous things, like labeling 80 billion individual neurons, very quickly.
It’s not hard to imagine the Massachusetts team’s methodology being automated. Although the current iteration appears to require invasive sensors (hence the use of volunteers who were already scheduled for brain surgery), it is certainly possible that such fine-grained readings could one day be obtained with an external device.
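To make the idea concrete, here is a minimal sketch of what the automated part of such a pipeline might look like: a standard machine-learning classifier trying to decode, from per-trial firing rates, whether a question concerned the participant’s own belief or someone else’s. Everything below is illustrative; the data is synthetic, and the model, library choices, and numbers are assumptions rather than anything taken from the study.

```python
# Illustrative only: synthetic firing-rate data and a generic linear decoder,
# not the researchers' actual analysis pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 12

# Simulated firing rates (spikes per second) for each neuron on each trial.
firing_rates = rng.poisson(lam=5.0, size=(n_trials, n_neurons)).astype(float)

# Labels: 0 = question about the participant's own belief,
#         1 = question about another person's belief.
labels = rng.integers(0, 2, size=n_trials)

# Pretend a few neurons fire slightly more on "other-belief" trials,
# loosely mimicking the kind of selectivity the paper describes.
selective_neurons = [2, 5, 9]
firing_rates[np.ix_(labels == 1, selective_neurons)] += 2.0

# Can the belief condition be read out from the population's firing rates?
decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, firing_rates, labels, cv=5)
print(f"Mean cross-validated decoding accuracy: {scores.mean():.2f}")
```

In practice the inputs would come from spike-sorted recordings rather than simulated Poisson counts, and any decoding accuracy would need to be compared against chance with proper statistical controls.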
The ultimate goal of such a system would be to identify and map every neuron in the human brain as it functions in real time. It would be like seeing a hedge maze from a hot-air balloon after an eternity spent lost in its twists and turns.
This would give us a god’s-eye view of consciousness in action and, potentially, allow us to reproduce it more faithfully in machines.
Published January 27, 2021 – 20:34 UTC