The researchers designed an algorithm, called XDREAM, that generated images that made neurons fire more strongly than any of the natural images the researchers tested. As the images evolved, they began to look like distorted versions of real-world stimuli. The work appears May 2 in the journal Cell.
"When they were given this tool, the cells began to increase their rate of fire beyond the levels we had seen before, even with normal preselected images to achieve the highest firing rates," said the spokesman. -First author, Carlos Ponce, then postdoctoral fellow of the lab of the main writer Margaret Livingstone at Harvard Medical School and now a faculty member of Washington University in St. Louis.
"What began to emerge in each experience are images that are reminiscent of world forms but are not real objects in the world," he explains. "We saw something that looked more like the language used by the cells with each other."
Researchers know that neurons in the visual cortex of the primate brain respond to complex images, such as faces, and that most neurons are quite selective in their image preferences. Previous studies of neuronal preferences used many natural images to determine which ones made neurons fire most. However, this approach is limited by the fact that one cannot present all possible images to work out what will best stimulate the cell.
The XDREAM algorithm uses a neuron's firing rate to guide the evolution of a new synthetic image. It cycles through a series of images over minutes, mutates them, combines them, and then displays a new series of images. At first, the images looked like noise, but gradually they evolved into face-like shapes or something recognizable in the animal's environment, such as the food hopper in the animal's room or familiar people wearing surgical scrubs. The algorithm was developed by Will Xiao in Gabriel Kreiman's laboratory at Children's Hospital and tested on real neurons at Harvard Medical School.
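The closed loop described above — show images, record firing rates, keep the best, mutate and recombine, repeat — can be sketched as a simple genetic algorithm. The real XDREAM pairs this loop with a deep generative network and live recordings; the toy sketch below only illustrates the principle under stated assumptions: `firing_rate` is a stand-in neuron (a dot product with a hidden preferred pattern, not real physiology), "images" are tiny flat vectors, and selection is plain truncation. All names and parameters here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_SIZE = 16 * 16       # tiny grayscale "images" as flat vectors
POP_SIZE = 20            # images shown per generation
N_GENERATIONS = 50
MUTATION_RATE = 0.1

# Stand-in for a recorded neuron: it responds most to a fixed hidden
# preferred pattern. This is a similarity score, not real physiology.
preferred = rng.uniform(-1, 1, IMG_SIZE)

def firing_rate(image):
    """Simulated response: higher when the image matches the neuron's preference."""
    return float(preferred @ image)

def evolve(pop):
    """One XDREAM-style generation: score, select, recombine, mutate."""
    rates = np.array([firing_rate(img) for img in pop])
    # Keep the top half as parents (truncation selection).
    parents = pop[np.argsort(rates)[-POP_SIZE // 2:]]
    children = []
    while len(children) < POP_SIZE:
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(IMG_SIZE) < 0.5            # uniform crossover
        child = np.where(mask, a, b)
        mutate = rng.random(IMG_SIZE) < MUTATION_RATE
        child = np.where(mutate, child + rng.normal(0, 0.2, IMG_SIZE), child)
        children.append(np.clip(child, -1, 1))
    return np.array(children)

population = rng.uniform(-1, 1, (POP_SIZE, IMG_SIZE))   # starts as noise
start = max(firing_rate(img) for img in population)
for _ in range(N_GENERATIONS):
    population = evolve(population)
end = max(firing_rate(img) for img in population)
print(end > start)   # evolved images drive the model neuron harder
```

Because only the firing rate steers the search, the loop needs no assumptions about what the cell prefers — which is why, as the next quotes note, the evolved images can be shapes that exist nowhere in the training set or in the world.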
"The big advantage of this approach is that it allows the neuron to create its own favorite images from scratch, using an unrestricted tool, able to create anything in the world or even things that do not exist in the world, "says Ponce.
"In this way, we have developed a super stimulus that drives the cell better than any natural stimulus we can guess," says Livingstone. "This approach allows you to use artificial intelligence to determine what triggers the best neurons." It's a totally unbiased way of asking the cell what it really wants, what which would trigger it the most. "
From this study, the researchers see evidence that the brain learns to abstract the statistically relevant features of its world. "We find that the brain analyzes the visual scene, guided by experience, extracting information that is important to the individual over time," says Ponce. "The brain adapts to its environment and encodes environmentally important information in unpredictable ways."
The team believes this technique can be applied to any neuron in the brain that responds to sensory information, such as auditory neurons, hippocampal neurons, and prefrontal cortex neurons, where memories may be accessed. "This is important because, as artificial intelligence researchers develop models that work as well as the brain — or better — we will still need to understand which networks are more likely to behave safely and advance human goals," said Ponce. "More efficient artificial intelligence can be grounded in knowing how the brain works."
This work was funded by grants from the National Institutes of Health and the National Science Foundation.
Story source:
Materials provided by Cell Press. Note: Content may be edited for style and length.