Artificial intelligence helps determine what visual neurons like to see



A new computer program uses artificial intelligence to determine what visual neurons like to see. This approach could lead to a better understanding of learning disabilities, autism spectrum disorders and other neurological disorders.

Why are our eyes drawn more to certain shapes, colors and silhouettes than to others?

For more than half a century, researchers have known that neurons in the brain's visual system respond unequally to different images – a characteristic essential to the ability to recognize, understand, and interpret the multitude of visual cues that surround us. For example, specific populations of visual neurons in an area of the brain called the inferior temporal cortex fire more when people or other primates – animals with highly developed visual systems – look at faces, places, objects or text. But exactly what these neurons respond to has remained unclear.

Now, a small study in macaques led by researchers at Harvard Medical School's Blavatnik Institute has yielded valuable clues, based on an artificial intelligence system that can reliably determine what neurons in the brain's visual cortex prefer to see.

A report on the team's work was published today in Cell.

The vast majority of experiments conducted so far to try to measure neural preferences have used real images. But real images have an inherent bias: they are limited to the stimuli available in the real world and the images that researchers choose to test. The AI-based program overcomes this hurdle by creating synthetic images tailored to the preferences of each neuron.

Will Xiao, a graduate student in the Department of Neurobiology at Harvard Medical School, designed a computer program that uses a form of responsive artificial intelligence to create images that adjust themselves based on neural responses recorded from six macaque monkeys. To do this, he and his colleagues measured the firing rates of individual visual neurons in the animals' brains while the animals looked at images on a computer screen.

Over the course of a few hours, the animals were shown images generated by Xiao's program in 100-millisecond flashes. The images began as random grayscale texture patterns. Guided by how often the monitored neuron fired, the program gradually introduced shapes and colors, morphing over time into a final image that embodied the neuron's preference. Because each of these images is synthetic, says Xiao, the approach avoids the biases that researchers have traditionally introduced by using only natural images.
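The closed-loop procedure described above can be sketched in toy form: an evolutionary search over the latent codes of a generative network, with the neuron's firing rate serving as the fitness signal. Everything below is an illustrative stand-in, not the team's actual implementation – the "decoder" is a trivial placeholder for a real generative image network, and the "neuron" is simulated with a hidden preferred pattern the optimizer cannot see directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a generative image network: maps a
# latent code to a grayscale "image" (here just a pixel vector).
def generate_image(latent):
    return np.tanh(latent)

# Simulated neuron: fires hardest for a hidden preferred pattern
# (in the real experiment, this signal comes from a recording electrode).
preferred = np.tanh(rng.normal(size=64))
def firing_rate(image):
    return float(image @ preferred)

def evolve(generations=100, pop_size=20, dim=64, sigma=0.3):
    # Start from random latent codes (random texture patterns).
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        rates = np.array([firing_rate(generate_image(z)) for z in pop])
        # Keep the codes whose images drove the neuron hardest ...
        elite = pop[np.argsort(rates)[-pop_size // 2:]]
        # ... and mutate them to form the next generation.
        children = elite + sigma * rng.normal(size=elite.shape)
        pop = np.vstack([elite, children])
    rates = np.array([firing_rate(generate_image(z)) for z in pop])
    # The survivor is the "super stimulus" for this (simulated) neuron.
    return generate_image(pop[np.argmax(rates)])

best = evolve()
```

After enough generations, the evolved image drives the simulated neuron far harder than any random starting texture does, which is the essence of letting the neural response itself steer the image search.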

"At the end of every experiment," he said, "this program generates a super stimulus for these cells."

Senior researcher Margaret Livingstone notes that across separate experimental runs, specific neurons consistently drove the program to evolve images that were not identical but were remarkably similar.

Some of these images were what Livingstone, the Takeda Professor of Neurobiology at HMS, and her colleagues had expected. For example, a neuron they suspected might respond to faces evolved round pink images with two large black dots resembling eyes. Others were more surprising. A neuron in one of the animals consistently generated images that looked like the body of a monkey, but with a red splotch near the neck. The researchers eventually realized that this monkey was housed near another monkey that wore a red collar.

"We think this neuron responded preferentially not just to monkey bodies, but to a specific monkey," Livingstone said.

Not all of the final images looked like anything recognizable, Xiao added. One monkey's neuron evolved a small black square. Another evolved an amorphous black shape with orange below it.

Livingstone notes that research in her lab and others has shown that the responses of these neurons are not innate – they are learned through constant exposure to visual stimuli over time. Exactly when during development this ability to recognize certain images and fire preferentially in response to them arises is not yet known, Livingstone said. She and her colleagues plan to investigate this question in future studies.

Learning how the visual system responds to images is essential to understanding the basic mechanisms underlying cognitive problems ranging from learning disabilities to autism spectrum disorders, which are often marked by impairments in a child's ability to recognize faces and process facial cues.

"Malfunctions in the brain's visual processing machinery can adversely affect a child's ability to connect, communicate and interpret basic cues," Livingstone said. "By studying the cells that respond preferentially to faces, for example, we might discover clues about how social development happens and what can sometimes go wrong."

The research was funded by the National Institutes of Health and the National Science Foundation.

Publication: Carlos R. Ponce, et al., "Evolving Images for Visual Neurons Using a Deep Generative Network Reveals Coding Principles and Neuronal Preferences," Cell, 2019; doi: 10.1016/j.cell.2019.04.005
