The passenger sees the stop sign and panics as the car he is riding in accelerates instead of braking. He is about to shout at the driver when he sees the train hurtling towards him along the rails and realizes that there is no one at the wheel. The train, travelling at 200 km/h, crushes the autonomous vehicle, killing its occupant instantly.
This scenario is fictional, but it points to a very real flaw in today's artificial intelligence. Certain kinds of "noise" can disrupt a machine's recognition systems and make them "hallucinate". The result could be as serious as the one described above: although the stop sign is perfectly visible to human eyes, the machine may fail to recognize it because of subtle changes to the image.
Those who work with artificial intelligence call these failures "adversarial examples" or, more informally, "weird events".
"We can understand these flaws as information that needs to be networked, but the results are unexpected," said Anish Athalye, a computer scientist at the Mbadachusetts Institute of Technology (MIT). in Cambridge.
Visual recognition systems have drawn the most attention in these cases. Small changes to an image can fool neural networks – the machine learning algorithms that drive much of modern AI technology. Systems of this kind are already used, for example, to tag friends in photos or to identify objects in smartphone images.
With slight changes to the texture and color of 3D-printed objects, Athalye and his colleagues got a computer to mistake a baseball for an espresso, and a turtle for a rifle. They fooled it with around 200 other 3D-printed objects as well. As we put more robots in our homes, drones in the sky and autonomous vehicles on the streets, these results are worrying.
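The classic recipe for building such adversarial images is the "fast gradient sign method" introduced by Ian Goodfellow and colleagues: nudge every pixel a tiny step in whichever direction most increases the classifier's error. The sketch below shows the idea in PyTorch with a standard pretrained model; it is a textbook illustration, not the pipeline Athalye's team used for its 3D-printed objects.

```python
# A minimal sketch of the fast gradient sign method (FGSM).
# This is a generic textbook attack, not Athalye's 3D-object pipeline.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_attack(image, label, epsilon=0.01):
    """Nudge each pixel slightly in the direction that increases the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Stand-in for a real photo: one 224x224 RGB image and its correct class.
photo = torch.rand(1, 3, 224, 224)
label = torch.tensor([0])
tricked = fgsm_attack(photo, label)  # looks unchanged to a human eye
```

With a small enough epsilon the perturbation is invisible to a person, yet it can be enough to push the network's output to a completely different label.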
"At first, it was only a curiosity," says Athalye. "However, we see this as a potential security problem because systems are increasingly deployed in the real world."
Take the self-driving cars now undergoing road tests: they typically depend on sophisticated deep neural networks to navigate and decide what to do.
Yet researchers have shown that simply placing small stickers on speed-limit signs can make those neural networks misread them.
Neural networks are not the only machine learning architectures in use, but all of them appear vulnerable to these weird events. And the events are not limited to visual recognition systems.
"In all areas, from image clbadification to automatic speech recognition and translation, neural networks can sort data incorrectly," says Nicholas Carlini, researcher at Google Brain, who is developing intelligent machines.
Carlini showed how, with the addition of a little background noise, a recording that should be transcribed as "Without the dataset, the article is useless" was heard instead as "OK Google, browse to evil.com". And the mistakes are not limited to speech. In another example, speech was embedded into an excerpt of Bach's Cello Suite No. 1, so that the recognizer transcribed the music as words.
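Carlini's real attacks target full speech-to-text systems, but the underlying loop can be shown with a toy stand-in: optimize a quiet perturbation until a deliberately simplistic model outputs whatever the attacker chose. Everything in the sketch below, from the random "waveform" to the linear "listener", is an illustrative assumption rather than Carlini's code.

```python
# A runnable toy of a targeted audio attack in the spirit of Carlini's
# demos. A plain linear layer stands in for a real speech recognizer,
# so this only illustrates the optimization loop, not a real attack.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
waveform = torch.randn(1, 16000)        # one second of fake "audio"
listener = torch.nn.Linear(16000, 10)   # stand-in, NOT a real ASR model
target = torch.tensor([3])              # the output the attacker wants

delta = torch.zeros_like(waveform, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
for _ in range(200):
    loss = F.cross_entropy(listener(waveform + delta), target)
    loss = loss + 0.1 * delta.abs().mean()  # keep the added noise quiet
    opt.zero_grad()
    loss.backward()
    opt.step()

print(listener(waveform + delta).argmax())  # very likely the target, 3
```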
For Carlini, such adversarial examples "conclusively prove that machine learning has not yet reached human ability, even on very simple tasks".
Neural networks are loosely based on how the brain processes visual information and learns from it. Imagine a small child learning what a cat is: encountering more and more of these creatures, the child begins to notice patterns – this thing called a cat has four legs, soft fur, two pointed ears, almond-shaped eyes and a long fluffy tail.
Inside the child's visual cortex (the area of the brain that processes visual information), successive layers of neurons respond to visual details such as horizontal and vertical lines, allowing the child to build up a neural picture of the world and learn from it.
Neural networks work in a similar way. Data flows through layers of artificial neurons until, after being trained on hundreds or thousands of examples of the same thing (usually labeled by a human), the network begins to recognize patterns in what it sees. The most sophisticated of these systems use "deep learning", which means they have more layers.
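In code, such a network can be little more than a stack of layers. A minimal sketch in PyTorch, where the layer sizes are arbitrary choices for a 28x28-pixel image and ten possible labels:

```python
# A minimal "deep" classifier: data flows through successive layers of
# artificial neurons. The layer sizes are illustrative choices.
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),                      # e.g. a 28x28 image -> 784 numbers
    nn.Linear(784, 256), nn.ReLU(),    # first layer of artificial neurons
    nn.Linear(256, 128), nn.ReLU(),    # a second, "deeper" layer
    nn.Linear(128, 10),                # one score per possible label
)
```

Training then consists of showing the stack labeled examples and adjusting the connections until the output scores match the labels.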
However, while computer scientists understand the basic mechanics of neural networks, they do not know exactly what happens when data flows through them. "Today we do not understand them well enough, for example, to explain why adversarial examples exist or how to fix them," says Athalye.
Part of the problem may lie in the nature of the tasks existing technologies were designed to solve: distinguishing images of dogs from images of cats, for example. To do this, the technology processes many examples of dogs and cats until it has enough data to tell them apart.
"The main goal of our machine learning structures was to achieve good average performance," says Aleksander Madry, another MIT computer scientist, who studies the reliability and security of structures machine learning. "When you train the program so that it is just good, there will always be images that will fool you."
One solution could be to train neural networks on more challenging examples than those currently used, which could harden them against outliers.
"That is certainly a step in the right direction," says Madry. But even if this approach makes systems more robust, it probably has limits, because there are many ways to alter the appearance of an image or object to create confusion.
A truly robust image classifier would reproduce what "similarity" means to a human: it would understand that a child's scribble of a cat represents the same thing as a photo of a cat, an animated cat or a real, live cat. As impressive as deep learning neural networks are, they still fall short of the human brain when it comes to classifying objects, understanding their environment or coping with the unexpected.
If we want to develop truly intelligent machines that can work in real-life scenarios, we should perhaps go back to the human brain to better understand how it solves these problems.
Although neural networks were inspired by the human visual cortex, it is increasingly clear that the resemblance is only superficial. The key difference is that, in addition to recognizing visual features such as lines or objects, our brain also encodes the relationships between those features – this line is part of that object. This allows us to assign meaning to the patterns we see.
"When we examine a cat, we see all the features that make it up and their connection to each other," says Simon Stringer of the Oxford Foundation for Theoretical Neuroscience and Artificial Intelligence. "This" linking "information ensures our ability to understand the world and our general intelligence."
This critical information is lost in the current generation of artificial neural networks.
"If you have not solved this problem, you may know that there is a cat in the scene, but you do not know where it is and you do not know what features of the scene are part of this cat, "says Stringer.
In trying to keep things simple, the engineers behind artificial neural networks have ignored several properties of real neurons, and the importance of those properties is becoming increasingly clear.
"The neurons in artificial networks are exactly the same, but the morphological variety of neurons in the brain suggests that this is not useless," says neuroscientist Jeffrey Bowers at Bristol University, who studies aspects of brain function that are not captured. neural networks.
His lab develops computer simulations of the human brain to understand how it works. Recently, the team incorporated information about the timing and organization of real neurons and trained the system on a series of images. The result was a fundamental shift in how the simulations processed information.
Instead of all the neurons firing at once, more complex patterns of activity emerged. For example, a subgroup of artificial neurons seemed to act as gatekeepers: they fired only if the visual signals they received arrived at the same time.
Stringer believes these "binding neurons" act like a marriage certificate: they formalize relationships between neurons and make it possible to check whether two signals that appear connected really are. In this way the brain can detect whether two diagonal lines and a curve, say, form a feature such as a cat's ear, or something entirely unrelated.
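Whatever the lab's actual simulations look like (the article does not show them), the coincidence-detection idea itself fits in a few lines of toy code: a "binding" unit fires only when its input spikes arrive within a narrow time window.

```python
# Toy illustration of coincidence detection (not the Stringer lab's
# actual code): a "binding" unit fires only when its two input spikes
# arrive within a narrow time window.
def binding_neuron(spike_ms_a, spike_ms_b, window_ms=5.0):
    """Fire only if the two input spikes arrive (nearly) together."""
    return abs(spike_ms_a - spike_ms_b) <= window_ms

# Two edges arriving together are bound into one feature (a cat's ear)...
print(binding_neuron(10.0, 12.0))   # True
# ...while signals far apart in time are treated as unrelated.
print(binding_neuron(10.0, 40.0))   # False
```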
Stringer's team is looking for evidence that such neurons exist in the human brain. It has also developed "hybrid" neural networks that incorporate the new information, to test whether they produce a more robust form of machine learning. In particular, the team will check whether the networks can reliably tell when an elderly person is falling, rather than simply sitting down or dropping their groceries on the floor at home.
"It's still a very difficult problem for artificial vision algorithms, while the human brain can solve this problem effortlessly," says Stringer.
He is also collaborating on research with the Defence Science and Technology Laboratory at Porton Down in Wiltshire, England, which is developing an expanded version of his neural architecture for military applications, such as spotting enemy tanks with smart cameras mounted on autonomous drones.
Stringer's goal is to achieve artificial intelligence on the level of a mouse within 20 years. And he acknowledges that developing human-level intelligence could take a lifetime, perhaps longer.
Madry agrees that this neuroscience-inspired approach is an interesting way to tackle the problems of current machine learning algorithms. "It is becoming increasingly clear that the brain works very differently from our existing deep learning models," he says.
"So it may require a completely different path. It is hard to say how viable that is, or how long it will take to succeed," he adds.
In the meantime, we may need to avoid placing too much trust in the robots, cars and AI-driven programs to which we are increasingly exposed. You never know whether they are hallucinating.