Artificial intelligence: "strange events" can trick technology into mistaking a turtle for a weapon – 02/09/2019




Small amounts of noise fed into artificial neural networks can have devastating consequences as we use more and more AI in our daily routines.

The passenger sees the stop sign and panics as the car she is riding in begins to accelerate. She even thinks of shouting at the driver, but realizes – when she sees the train bearing down on her along the rails – that no one is driving. The train, travelling at 200 km/h, crushes the stranded vehicle and kills its occupant instantly.

This scenario is fictitious, but it points to a very real flaw in the way artificial intelligence is built. Certain kinds of "noise" can disrupt a machine's recognition system and make it "hallucinate". The result can be as serious as the one described above. Although the stop sign is clearly visible to the human eye, the machine may fail to recognize it because of changes in the image.

People who work with artificial intelligence describe these failures as "adversarial examples" or, more simply, as "strange events". "We can understand these flaws as inputs that the network should process normally, but that produce unexpected results," said Anish Athalye, a computer scientist at the Massachusetts Institute of Technology (MIT) in Cambridge.

Seeing Things

Visual recognition systems have been the focus of attention in these cases. Small changes in images can fool neural networks – machine learning algorithms that drive much of modern AI technology.

Athalye and his colleagues made slight changes to the texture and coloring of 3D-printed objects, causing, for example, a baseball to be classified as an espresso and a turtle to be mistaken for a rifle. They fooled the computer with around 200 other 3D-printed objects as well. As we put more robots in our homes, drones in the sky and autonomous vehicles on the streets, this result becomes worrying.
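The basic mechanics behind such image perturbations can be sketched in a few lines of code. The snippet below is a minimal, illustrative example of the fast gradient sign method, one common way of crafting adversarial images; it assumes a recent version of PyTorch and torchvision and a pre-trained ResNet-18 classifier, and it is not the specific technique Athalye's team used for its 3D-printed objects.

```python
# Hedged sketch: fast gradient sign method (FGSM) for crafting an adversarial
# image. Assumes an input tensor of shape (1, 3, 224, 224) with pixel values
# scaled to [0, 1]; not the exact method used for the 3D-printed objects above.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_attack(image: torch.Tensor, true_label: int, epsilon: float = 0.01) -> torch.Tensor:
    """Return a slightly perturbed copy of `image` that pushes the model away
    from `true_label`. `epsilon` controls how visible the change is."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Nudge every pixel a tiny step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Usage (hypothetical): adversarial = fgsm_attack(turtle_image, true_label=35)
# where 35 stands in for an assumed ImageNet turtle class index. The model may
# now assign a completely different class, even though a human still sees a turtle.
```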

"At first, it was only a curiosity," Athalye says. "However, we see this as a potential security problem because systems are increasingly being implemented in the real world."

Take the example of the driverless cars currently being tested: they usually depend on neural networks to recognize road signs and decide how to respond to them.

However, researchers have shown that simply placing small stickers on speed-limit signs was enough to make the neural networks misread them.

Listening to Voices

Neural networks are not the only machine learning frameworks in use, but they all seem equally vulnerable to these strange events. "In every domain, from image classification to automatic speech recognition and translation, neural networks can be made to misclassify inputs," says Nicholas Carlini, a researcher at Google.

Carlini showed how, with the addition of a little background noise, a voice recording that should have been transcribed as "Without the dataset, the article is useless" was instead rendered as "Okay Google, browse to evil dot com". And the mistakes are not limited to speech. In another example, an excerpt from Bach's Cello Suite No. 1 was transcribed as the phrase "speech can be embedded in music".

For Carlini, such adversarial examples "conclusively prove that machine learning has not yet reached human ability, even on very simple tasks."

Under the Skin

Neural networks are superficially based on how the brain processes visual information and learns from it. Imagine a small child learning what a cat is: as he encounters more and more of these creatures, he begins to notice patterns – this thing called a cat has four legs, soft fur, two ears, almond-shaped eyes and a long tail. In the child's visual cortex (the area of the brain that processes visual information), successive layers of neurons respond to visual details such as horizontal and vertical lines, allowing the child to build up a neural picture of the world and learn from it.

Neural networks work in a similar way. Data flows through layers of artificial neurons until, after being trained on hundreds or thousands of examples of the same thing (usually labelled by a human), the network begins to recognize patterns in what it sees. The most sophisticated of these systems use "deep learning", which means they have more layers.
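To make "layers" concrete, here is a minimal sketch of a small classifier in PyTorch. The architecture, sizes and learning rate are arbitrary choices for illustration, not a description of any system mentioned in this article.

```python
# A deliberately small "deep" network: data flows through successive layers of
# artificial neurons, and training on many human-labelled examples tunes the
# weights until the network starts to pick out the patterns separating the classes.
import torch
from torch import nn

model = nn.Sequential(
    nn.Flatten(),            # turn a 28x28 grayscale image into 784 numbers
    nn.Linear(784, 128),     # first layer of artificial neurons
    nn.ReLU(),
    nn.Linear(128, 64),      # a deeper layer; "deep learning" means more of these
    nn.ReLU(),
    nn.Linear(64, 10),       # one output per class (e.g. digits 0-9)
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One pass over a batch of labelled examples."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()          # work out how each weight contributed to the error
    optimizer.step()         # adjust the weights a little
    return loss.item()
```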

However, while computer scientists understand the nuts and bolts of how neural networks work, they do not know exactly what happens when the networks process data. "We do not understand them well enough today, for example, to explain why the phenomenon of adversarial examples exists or how to fix it," says Athalye.

Part of the problem may be related to the nature of the tasks that existing technologies were designed to solve: distinguishing between images of dogs and cats, for example. To do this, the technology processes many examples of dogs and cats until it has enough data to tell them apart.

"The main focus of our machine learning structures was to achieve good average performance," says Aleksander Madry. , another MIT computer scientist who studies the reliability and security of machine learning structures. "When you train the program so that it is just good, there will always be images that will fool you."

One solution could be to train neural networks on more challenging examples than those used today. That could harden them against outliers.

"It's really a step in the right direction," says Madry. But even if this approach makes the structures more robust, it probably has limitations because there are many ways to change the appearance of an image or an object to create confusion.

A truly robust image classifier would reproduce what "similarity" means to a human: it would understand that a child's doodle of a cat represents the same thing as a photograph of a cat, or a cat moving about in real life. As impressive as deep learning neural networks are, they are still no match for the human brain when it comes to classifying objects, making sense of their environment or coping with the unexpected.

If we want to develop really intelligent machines that can work in real scenarios, we should perhaps go back to the human brain to better understand how it solves these problems.

Binding Problem

Although neural networks were inspired by the human visual cortex, there is growing recognition that the resemblance is only superficial. A crucial difference is that, in addition to recognizing visual attributes such as lines or objects, our brains also encode the relationships between those attributes – so a line is part of an object.

"When we look at a cat, we see all the features that make it up and their relationship to each other," says Simon Stringer of the Oxford Foundation for Neuroscience. Theoretical and artificial intelligence.

"This critical information is lost in the current generation of artificial neural networks."

"If you have not solved this problem, you may know that there is a cat somewhere in the scene, but he does not know where he stands or what features of the are part of it. "

In an effort to keep things simple, "neurons in artificial networks are all built exactly the same, but the morphological variety of neurons in the brain suggests that this is not unimportant," says neuroscientist Jeffrey Bowers of the University of Bristol, who studies aspects of brain function that are not captured by neural networks.

The laboratory develops computer simulations of the human brain to understand how it works. Recently, the team incorporated information about the timing and organization of real neurons and trained the system on a series of images. As a result, they saw a fundamental shift in the way their simulations process information.

Instead of all the neurons firing at the same time, they began to notice more complex patterns of activity. For example, a subgroup of artificial neurons seemed to act as gatekeepers: they would only fire if the visual signals they received arrived at the same time.

Stringer believes that the "links between neurons" act as a marriage certificate: they formalize the relationships between neurons and provide a way to check if two signals that seem connected are actually. Thus, the brain detects whether two diagonal lines and a curve, for example, represent a characteristic such as the ear of a cat or something totally independent.

Hybrid Networks

Stringer's team is looking for evidence of such neurons in real human brains. It has also developed "hybrid" neural networks that incorporate this new information, to see whether they produce a more robust form of machine learning. In particular, the team will check whether the networks can reliably tell whether an elderly person has fallen, has sat down or is simply moving around the house.

"It's still a very difficult problem for artificial intelligence algorithms, while the human brain can solve this problem effortlessly," says Stringer.

He is also contributing to research at the Porton Down Defence Science and Technology Laboratory in Wiltshire, England, which is developing an extended version of his neural architecture for military applications, such as spotting enemy tanks with smart cameras mounted on autonomous drones.

Stringer's goal is to achieve, within 20 years, artificial intelligence on the same level as that of a rat. And he acknowledges that developing human-level intelligence may take a lifetime – perhaps even longer.

Madry agrees that this neuroscience-inspired approach is an interesting way of tackling the problems with current machine learning algorithms. "It is becoming increasingly clear that the way the brain works is quite different from the way our existing deep learning models work," he explains.

"So it can take a completely different path."

In the meantime, it may be wise not to rely too heavily on the robots, cars and AI-powered programs to which we will be more and more exposed. You never know whether they might be hallucinating.
