Computers, like those powering autonomous cars, can mistake random squiggles for trains, fences and even school buses. People are not supposed to be able to see how these images trip computers up, but in a new study, researchers at Johns Hopkins University show that most people actually can.
The results suggest that modern computers may not be as different from humans as we think, and show that advances in artificial intelligence continue to narrow the gap between the visual abilities of people and those of machines. The research appears today in the journal Nature Communications.
"Most of the time, research in our field is about getting computers to think the same way as others," says lead author Chaz Firestone, an assistant professor in Johns' Department of Psychological Sciences and Brain. Hopkins. "Our project does the opposite – we ask if people can think like computers."
What is easy for humans is often difficult for computers. Artificial intelligence systems have long been better than people at doing math or remembering large amounts of information; but for decades, humans have had the edge at recognizing everyday objects such as dogs, cats, tables or chairs. Recently, however, "neural networks" that mimic the brain have approached the human ability to identify objects, leading to technological advances that support autonomous cars and facial recognition programs, and that help doctors spot abnormalities in radiological scans.
Even with these technological advances, there is a critical blind spot: it is possible to deliberately create images that neural networks cannot perceive correctly. And these images, called "adversarial" or "fooling" images, are a big problem: not only could they be exploited by hackers and pose security risks, but they also suggest that humans and machines see images very differently.
In some cases, all it takes for a computer to call an apple a car is a tweak to a pixel or two. In other cases, machines see armadillos in what looks like television static.
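The idea that a tiny, targeted change to each pixel can flip a classifier's answer can be sketched in a few lines. The following toy example uses an invented linear classifier over a 4-pixel "image" in place of a real neural network; the labels, weights, image values, and function names are all illustrative assumptions, not the method used in the study, but the gradient-sign nudge is the same trick real adversarial attacks rely on.

```python
import numpy as np

# Toy stand-in for a neural network: a fixed linear classifier over a
# 4-pixel "image" with two labels. Weights and data are invented.
W = np.array([[1.0, -2.0, 0.5, 1.5],    # scores for label "apple"
              [-1.0, 2.0, 0.5, -1.5]])  # scores for label "car"
LABELS = ["apple", "car"]

def classify(x):
    """Pick the label with the highest linear score."""
    return LABELS[int(np.argmax(W @ x))]

def adversarial(x, true_label, eps=1.0):
    """Nudge each pixel by at most eps against the true label's margin
    (a fast-gradient-sign-style step)."""
    i = LABELS.index(true_label)
    margin_grad = W[i] - W[1 - i]        # gradient of (true - other) score
    return x - eps * np.sign(margin_grad)

x = np.array([1.0, 0.0, 1.0, 1.0])
print(classify(x))                # apple
x_adv = adversarial(x, "apple")
print(classify(x_adv))            # car: a bounded per-pixel nudge flips the label
```

Each pixel moves by at most `eps`, yet the predicted label changes; on deep networks the same style of perturbation can be small enough to be invisible to people.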
"These machines seem to be objects that misidentify objects in a way that humans could never," says Firestone. "But, surprisingly, no one has really tested this, how do we know that people can not see what computers have done?"
To test this, Firestone and lead author Zhenglong Zhou, a Johns Hopkins senior majoring in cognitive science, essentially asked people to "think like a machine". Machines have only a relatively small vocabulary for naming images. So Firestone and Zhou showed people dozens of fooling images that had already tricked computers, and gave people the same kinds of labeling options the machines had. In particular, they asked people which of two options the computer had chosen: one being the computer's actual conclusion and the other a random answer. (Was the blob pictured a bagel or a pinwheel?) It turned out that people strongly agreed with the computers' conclusions.
People chose the same answer as the computers 75% of the time. Perhaps even more remarkably, 98% of participants tended to answer the way the computers did.
The researchers then raised the bar by giving people a choice between the computer's favorite answer and its next-best guess. (Was the blob a bagel or a pretzel?) People again validated the machine's choices, with 91% of those tested agreeing with the machine's first choice.
Even when the researchers asked people to choose among 48 object options, and even when the pictures resembled television static, an overwhelming proportion of subjects chose what the machine chose, well above random rates. A total of 1,800 subjects were tested across the various experiments.
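The comparison with "random rates" is simple arithmetic: with k equally likely answer options, a guesser who ignores the image would match the machine only 1/k of the time. A small sketch (the function name is invented; the observed 75% figure comes from the article) makes the baselines explicit.

```python
# Chance baselines for the forced-choice tasks: with k equally likely
# options, random guessing matches the machine's label 1/k of the time.
def chance_rate(k: int) -> float:
    return 1.0 / k

print(f"2 options: chance {chance_rate(2):.1%}, observed 75%")
print(f"48 options: chance {chance_rate(48):.1%}, observed well above chance")
```

With two options the baseline is 50%, so 75% agreement is substantial; with 48 options the baseline drops to roughly 2%, which is why above-chance agreement there is so striking.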
"We discovered that if you put a person in the same situation as a computer, humans tend to accept machines," Firestone says. "It's still a problem for artificial intelligence, but it's not as if the computer was saying something completely different from what a human would say."
Research identifies major weaknesses in modern computer vision systems
Provided by Johns Hopkins University
"Researchers urge humans to think like computers" (March 22, 2019), retrieved March 22, 2019
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.