Facial Recognition Researcher Fights Amazon's Biased Artificial Intelligence




CAMBRIDGE, Mass. (AP) – Facial recognition technology was already seeping into everyday life – from your photos on Facebook to police scans of mugshots – when Joy Buolamwini noticed a small but serious problem: some of the software could not detect faces with skin as dark as hers.

That revelation prompted the Massachusetts Institute of Technology researcher to launch a project that would have an outsized influence on the debate over how artificial intelligence should be deployed in the real world.

Her tests on software built by brand-name technology companies such as Amazon uncovered much higher error rates in classifying the gender of darker-skinned women than for lighter-skinned men.

Along the way, Buolamwini has spurred Microsoft and IBM to improve their systems and irked Amazon, which publicly attacked her research methods. On Wednesday, a group of artificial intelligence researchers, including a winner of computer science's top prize, launched a spirited defense of her work and called on Amazon to stop selling its facial recognition software to police.

Her work has also caught the attention of political leaders in statehouses and Congress and led some to seek limits on the use of computer vision tools to analyze human faces.

"There needs to be a choice," said Buolamwini, a graduate student and researcher at MIT's Media Lab. "Right now, what's happening is these technologies are being widely deployed without oversight, often covertly, so that by the time we wake up, it's almost too late."

Buolamwini is hardly alone in expressing reservations about the fast-spreading adoption of facial recognition by police, government agencies and businesses, from stores to apartment complexes. Many other researchers have shown how AI systems, which look for patterns in huge troves of data, will mimic the institutional biases embedded in the data they learn from. For example, if AI systems are developed using images of mostly white men, they will work best at recognizing white men.

Those disparities can sometimes be a matter of life or death: one recent study of the computer vision systems that enable self-driving cars to "see" the road found that they have a harder time detecting pedestrians with darker skin tones.

What gives Buolamwini's work such resonance is her method of testing systems created by well-known companies. She applies such systems to a scale of skin tones used by dermatologists, then names and shames those that show racial and gender bias. Buolamwini, who has also founded a coalition of scholars, activists and others called the Algorithmic Justice League, has blended her scholarly investigations with activism.

"It adds to a growing body of evidence that facial recognition affects different groups differently," said Shankar Narayan of the American Civil Liberties Union of Washington state, where the group has sought restrictions on the technology. "Joy's work has been part of building that awareness."

Amazon, whose chief executive, Jeff Bezos, she emailed directly last summer, has responded by attacking her research methods.

A study Buolamwini published a little over a year ago found disparities in how facial analysis systems built by IBM, Microsoft and the Chinese company Face++ classified people by gender. Darker-skinned women were the most misclassified group, with error rates of up to 34.7%. By contrast, the maximum error rate for lighter-skinned men was less than 1%.

The study called for "urgent attention" to address this bias.

"I responded almost immediately," said Ruchir Puri, chief scientist of IBM Research, describing an email he received from Buolamwini last year.

Since then, he said, "it's been a very fruitful relationship" that informed IBM's unveiling this year of a new database of 1 million images for better analysis of facial diversity. Earlier systems had been overly reliant on what Buolamwini calls "pale male" image repositories.

Microsoft, which had the lowest error rates, declined to comment. Messages left with Face++ were not immediately returned.

A few months after her first study, when Buolamwini worked with University of Toronto researcher Inioluwa Deborah Raji on a follow-up test, all three companies showed major improvements.

But this time they also added Amazon, which sells the system it calls Rekognition to law enforcement agencies. The results, published in late January, showed Amazon badly misidentifying darker-skinned women.

"We were surprised to see that Amazon was where its competitors were a year ago," said Buolamwini.

Amazon dismissed what it called Buolamwini's "erroneous claims" and said the study confused facial analysis with facial recognition, improperly measuring the former with techniques for evaluating the latter.

"The answer to anxieties over new technology is not to run 'tests' inconsistent with how the service is designed to be used, and to amplify the test's false and misleading conclusions through the news media," Matt Wood, general manager of artificial intelligence for Amazon's cloud computing division, wrote in a January blog post. Amazon declined interview requests.

"I didn't know their reaction would be quite so hostile," Buolamwini said during an interview in her MIT lab.

On Wednesday, a coalition of researchers came to her defense, including AI pioneer Yoshua Bengio, recent winner of the Turing Award, considered the technology field's version of the Nobel Prize.

They criticized Amazon's response, particularly its distinction between facial recognition and analysis.

"In contrast to Dr. Wood's claims, bias found in one system is cause for concern in the other, particularly in use cases that could severely impact people's lives, such as law enforcement applications," they wrote.

Amazon's few publicly known clients have defended its system.

Chris Adzima, a senior information systems analyst for the Washington County Sheriff's Office in Oregon, said the agency uses Amazon's Rekognition to identify the most likely matches among its collection of roughly 350,000 mugshots. But because a human makes the final decision, "the bias of that computer system is not transferred over into any results or any action taken," Adzima said.

But increasingly, regulators and legislators have their doubts. A bipartisan bill in Congress seeks limits on facial recognition. Legislatures in Washington state and Massachusetts are considering laws of their own.

Buolamwini said a major message of her research is that AI systems need to be carefully reviewed and consistently monitored if they are to be used on the public. Not just to audit for accuracy, she said, but to ensure facial recognition is not abused to violate privacy or cause other harms.

"We can't just leave it to companies alone to do these kinds of checks," she said.

___

Associated Press writer Gillian Flaccus contributed to this report from Hillsboro, Oregon.

Copyright © Associated Press. All rights reserved. This material may not be published, disseminated, rewritten or redistributed.
