Amnesty International researchers reprimand Amazon for selling imperfect face recognition technology to police – BGR




Amazon has already come under fire over Rekognition, the facial recognition technology the company has made available to police, raising concerns that Amazon is effectively enabling a surveillance state. The technology has also drawn scrutiny in the past for other reasons, including findings that the system may be flawed, for example by misidentifying minorities.

Today, leading artificial intelligence researchers from across the tech industry and academia, including employees of Amazon rivals such as Google, Microsoft, and Facebook, published an open letter on Medium essentially reprimanding Amazon for selling the technology to police. And, of course, the letter asks the company to stop.

Citing a statement by Amazon vice president Michael Punke noting that the company supports legislation to help ensure its products are not used to violate civil liberties, the letter continues: "We call on Amazon to stop selling Rekognition to law enforcement as such legislation and safeguards are not in place."

The letter appears to have been prompted in part by Amazon's reaction to the research of Joy Buolamwini, a researcher at the Massachusetts Institute of Technology. Her tests found that software from companies like Amazon, including software made available to police, produced higher error rates when trying to detect the gender of darker-skinned women than of lighter-skinned men. According to an Associated Press report, her research also covered software from Microsoft and IBM, which moved to address the problems she identified.

Amazon, however, "responded by criticizing her research methods." From the AI researchers' open letter:

There are currently no laws in place to audit Rekognition's use, Amazon has not disclosed who its customers are, nor what the error rates are across different intersectional demographics. How can we then ensure that this tool is not improperly being used as (Amazon Web Services GM for Deep Learning and AI Matthew Wood) states?

The letter goes on to say that audits conducted by independent researchers such as Buolamwini come "with concrete numbers and clearly designed, explained, and presented experiments that demonstrate the types of bias that exist in these products. This critical work rightly calls into question the use of such immature technologies in high-stakes scenarios without public debate or legislation in place to ensure that civil rights are not violated."

Image Source: John Raoux / AP / Shutterstock
