Microsoft has asked the government to seriously consider regulating the deployment and adoption of facial recognition technology, with company president Brad Smith urging lawmakers to examine how face detection algorithms are used. The call comes as dozens of investors and consumer advocacy groups have banded together against Amazon over its provision of a facial recognition system to law enforcement agencies. Smith and Harry Shum, the head of Microsoft's artificial intelligence division, published a study earlier this year predicting that advances in artificial intelligence would require new laws, but what Smith wrote today is the first time Redmond has spelled out a position on facial recognition, one that sets it apart from rivals like Amazon, which said in June that it is up to the private sector to act responsibly in its use of AI technology.
"Technology companies are increasingly forced to limit the way government agencies use facial recognition technology and other technologies. Our democratic system does not allow our elected representatives to make decisions about issues that require a balance between public safety and our democratic freedoms, and the government must play an important role in the organization of facial recognition technology. "
Smith points out that facial recognition has become very prevalent in our society, which is not necessarily a bad thing: police have used it to trace more than 3,000 missing children in four days and to identify the suspect in last month's shooting. But that does not mean there is no potential for abuse.
Smith said: "Imagine that the government is tracking all the places I've crossed last month without your permission or your knowledge. Imagine a database for anyone participating in a political protest that is the very essence of freedom. 39; expression. "For every shelf of products that you care about and for the products you buy without asking for permission first, it's been a science fiction for a long time, but it's about to become possible. "
Facial recognition systems are less accurate at identifying African-American faces than Caucasian faces. A research paper published in February by Microsoft researcher Timnit Gebru found error rates of up to 35 percent when facial recognition systems processed images of dark-skinned women. Like many artificial intelligence techniques, facial recognition carries some rate of error even when it operates in an unbiased way.
Smith did not call for specific laws or ethical principles, but instead posed a series of questions for regulators, including: Should the use of facial recognition technology by law enforcement in the United States be subject to human oversight and controls? Should there be civilian oversight and accountability for the use of this technology as part of the government's national security practices?
Smith also called on technology companies to look for ways to reduce the risk of bias in facial recognition technology, to adopt a principled and transparent approach to the development of facial recognition systems, and to move slowly and deliberately in deploying the technology.
For its part, Microsoft has created an internal advisory group called the Aether Committee to examine its use of artificial intelligence, published a set of ethical principles for developing its own AI technologies, and declined some deployments of facial recognition technology over human rights concerns.
Brad Smith's intervention comes as Microsoft, Google, and other technology companies face heavy criticism for providing tools and expertise to controversial government programs. One company, bowing to public pressure, canceled a contract with ICE in June, and Google employees protested their company's involvement in Project Maven, a Department of Defense program that sought to identify potential targets in drone video.