Google creates an external advisory council to monitor it for unethical AI use

Google announced today the creation of a new external advisory council to monitor the company's use of artificial intelligence for ways it may violate the ethical principles it set out last summer. The group was announced by Kent Walker, Google's senior vice president of global affairs, and includes experts on a wide range of subjects, including mathematics, computer science, engineering, philosophy, public policy, psychology, and even foreign policy.

The group is called the Advanced Technology External Advisory Council, and Google appears to want it to be seen as a kind of independent watchdog monitoring how it deploys AI in the real world, with a focus on facial recognition and the mitigation of bias built into machine learning training methods. "This group will consider some of the most complex challenges that arise under our AI Principles … providing diverse perspectives to inform our work," Walker writes.

As for the members, the names may not be easily recognizable to those outside academia. But the board's expertise appears to be of the highest caliber, with résumés that include multiple presidential administration positions and posts at prestigious universities, including Oxford University, the Hong Kong University of Science and Technology, and UC Berkeley.

Last year, Google found itself embroiled in controversy over its participation in a US Department of Defense drone program called Project Maven. Following immense internal backlash and external criticism for putting employees to work on AI projects that could involve the taking of human life, Google decided to end its involvement in Maven following the expiration of its contract. It also developed a new set of guidelines, which CEO Sundar Pichai called Google's AI Principles, prohibiting the company from working on any product or technology that could violate "internationally accepted norms" or "widely accepted principles of international law and human rights."

"We recognize that such powerful technology raises equally important questions about its use," wrote Pichai at the time. "The way AI is developed and used will have a significant impact on society for many years to come. As the leader of AI, we feel it is our duty to remedy this problem. Google actually wants his research on AI to be "socially beneficial." This often means that you should not accept a government contract or work in territories or markets where human rights violations are serious.

Whatever the case may be, Google has found itself in yet another similar controversy over its plan to launch a search product in China, which could involve deploying a form of artificial intelligence in a country currently trying to use that same technology to monitor and track its citizens. Google's approach differs from that of Amazon and Microsoft, both of which have said they will continue working with the US government. Microsoft signed a $480 million contract to supply HoloLens headsets to the Pentagon, while Amazon continues to sell its Rekognition facial recognition software to law enforcement agencies.
