Google announces AI ethics advisory council




Image copyright: Getty Images

Google has launched a global advisory council to offer guidance on ethical issues relating to artificial intelligence, automation and related technologies.

The eight-person panel includes a former US deputy secretary of state and an associate professor at the University of Bath.

The group "will consider some of Google's most complex challenges," the firm said.

The panel was announced at EmTech Digital, a conference organised by the MIT Technology Review.

Google has faced intense criticism, both internal and external, over how it plans to use emerging technologies.

In June 2018, the company announced that it would not renew a contract with the Pentagon to develop artificial intelligence technology for analysing drone footage. Project Maven, as the effort was known, was unpopular with Google staff and prompted some resignations.

In response, Google published a set of AI "principles" that it promised to abide by. These included pledges to be "socially beneficial" and "accountable to people".

The Advanced Technology External Advisory Council (ATEAC) will meet for the first time in April. Kent Walker, Google's senior vice-president of global affairs, said in a blog post that there would be three further meetings in 2019.

Google has published a full list of the panel's members. It includes mathematician Bubacarr Bah, former US deputy secretary of state William Joseph Burns, and Joanna Bryson, who teaches computer science at the University of Bath in the UK.

The council will discuss recommendations on how to use technologies such as facial recognition. Last year Diane Greene, then head of Google's cloud-computing business, described facial recognition technology as having "inherent bias" because of a lack of diverse data.

In a much-cited paper entitled "Robots Should Be Slaves", Ms Bryson argued against the tendency to treat robots as people.

"By humanizing them," she wrote, "not only are we dehumanizing more real people, but we are also encouraging poor human decision-making in the allocation of resources and accountability."

In 2018, she argued that complexity should not be used as an excuse for failing to properly inform the public about how AI systems work.

"When a system using AI causes damage, we must know that we can hold accountable to the human beings behind that system."

_____

Follow Dave Lee on Twitter @DaveLeeBBC

Do you have more information about this or any other technology story? You can reach Dave directly and securely through an encrypted messaging app on: +1 (628) 400-7370
