Exclusive: Google cancels its AI ethics board in response to outcry




This week, Vox and other outlets reported that Google's newly created AI ethics board was unraveling amid controversy over several of its members.

Well, now it has finished unraveling: it's been canceled. On Thursday, Google told Vox it was pulling the plug on the ethics board.

The board survived barely more than a week. Founded to guide "responsible development of AI" at Google, it would have had eight members and met four times over the course of 2019 to consider concerns about Google's AI program. Those concerns include how AI can enable authoritarian states, how AI algorithms produce disparate outcomes, whether to work on military applications of AI, and more. But it ran into trouble from the start.

Thousands of Google employees signed a petition calling for the removal of Heritage Foundation president Kay Coles James over her comments about trans people and her organization's skepticism of climate change. Meanwhile, the inclusion of drone company CEO Dyan Gibbens reopened old divisions within the company over the use of its AI for military applications.

Board member Alessandro Acquisti resigned. Another member, Joanna Bryson, defending her decision not to resign, said of James, "Believe it or not, I know worse about one of the other people." Other board members found themselves swamped with demands that they justify their decision to remain on the board.

On Thursday afternoon, a Google spokesperson told Vox that the company had decided to dissolve the panel, called the Advanced Technology External Advisory Council (ATEAC), entirely. Here's the company's full statement:

It's become clear that in the current environment, ATEAC can't function as we wanted. So we're ending the council and going back to the drawing board. We'll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.

The panel was supposed to add outside perspectives to the ongoing AI ethics work by Google engineers, all of which will continue. Hopefully, the cancellation of the board doesn't represent a retreat from Google's AI ethics work, but a chance to think about how to more constructively engage outside stakeholders.

The board became a major liability for Google

The board's credibility was tarnished when privacy researcher Alessandro Acquisti announced on Twitter that he was stepping down, saying, "While I'm devoted to research grappling with key ethical issues of fairness, rights and inclusion in AI, I don't believe this is the right forum for me to engage in this important work."

Meanwhile, the petition to remove Kay Coles James had attracted more than 2,300 signatures from Google employees and showed no sign of slowing down.

As anger over the council intensified, board members were drawn into lengthy ethical debates over why they were serving on the board at all, which can't have been what Google was hoping for. On Facebook, board member Luciano Floridi, a philosopher of ethics at Oxford, wrote:

Asking for [Kay Coles James's] advice was a grave error and sends the wrong message about the nature and goals of the whole ATEAC project. From an ethical perspective, Google has misjudged what it means to have representative views in a broader context. If Ms. Coles James does not resign, as I hope she does, and if Google does not remove her (https://medium.com/…/googlers-against-transphobia-and-hate-…), as I have personally recommended, the question becomes: what is the right moral stance to take in view of this grave error?

He ultimately decided to stay on the panel, but that wasn't the kind of ethical debate Google was hoping to spark, and it became hard to imagine the two working together.

That wasn't the board's only problem. A day ago, I argued that, outrage aside, the board was not set up to succeed. AI ethics boards like Google's, which are in vogue in Silicon Valley, largely appear unequipped to solve, or even make progress on, hard questions about ethical AI development.

A seat on Google's AI board was an unpaid, toothless position that could not possibly, in four meetings over the course of a year, arrive at a full understanding of everything Google does, let alone offer nuanced guidance on it. There are urgent ethical questions about the AI work Google is doing, and no real avenue by which the board could have addressed them satisfactorily. From the start, it was badly designed for the goal.

Now it has been canceled.

Google still needs to figure out AI ethics, just not like this

Many Google AI researchers are actively working to make AI fairer and more transparent, and clumsy missteps from leadership won't change that. The Google spokesperson I spoke with pointed to several documents said to reflect Google's approach to AI ethics, from a mission statement detailing the kinds of research the company won't pursue to a detailed paper released earlier this year on the state of AI governance.

Ideally, an external panel would complement that work, adding accountability and helping ensure that every Google AI project gets appropriate scrutiny. Even before the outrage, the board wasn't set up to do that.

Google's next attempt at external accountability will need to solve these problems. A better board might convene more often and involve more stakeholders. It would also make specific recommendations publicly and transparently, and Google would tell us whether it followed them and why.

It's important that Google gets this right. AI capabilities continue to advance, leaving most Americans worried about everything from automation to data privacy to catastrophic accidents with advanced AI systems. Ethics and governance can't be an afterthought for companies like Google, and they will face scrutiny as they try to meet the challenges they're creating.

