Google to change research review process after outcry over dismissal of scientists



Google will change its procedures for reviewing its scientists’ work before July, according to a town hall recording heard by Reuters, part of an effort to quell internal uproar over the integrity of its artificial intelligence (AI) research.

In remarks at a staff meeting last Friday, Google Research executives said they were working to regain trust after the company ousted two prominent women and rejected their work, according to an hour-long recording, the contents of which were confirmed by two sources.

Teams are already testing a questionnaire that will assess projects for risk and help scientists navigate reviews, Maggie Johnson, the research unit’s director of operations, said at the meeting. This initial change would be rolled out by the end of the second quarter, and the majority of papers would not require further vetting, she said.

Reuters reported in December that Google had introduced a “sensitive topics” review for studies touching on dozens of issues, such as China or bias in its services. Internal reviewers had demanded that at least three papers on AI be modified to refrain from casting Google technology in a negative light, Reuters reported.

Jeff Dean, Google’s senior vice-president overseeing the division, said Friday that the “sensitive topics” review “is and was confusing” and that he had tasked a senior research director, Zoubin Ghahramani, with clarifying the rules, according to the recording.

Ghahramani, a University of Cambridge professor who joined Google in September from Uber, told the town hall: “We have to be comfortable with that discomfort” of self-critical research.

Google declined to comment on Friday’s meeting.

An internal email, viewed by Reuters, offered new details of Google researchers’ concerns, showing exactly how Google’s legal department changed one of the three AI papers, called Extracting Training Data from Large Language Models.

The email, dated February 8 and sent by Nicholas Carlini, a co-author of the paper, went to hundreds of colleagues, seeking to draw their attention to what he described as “deeply insidious” edits by corporate lawyers.

“Let’s be clear here,” read the 1,200-word email. “When we academics write that we have a ‘concern’ or find something ‘worrying’ and a Google lawyer demands that we change it to sound nicer, it is really Big Brother stepping in.”

The required changes, according to his email, included “negative to neutral” swaps such as replacing the word “concerns” with “considerations” and “dangers” with “risks”. The lawyers also demanded the removal of references to Google technology, of the authors’ finding that AI disclosed copyrighted material, and of the words “violation” and “sensitive”, the email indicates.

Carlini did not respond to requests for comment. Google, in response to questions about the email, disputed its claim that lawyers were trying to control the paper’s tone. The company said it had no issues with the topics the paper studied, but found some legal terms used inaccurately and conducted a thorough review accordingly.

Racial equity audit

Last week, Google also appointed Marian Croak, an internet audio technology pioneer and one of Google’s few black vice-presidents, to consolidate and manage 10 teams that study issues like racial bias in algorithms and technology for people with disabilities.

Croak said at Friday’s meeting that it would take time to address concerns from AI ethics researchers and mitigate the damage to Google’s brand. “Please hold me fully responsible for trying to turn this around,” she said on the recording.

Johnson added that the AI organization was hiring a consulting firm for a broad racial equity impact assessment. The first such audit for the department would lead to recommendations “which are going to be quite difficult,” she said.

Tensions within Dean’s division escalated in December after Google sacked Timnit Gebru, co-head of its ethical AI research team, following her refusal to withdraw a paper on language-generating AI. Gebru, who is black, accused the company at the time of reviewing her work differently because of her identity and of marginalizing employees from under-represented backgrounds. Nearly 2,700 employees signed an open letter in support of Gebru.

During the town hall, Dean explained what kind of research the company would support. “We want responsible AI and ethical research into artificial intelligence,” Dean said, giving the example of studying the environmental costs of technology. But he said it was problematic to cite figures that were off by “close to a factor of a hundred” while ignoring more accurate statistics as well as Google’s efforts to reduce emissions. Dean had previously criticized Gebru’s paper for not including important findings on environmental impact.

Gebru defended the citation in her paper. “It is a very bad look for Google to come out so defensively against a paper that has been cited by so many of their peers,” she told Reuters.

Employees continued to post their frustrations on Twitter over the past month as Google investigated and then fired Margaret Mitchell, co-lead of ethical AI, for moving electronic files outside the company. Mitchell said on Twitter that she had acted “to raise concerns about racial and gender inequity and to speak up about Google’s problematic firing of Dr. Gebru.”

Mitchell had collaborated on the paper that caused Gebru’s departure, and a version published online last month without Google affiliation named “Shmargaret Shmitchell” as a co-author.

Asked about the comments, Mitchell expressed through a lawyer her disappointment at Dean’s criticism of the paper and said her name had been removed on the company’s orders.
