Exclusive: Google promises to change research oversight after internal revolt




(Reuters) – Alphabet Inc’s Google will change its procedures before July for reviewing its scientists’ work, according to a town hall recording heard by Reuters, part of an effort to quell internal uproar over the integrity of its artificial intelligence (AI) research.

FILE PHOTO: The name Google is displayed outside the company’s offices in London, Britain, November 1, 2018. REUTERS/Toby Melville

In remarks at a staff meeting last Friday, Google Research executives said they were working to regain trust after the company ousted two prominent women and rejected their work, according to an hour-long recording whose contents were confirmed by two sources.

Teams are already testing a questionnaire that will assess projects for risk and help scientists navigate reviews, research unit Chief Operating Officer Maggie Johnson said at the meeting. This initial change will be rolled out by the end of the second quarter, and the majority of papers will not require extra vetting, she said.

Reuters reported in December that Google had introduced a “sensitive topics” review for studies involving dozens of issues, such as China or bias in its services. Internal reviewers had demanded that at least three papers on AI be modified to refrain from casting Google technology in a negative light, Reuters reported.

Jeff Dean, Google’s senior vice president overseeing the division, said Friday that the “sensitive topics” review “is and was confusing” and that he had tasked a senior research director, Zoubin Ghahramani, with clarifying the rules, according to the recording.

Ghahramani, a University of Cambridge professor who joined Google in September from Uber Technologies Inc, told the town hall: “We have to be comfortable with that discomfort” of self-critical research.

Google declined to comment on Friday’s meeting.

An internal email, seen by Reuters, offered new details about Google researchers’ concerns, showing exactly how Google’s legal department modified one of the three AI papers, titled “Extracting Training Data from Large Language Models.” (bit.ly/3dL0oQj)

The email, dated February 8 and sent by Nicholas Carlini, a co-author of the paper, went to hundreds of colleagues, seeking to draw their attention to what he called the “deeply insidious” edits made by the company’s lawyers.

“Let’s be clear here,” the 1,200-word email read. “When we academics write that we have a ‘concern’ or find something ‘worrying’ and a Google lawyer asks us to change it to sound better, it is really Big Brother stepping in.”

According to his email, the required changes included “negative to neutral” swaps, such as replacing the word “concerns” with “considerations” and “dangers” with “risks.” The lawyers also demanded the removal of references to Google technology, of the authors’ finding that AI leaked copyrighted content, and of the words “infringing” and “sensitive,” the email read.

Carlini did not respond to requests for comment. Google, in response to questions about the email, disputed the contention that its lawyers were trying to control the paper’s tone. The company said it had no issues with the topics the paper investigated, but found some legal terms used inaccurately and conducted a thorough edit as a result.

RACIAL FAIRNESS AUDIT

Google also last week appointed Marian Croak, a pioneer in internet audio technology and one of Google’s few Black vice presidents, to consolidate and manage 10 teams studying issues such as racial bias in algorithms and technology for people with disabilities.

Croak said at Friday’s meeting that it would take time to address concerns from AI ethics researchers and mitigate the damage to Google’s brand.

“Please hold me fully responsible for trying to turn this around,” she said on the recording.

Johnson added that the AI organization is hiring a consulting firm to conduct a wide-ranging racial equity impact assessment. The first such audit for the department would lead to recommendations “which are going to be quite difficult,” she said.

Tensions within Dean’s division escalated in December after Google dismissed Timnit Gebru, co-lead of its ethical AI research team, following her refusal to retract a paper on language-generating AI. Gebru, who is Black, accused the company at the time of reviewing her work differently because of her identity and of marginalizing employees from underrepresented backgrounds. Nearly 2,700 employees signed an open letter in support of Gebru. (bit.ly/3us5kj3)

During the town hall, Dean explained what kind of research the company would support.

“We want responsible AI and AI ethics research,” Dean said, giving the example of studying the environmental costs of technology. But it is problematic to cite data that is “close to a factor of a hundred” off while ignoring more accurate statistics, as well as Google’s efforts to reduce emissions, he said. Dean had previously criticized Gebru’s paper for not including important findings on environmental impact.

Gebru defended her paper’s citation. “It is a very bad look for Google to come out so defensively against a paper that has been cited by so many of their counterparts,” she told Reuters.

Employees continued to post their frustrations on Twitter over the past month as Google investigated and then fired Margaret Mitchell, co-lead of ethical AI, for moving electronic files outside the company. Mitchell said on Twitter that she had acted “to raise concerns about race and gender inequity, and to denounce Google’s problematic dismissal of Dr. Gebru.”

Mitchell had collaborated on the paper that prompted Gebru’s departure, and a version published online last month without Google affiliation listed “Shmargaret Shmitchell” as a co-author. (bit.ly/3kmXwKW)

Asked for comment, Mitchell said through a lawyer that she was disappointed by Dean’s criticism of the paper and that her name had been removed at the company’s order.

Reporting by Paresh Dave and Jeffrey Dastin; Editing by Jonathan Weber and Lisa Shumaker
