Google splits with top AI researcher after blocking paper, faces backlash

Former Google AI research scientist Timnit Gebru speaks onstage during day three of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018 in San Francisco, California.

Kimberly White | Getty Images

Google struggled on Thursday to limit the fallout from the departure of a top artificial intelligence researcher after the internet group blocked the publication of a research paper on an important question of AI ethics.

Timnit Gebru, who was co-head of AI ethics research at Google, said on Twitter that she was fired after the paper was rejected.

Jeff Dean, Google’s head of artificial intelligence, defended the decision on Thursday in an internal email to staff, saying the paper “didn’t meet our bar for publication.” He also described Gebru’s departure as a resignation, made in response to Google’s refusal to meet unspecified conditions she had set for staying at the company.

The dispute threatens to draw attention to Google’s handling of internal AI research that could harm its business, as well as to the company’s long-standing difficulties in trying to diversify its workforce.

Before leaving, Gebru complained in an email to co-workers that there was “no accountability” inside Google around the company’s claims that it wanted to increase the proportion of women in its ranks. The email, first published by Platformer, also described the decision to block her paper as part of a process to “silence marginalized voices.”

A person who worked closely with Gebru said there had been tensions with Google management in the past over her activism in pushing for greater diversity. But the immediate cause of her departure was the company’s decision not to allow the publication of a research paper she had co-authored, the person added.

The paper examined potential bias in large-scale language models, one of the hottest new areas of natural language research. Systems such as OpenAI’s GPT-3 and Google’s own system, Bert, attempt to predict the next word in any phrase or sentence – a method that has been used to produce surprisingly effective automated writing, and which Google uses to better understand complex search queries.

Language models are trained on vast amounts of text, usually drawn from the internet, which has raised warnings that they could regurgitate racial and other biases contained in the underlying training material.
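To make the idea concrete, here is a minimal sketch, not drawn from the paper or from Google’s systems, of the kind of fill-in-the-blank prediction a Bert-style model performs. It assumes the open-source Hugging Face transformers library and the publicly released bert-base-uncased checkpoint; the prompt sentence is an invented illustration of how patterns absorbed from web text shape the model’s guesses.

```python
# Illustrative sketch only: masked-word prediction with a Bert-style model,
# using the Hugging Face "transformers" library and the public
# bert-base-uncased checkpoint (assumptions, not tools named in the article).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model ranks the words it judges most likely to fill the blank,
# based purely on patterns in its training text - which is why researchers
# warn that biases in that text can surface in the predictions.
for prediction in fill_mask("The nurse said [MASK] would be back soon."):
    print(f'{prediction["token_str"]:>10}  {prediction["score"]:.3f}')
```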

“From the outside, it looks like someone at Google has decided this is detrimental to their interests,” said Emily Bender, a professor of computational linguistics at the University of Washington and a co-author of the paper.

“Academic freedom is very important – there are risks when [research] takes place in places that [don’t] have that academic freedom,” giving companies or governments the power to “shut down” research they don’t approve of, she added.

Bender said the authors hoped to update the paper with newer research in time for it to be accepted at the conference to which it had already been submitted. But she added that it was common for such work to be overtaken by newer findings, given how quickly research in fields like this moves ahead. “In the scientific literature, no paper is perfect.”

Julien Cornebise, a former AI researcher at DeepMind, the London-based AI group owned by Google’s parent Alphabet, said the dispute “shows the risks of having AI and machine-learning research concentrated in the few hands of powerful industry players, since it allows censorship of the field by deciding what gets published or not.”

He added that Gebru was “extremely talented – we need researchers of her caliber, not filters, on these issues.” Gebru did not immediately respond to requests for comment.

Dean said the paper, written with three other Google researchers as well as external collaborators, “didn’t take into account recent research to mitigate” the risk of bias. He added that the paper “spoke of the environmental impact of large models, but ignored subsequent research showing much greater efficiency gains.”

© 2020 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied or modified in any way.
