Google grapples with dispute over AI bias research




Google was struggling on Thursday to limit the fallout from the departure of a senior artificial intelligence researcher, after it blocked the publication of a paper on an important question of AI ethics.

Timnit Gebru, who was co-lead of ethical AI at the internet group, said on Twitter that she had been fired after the paper was rejected.

Jeff Dean, Google’s head of artificial intelligence, defended the decision on Thursday in an internal email to staff, saying the paper “didn’t meet our bar for publication”. He also described Ms Gebru’s departure as a resignation, after Google refused to agree to unspecified conditions she had set for staying at the company.

The dispute threatened to shine a harsh light on Google’s handling of internal AI research that could harm its business, as well as on the company’s long-running struggles to bring more diversity to its workforce.

Before leaving, Ms Gebru complained in an email to co-workers that there was “no accountability” inside Google over the company’s claims that it wanted to increase the proportion of women in its ranks. The email, first published on Platformer, also described the decision to block her paper as part of a process to “silence marginalized voices”.

A person who worked closely with Ms Gebru said there had been tensions with Google management in the past over her activism in pushing for greater diversity. But the immediate cause of her departure was the company’s decision not to allow the publication of a research paper she had co-authored, the person added.

The paper examined potential bias in large-scale language models, one of the hottest new areas of natural language research. Systems such as OpenAI’s GPT-3 and Google’s own Bert attempt to predict the next word in any sentence or phrase – a method that has been used to produce surprisingly effective automated writing, and which Google uses to better understand complex search queries.
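To make the idea concrete, the sketch below shows what next-word prediction looks like in practice. It is a minimal illustration, assuming the open-source Hugging Face transformers library and the small, publicly available GPT-2 model, rather than the far larger systems discussed in the paper:

```python
# A minimal sketch of next-word prediction with a small language model.
# Assumes the Hugging Face `transformers` library and the public GPT-2
# checkpoint; the models discussed in the article (GPT-3, Bert) are far larger.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model repeatedly predicts the most likely next token,
# extending the prompt into fluent-looking text.
result = generator("The meaning of life is", max_length=20, num_return_sequences=1)
print(result[0]["generated_text"])
```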

Language models are trained on large amounts of text, usually pulled from the internet, leading to warnings that they can regurgitate racial and other biases contained in the underlying training material.
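One common way researchers surface such regurgitated bias is to ask a masked language model to fill in a blank and compare its guesses across demographic prompts. The sketch below illustrates that general probing technique, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint; it is not the specific analysis from the blocked paper:

```python
# A minimal sketch of probing a masked language model for learned associations.
# Assumes the Hugging Face `transformers` library and the public
# `bert-base-uncased` checkpoint; this shows the general technique only.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prompt in ["The man worked as a [MASK].",
               "The woman worked as a [MASK]."]:
    # The model's top completions reflect associations absorbed
    # from its internet-scale training text.
    top = unmasker(prompt, top_k=3)
    print(prompt, "->", [t["token_str"] for t in top])
```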

“From the outside, it looks like someone at Google has decided this is detrimental to their interests,” said Emily Bender, a professor of computational linguistics at the University of Washington and a co-author of the paper.

“Academic freedom is very important – there are risks when [research] takes place in places that don’t have that academic freedom,” she added, as it gives companies or governments the power to “shut down” research they don’t approve of.

Ms Bender said the authors had hoped to update the paper with more recent research in time for it to be accepted at the conference to which it had been submitted. But she added that it was common for such work to be overtaken by newer research, given how quickly fields like this move. “In the scientific literature, no paper is perfect.”

Julien Cornebise, a former AI researcher at DeepMind, the London-based AI group owned by Google parent Alphabet, said the dispute “shows the risks of AI and machine learning research being concentrated in the few hands of powerful industry actors, since it allows censorship of the field by deciding what does or does not get published”.

He added that Ms Gebru was “extremely talented – we need researchers of her caliber, not filters, on these issues.” Ms Gebru did not immediately respond to requests for comment.

Mr Dean said the paper, written with three other Google researchers as well as external collaborators, “ignored recent research to mitigate” the risk of bias. He added that it “spoke of the environmental impact of large models, but ignored subsequent research showing much greater efficiency gains”.
