‘People are really pissed off’: inside Google’s firing of a top AI ethics researcher




  • One of Google’s top AI ethics researchers announced on Wednesday that she had been fired.
  • Outside observers were baffled by the move, but inside Google, tensions between Timnit Gebru and senior management had been mounting for days.
  • Gebru had co-authored a research paper examining biases in machine learning, but said she was ultimately fired over an email criticizing the company’s treatment of minority employees.
  • Are you a current or former Googler with more to share? You can contact this reporter securely using the Signal encrypted messaging app (+1-628-228-1836) or through encrypted email ([email protected]). Please use a non-work device.
  • Visit the Business Insider homepage for more stories.

Timnit Gebru, co-lead of Google’s Ethical Artificial Intelligence research team, tweeted Wednesday night that she had been forced out of the company. Outside observers were stunned and confused: Why would Google fire one of its leading figures in AI ethics, who was also a highly respected name in the field?

But within Google, tensions had been mounting for several days, starting with a research paper co-authored by Gebru, which had been submitted to an academic conference and examined the biases embedded in artificial intelligence. According to Gebru and other employees familiar with the matter, management asked Gebru to either retract the paper or remove the names of all co-authors who were Google employees. Gebru then shared her frustrations on an internal mailing list for women at Google, criticizing the company’s treatment of minority employees.

The next day she was fired.

It is the latest example of tension between corporate interests and the ethical concerns of workers. The dismissal also caught the attention of others in the AI research community, while leaving many Google employees confused by the seemingly aggressive response.

Google AI chief Jeff Dean told employees Thursday, in an email obtained by Business Insider, that Gebru’s departure was a “tough time.”

According to an employee familiar with the situation, Google was not happy with the research paper, which analyzed the ethical dangers of language models. Gebru subsequently pushed back on Google’s request to remove the authors’ names or retract the paper entirely, asking management to provide more detail on its reasoning.

As Gebru recounted on Twitter on Wednesday: “I said: here are the conditions. If you can meet them, great, I’ll take my name off this paper; if not, I can work on a last date. Then she sent an email to my direct reports saying she has accepted my resignation. So this is Google for you. You saw it here.”

The day before her dismissal, Gebru had emailed an internal Google Brain Women and Allies group, expressing frustration with her experience and calling on members to find new ways to hold leadership accountable.

According to another employee who is part of the group and asked to remain anonymous, the mailing list is typically used for mentoring and is meant to help members feel empowered in the workplace. Since the group began being moderated, they said, conversations have become more restricted. “Discussions and debates are very limited,” they said.

As such, Gebru’s post caused a stir.

“What I want to say is: stop writing your docs because it doesn’t make a difference,” Gebru wrote in the post, a copy of which was published by Casey Newton’s Platformer. “There is no way more documents or more conversations will achieve anything.”

The day after that email was sent, Gebru said in a tweet, she received a message informing her that she was being let go. “Thank you for clarifying your conditions. We cannot accept #1 and #2 as you request. We respect your decision to leave Google accordingly, and we are accepting your resignation,” the message read, according to Gebru.

The same message also referred to the email she had sent to the research group the day before, describing it as “inconsistent with the expectations of a Google manager” and citing it as a reason to expedite her termination.

“We believe the end of your employment should happen faster than your email reflects, because certain aspects of the email you sent last night to non-management employees in the Brain group reflect behavior that is inconsistent with the expectations of a Google manager,” the email read, according to Gebru.

Stochastic parrots

The debacle raised questions about the nature of the research paper, which Google says was submitted before the company approved it.

The paper, titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, examined how large language models trained on vast amounts of data embed biases and carry risks of ethical harm.

In the paper, a copy of which was reviewed by Business Insider, the authors identify a variety of costs and risks associated with what they describe as the rush toward ever-larger language models.

“The paper doesn’t say anything surprising to anyone who works with language models,” said an employee familiar with the matter who reviewed it. “It essentially paints a bad picture of what Google is doing,” said another, noting that the paper’s findings were largely critical of work being done across the AI field.

But Google claims the paper was not approved because it did not follow the correct procedure and “ignored too much relevant research.”

“Unfortunately, this paper was only shared with a day’s notice before the deadline – we require two weeks for this type of review – and instead of waiting for reviewers’ comments, it was approved for submission and submitted,” Jeff Dean, Google’s AI chief, said in the employee email sent Thursday and obtained by Business Insider.

“A cross-functional team then reviewed the paper as part of our regular process, and the authors were informed that it did not meet our bar for publication and were given feedback as to why,” he added.

The events caused concern and frustration among many employees. “People are really pissed off,” one said, noting that Dean’s email did not appear to justify Gebru’s dismissal and did not acknowledge the email that Gebru said was used as the reason for her firing. “People are trying to figure out why the reaction has been so extreme.”

Vijay Chidambaram, an assistant professor at the University of Texas at Austin, tweeted that the reason given in Dean’s email for blocking the paper was “pretty much BS.”

“This is the job of conference reviewers, not Google’s job,” he said.

The debacle has only deepened concerns about Google’s treatment of AI ethics and how its ambitions are at odds with the values of its employees. Last year, Meredith Whittaker, then a Google employee, claimed the company pressured her to quit her work with the AI Now Institute, a research center she co-founded that focuses on the social implications of artificial intelligence.

“Waking up, still shaken by the way Google is treating Timnit, and thinking about the urgency of confronting the racist history and present of the AI field,” Whittaker tweeted Thursday.


