Timnit Gebru is known for her research on bias and inequality in AI, and in particular for a 2018 paper she co-authored with Joy Buolamwini showing that commercial facial recognition software performed poorly when attempting to classify women and people of color. Their work sparked widespread awareness of problems common in AI today, especially when the technology is tasked with identifying anything about human beings.
In the email, which was first published by the Platformer newsletter on Thursday, Gebru wrote that she felt “constantly dehumanized” at Google and expressed dismay at the company’s persistent lack of diversity.
“In addition, there is no accountability. There is no incentive to hire 39% women: your life gets worse when you start advocating for underrepresented people, you start to upset the other leaders when they don’t want to give you good ratings during calibration. There is no way more documents or more conversations will achieve anything,” she wrote.
Gebru also expressed frustration over an internal process related to the review of a research paper she had co-written with colleagues at Google and outside the company, which had not yet been published.
The research paper in question
Gebru, who joined Google at the end of 2018, told CNN Business that the research paper in question examined the risks of large language models – a growing trend in AI, as more and more capable systems are released that can produce impressively human-sounding text, such as recipes, poetry and even news articles. It is also an area of AI that Google has signaled it sees as key to the future of its search business.
Gebru said the paper had been submitted to the Fairness, Accountability, and Transparency conference, to be held in March, and that there was nothing unusual about how it was submitted for internal review at Google. She said she wrote the email on Tuesday night after a lengthy back-and-forth with Google AI leadership in which she was repeatedly asked either to withdraw the paper from consideration for presentation at the conference or to remove her name from it.
Gebru told CNN Business she was informed on Wednesday that she was no longer working at the company. “It really didn’t have to be like that at all,” Gebru said.
An email sent to Google Research employees
A Google spokesperson said the company had no comment.
In an email sent to Google Research employees on Thursday, which he released publicly on Friday, Jeff Dean, Google’s head of artificial intelligence, gave employees his account: Gebru had co-authored a paper but had not given the company the two weeks it requires to review it before its deadline. The paper was reviewed internally, he wrote, but it “didn’t meet our bar for publication.”
Dean added: “Unfortunately, this particular paper was only shared with a day’s notice before its deadline – we require two weeks for this sort of review – and then instead of awaiting reviewer feedback, it was approved for submission and submitted.”
He said Gebru then responded with a list of demands that would have to be met for her to stay at Google. “Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date,” Dean wrote.
Gebru told CNN Business that her conditions included transparency about how the order to retract the paper had come about, as well as meetings with Dean and another AI executive at Google to discuss the treatment of researchers.
“We accept and respect her decision to resign from Google,” Dean wrote. He also laid out some of the company’s research and review processes and said he would speak with Google’s research teams, including those on the Ethical AI team, “so they know that we strongly support these important streams of research.”
A swift show of support
Soon after Gebru’s initial tweet on Wednesday, colleagues and others shared their support online, including Margaret Mitchell, who had been Gebru’s co-lead on the team at Google.
“Today dawns yet another horrific, life-changing loss in a year of horrific life-changing losses,” Mitchell tweeted Thursday. “I can’t express the pain of losing @timnitgebru as a co-leader. I was able to excel because of her – like so many others. I am mostly in shock.”
“I have your back like you always had mine,” tweeted Buolamwini, who, in addition to co-authoring the 2018 paper with Gebru, is the founder of the Algorithmic Justice League. “You are brilliant and respected. You listen to those others willingly ignore. You ask tough questions not to advance yourself but to uplift the communities to which we owe our foundations.”
Sherrilyn Ifill, President and Director-Counsel of the NAACP Legal Defense and Educational Fund, tweeted, “I learned so much from her about AI biases. What a disaster.”
By noon on Friday, a Medium post decrying her departure and demanding transparency about Google’s decision on the research paper had gathered signatures from more than 1,300 Google employees and more than 1,600 supporters in academia and the AI field. Those sharing their support include many women who have fought inequality in the tech industry, such as Ellen Pao, CEO of Project Include and former CEO of Reddit; Ifeoma Ozoma, a former Google employee who founded Earthseed; and Meredith Whittaker, faculty director of the AI Now Institute and a lead organizer of the 2018 Google Walkout, which protested sexual harassment and misconduct at the company. Others include Buolamwini, as well as Danielle Citron, a law professor at Boston University who specializes in the study of online harassment and a 2019 MacArthur Fellow.
Citron told CNN Business that she sees Gebru as a “beacon” when it comes to exposing, clarifying and studying the racism and entrenched inequities perpetuated in algorithmic systems. Gebru has shown how important it is to rethink the way data is collected, she said, and has questioned whether we should even be using these systems at all.
“WTF, Google?” she said. “Sorry, but you were so lucky that she even came to work for you.”