Google reportedly asked employees to ‘strike a positive tone’ in research paper



Google has added a layer of scrutiny for research papers on sensitive topics including gender, race, and political ideology. A senior manager also instructed researchers to “strike a positive tone” in a paper this summer. The news was first reported by Reuters.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” the policy reads. Three staffers told Reuters the rule began in June.

The company has also asked employees to “refrain from casting its technology in a negative light” on several occasions, Reuters says.

Employees working on a paper about recommendation AI, which is used to personalize content on platforms like YouTube, were told to “take great care to strike a positive tone,” according to Reuters. The authors later updated the paper to “remove all references to Google products.”

Another paper on using AI to understand foreign languages “softened a reference to how the Google Translate product was making mistakes,” Reuters wrote. The change came in response to a request from reviewers.

Google’s standard review process is meant to ensure that researchers don’t inadvertently reveal trade secrets. But the “sensitive topics” review goes beyond that. Employees who want to evaluate Google’s own services for bias are asked to consult with the legal, public relations, and policy teams first. Other sensitive topics reportedly include China, the oil industry, location data, religion, and Israel.

The search giant’s publication process has been in the spotlight since the firing of AI ethicist Timnit Gebru in early December. Gebru says she was fired over an email she sent to the Google Brain Women and Allies mailing list, an internal group for Google AI research employees. In it, she spoke about Google managers pushing her to retract a paper on the dangers of large-scale language processing models. Jeff Dean, Google’s head of AI, said she had submitted it too close to the deadline. But Gebru’s own team pushed back on this claim, saying the policy was applied “unevenly and discriminatorily.”

Gebru reached out to Google’s public relations and policy team in September about the paper, according to The Washington Post. She knew the company might take issue with certain aspects of the research, since it uses large language processing models in its search engine. The deadline for making changes to the paper wasn’t until the end of January 2021, giving researchers ample time to address any concerns.

A week before Thanksgiving, however, Megan Kacholia, vice president of Google Research, asked Gebru to retract the paper. The following month, Gebru was fired.

Google did not immediately respond to a request for comment from The Verge.
