Google Tells Scientists To Use ‘A Positive Tone’ In AI Research, Documents Show




Google this year moved to tighten control over its scientists’ papers by launching a “sensitive topics” review and, in at least three cases, asked authors to refrain from portraying its technology in a negative light, according to internal Google communications and interviews with researchers involved in the work.

Google’s new review process asks researchers to consult with legal, policy and public relations teams before pursuing topics such as face and sentiment analysis and categorizations of race, gender or political affiliation, according to internal web pages explaining the policy.

“Technological advances and the increasing complexity of our external environment are increasingly leading to situations where seemingly harmless projects raise ethical, reputational, regulatory or legal issues,” one of the pages for research staff said. Reuters could not determine the date of the post, though three current employees said the policy began in June.

Google declined to comment for this story.

The “sensitive topics” process adds a round of scrutiny to Google’s standard review of papers for pitfalls such as disclosure of trade secrets, eight current and former employees said.

For some projects, Google officials have intervened at later stages. A senior Google manager reviewing a study on content-recommendation technology shortly before its publication this summer told the authors to “take great care to strike a positive tone,” according to internal correspondence read to Reuters.

The manager added, “That doesn’t mean we have to hide from the real challenges” posed by the software.

Subsequent correspondence between a researcher and reviewers shows that the authors “updated to remove all references to Google products.” A draft seen by Reuters had mentioned Google-owned YouTube.

Four staff researchers, including senior scientist Margaret Mitchell, said they believe Google is starting to interfere with crucial studies of potential technology harms.

“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship,” Mitchell said.

Google says on its public website that its scientists have “substantial” freedom.

Tensions between Google and some of its staff erupted this month after the abrupt departure of scientist Timnit Gebru, who led a 12-person team with Mitchell focused on ethics in artificial intelligence (AI) software.

Gebru says Google fired her after she questioned an order not to publish research claiming AI that mimics speech could disadvantage marginalized populations. Google said it accepted and expedited her resignation. It could not be determined whether Gebru’s paper underwent a “sensitive topics” review.

Jeff Dean, Google’s senior vice president, said in a statement this month that Gebru’s paper dwelled on potential harms without discussing efforts underway to address them.

Dean added that Google supports research on ethics in AI and is “actively working on improving our paper review processes, because we know that too many checks and balances can become cumbersome.”

Sensitive topics

The explosion of AI research and development in the tech industry has prompted authorities in the United States and elsewhere to come up with rules for its use. Some have cited scientific studies showing that facial analysis software and other AIs can perpetuate bias or erode privacy.

In recent years, Google has incorporated artificial intelligence into all of its services, using the technology to interpret complex search queries, decide on recommendations on YouTube, and autocomplete phrases in Gmail. Its researchers published more than 200 papers last year on responsible AI development, among more than 1,000 projects in total, Dean said.

The study of Google services for bias is among the “sensitive topics” under the company’s new policy, according to an internal webpage. Among the dozens of other “sensitive topics” listed were the oil industry, China, Iran, Israel, Covid-19, home security, insurance, location data, religion, self-driving vehicles, telecommunications and systems that recommend or personalize web content.

The Google paper for which authors were told to strike a positive tone discusses recommendation AI, which services like YouTube employ to personalize users’ content feeds. A draft reviewed by Reuters included “concerns” that the technology can promote “disinformation, discriminatory or otherwise unfair results” and “insufficient diversity of content,” as well as lead to “political polarization.”

The final publication instead says the systems can promote “accurate information, fairness and diversity of content.” The published version, titled “What are you optimizing for? Aligning Recommender Systems with Human Values,” omitted credit to Google researchers. Reuters could not determine why.

A paper this month on AI for understanding a foreign language softened a reference to how the Google Translate product was making mistakes, after a request from company reviewers, a source said. The published version says the authors used Google Translate, and a separate sentence says part of the research method was to “review and fix inaccurate translations.”

For a paper published last week, a Google employee described the process as a “long haul,” involving more than 100 email exchanges between researchers and reviewers, according to internal correspondence.

The researchers found that AI can spit out personal data and copyrighted material – including a page from a “Harry Potter” novel – that had been pulled from the internet to develop the system.

A draft described how such disclosures could infringe copyright or violate European privacy law, a person familiar with the matter said. Following company reviews, the authors removed the legal risks, and Google published the paper.
