Google reportedly told AI scientists to ‘strike a positive tone’ in research

Photo: Leon Neal / Staff (Getty Images)

Almost three weeks after the abrupt exit of Black artificial intelligence ethicist Timnit Gebru, more details are emerging about the shady new set of policies Google has rolled out for its research team.

After reviewing internal communications and speaking with researchers affected by the rule change, Reuters reported on Wednesday that the tech giant had recently added a “sensitive topics” review process for its scientists’ papers and, on at least three occasions, had explicitly asked scientists to refrain from casting Google’s technology in a negative light.

Under the new procedure, scientists are required to consult with special legal, policy, and public relations teams before pursuing AI research related to so-called sensitive topics, which could include facial analysis and categorizations of race, gender, or political affiliation.

In one example reviewed by Reuters, scientists who had studied the recommendation AI used to populate user feeds on platforms like Google-owned YouTube wrote a paper detailing concerns that the technology could promote “disinformation, discriminatory or other unfair outcomes,” “insufficient diversity of content,” and “political polarization.” After a review by a senior executive who asked the researchers to adopt a more positive tone, the final publication instead suggests that the systems can promote “accurate information, fairness and diversity of content.”

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly harmless projects raise ethical, reputational, regulatory or legal issues,” an internal web page describing the policy reportedly said.

In recent weeks, and especially since the departure of Gebru, a high-profile researcher who is said to have fallen out of favor with top executives after sounding the alarm about censorship creeping into the research process, Google has faced increased scrutiny over potential bias in its internal research division.

Four staff researchers who spoke to Reuters validated Gebru’s claims, saying they, too, believed Google was starting to interfere with studies critical of the technology’s potential for harm.

“If we are researching the right thing given our expertise, and we are not allowed to publish it for reasons that are not in line with high-quality peer review, we end up with a serious censorship problem,” said Margaret Mitchell, a senior scientist at the company.

At the start of December, Gebru claimed she had been fired by Google after pushing back against an order not to publish research arguing that AI capable of imitating speech could disadvantage marginalized populations.
