Google's attempt to win more dollars in a cloud computing market led by Amazon and Microsoft got a new boss at the end of last year. Next week, Thomas Kurian is set to lay out his vision for the business at the company's cloud conference, building on his predecessor's strategy of focusing on Google's strength in artificial intelligence.
That strategy is complicated by controversies over how Google and its customers use this powerful technology. After employees protested a Pentagon deal in which Google trained algorithms to interpret drone imagery, the cloud computing unit now subjects its artificial intelligence projects, and those of its customers, to ethical reviews. Those reviews have forced Google to turn down some business. "There are things we have refused," says Tracy Frey, director of artificial intelligence strategy for Google Cloud, although she declines to say what.
But this week the company drew criticism that those mechanisms cannot be trusted, after it fumbled an attempt to introduce external oversight of its AI development.
Google's ethical reviews draw on a variety of experts. According to Frey, product managers, engineers, lawyers, and ethicists evaluate proposed new services against Google's AI principles. Some new products announced next week will come with features or limitations resulting from those reviews.
Last year, that process led Google to decide not to launch a facial recognition service, unlike Microsoft and Amazon. This week, more than 70 artificial intelligence researchers, including nine who work at Google, signed an open letter calling on Amazon to stop selling the technology to law enforcement.
Frey says delicate decisions about how to release artificial intelligence technology will become more common as the technology advances.
In February, the independent research institute OpenAI announced that it would not release new software capable of generating surprisingly fluent text, because it could be put to malicious use. Some researchers dismissed the episode as a stunt, but Frey says it offers a powerful example of the kind of restraint that will be needed as AI technology grows more powerful. "We hope we can take the same kind of courageous position," she says. Last year, Google said it had altered its research on lip-reading software to minimize the risk of abuse; the technology could help the hearing impaired, but it could also be used to undermine privacy.
Not everyone is convinced that Google can be trusted to make ethical decisions about its technology and business.
Google's AI principles have been criticized as too vague and too permissive. Weapons projects are prohibited, but other military work is still allowed. Google pledges not to use AI in technologies whose purpose contravenes widely accepted principles of international law and human rights, yet the company has been testing a search engine for China that, if launched, would have to enforce political censorship.
Ever since Google revealed its AI principles, the company has faced questions about how they would be applied without external oversight. Last week, Google announced an eight-person panel of outsiders that would help put the principles into practice. Late Thursday, it announced it was shutting down that body, the Advanced Technology External Advisory Council, and going back to the drawing board.
The turnaround came after thousands of Google employees signed a petition protesting the inclusion of Kay Coles James, president of the conservative Heritage Foundation think tank. James worked on President Trump's transition team and has spoken out against policies intended to help trans and LGBTQ people. As the controversy grew, one board member resigned, and another, Luciano Floridi, a philosopher at Oxford University, said Google had made a "big mistake" in naming James.
Os Keyes, a researcher at the University of Washington who joined hundreds of outsiders in signing the Googlers' petition protesting James's inclusion, says the episode suggests Google is more concerned with currying political favor with conservatives than with the effects of AI technology. "The idea of 'responsible AI' as practiced by Google isn't really responsible," Keyes says. "What they mean is 'not dangerous, unless being evil makes money.'"
Anything that adds friction to launching new products or winning new business could be a challenge for Kurian. He took over Google Cloud late last year after the departure of Diane Greene, a seasoned engineer and executive who led a major expansion of the unit after joining the company in 2016. Although Google's cloud business grew during Greene's tenure, so did Amazon's and Microsoft's. Oppenheimer estimates that Google holds 10 percent of the cloud market, far behind Amazon's 45 percent and Microsoft's 17 percent.
Google is not the only big company talking more about AI ethics lately. Microsoft has its own internal ethics review process for AI business and says it, too, has turned down some projects. Frey argues that such reviews need not slow a company down, and that Google's AI assessments can generate new business as awareness of the risks of powerful AI grows. Google Cloud must foster trust in AI to succeed in the long term, she says. "If that trust is broken at any point, we risk not being able to realize the important and valuable effects of AI on companies around the world," Frey says.