Google is organizing a global competition to develop artificial intelligence beneficial to humanity




One of the biggest hurdles in the field of artificial intelligence is preventing such software from developing the same flaws and intrinsic biases as its human creators, and using AI to solve social problems rather than just automate tasks. Now Google, one of the world's leading AI software developers, is launching a global competition to spur the development of applications and research with positive real-world benefits for society.

The competition, called the AI Impact Challenge, was announced today at an event titled AI for Social Good, held at the company's Sunnyvale, California office, and is overseen and managed by Google.org, the company's charitable arm. Google positions it as a way to involve nonprofits, universities, and other organizations outside the business world in Silicon Valley-backed AI research and applications. The company announced $25 million in grants for a number of recipients to "help turn the best ideas into action." As part of the contest, Google will offer cloud resources for projects, and applications open today. Accepted recipients will be announced at next year's Google I/O developer conference.

Google's main focus with this initiative is using AI to solve problems in areas such as environmental science, health care, and wildlife conservation. According to Google, AI is already used to help locate whales by tracking and identifying whale sounds, which can then help protect them from environmental and wildlife threats. The company says AI can also be used to forecast floods and to identify forest areas particularly vulnerable to wildfires.

Another important area for Google is eliminating biases in AI software that could replicate human blind spots and prejudices. A notable recent example comes from Google itself, which admitted in January that it had not found a way to fix its photo-tagging algorithm, which had identified Black people in photos as gorillas. The software was initially the product of a largely white and Asian workforce that failed to anticipate how its image recognition could make such fundamental mistakes. (Black employees make up only 2.5 percent of Google's workforce.) Instead of fixing the underlying problem, Google simply removed the ability to search for certain primates on Google Photos. These are the kinds of problems, ones Google says are hard to predict and that it needs help solving, that the company hopes the contest will address.

The contest, part of Google's new AI for Social Good program, follows a public commitment released in early June, in which the company said it would never develop AI weapons of any kind and that its AI research and product development would be guided by a set of ethical principles. As part of those principles, Google stated that it would not work on AI surveillance projects that violate "internationally recognized standards," that its research would follow "widely accepted principles of international law and human rights," and that it would focus mainly on "socially beneficial" projects.

Major technology players, including Google, have in recent months confronted the ethics of developing technologies and products that can be used by armed forces or that could contribute to the growth of surveillance states in the United States and abroad. Many of these technologies, such as facial and visual recognition, involve sophisticated uses of AI. Google in particular has found itself mired in controversy over its involvement in a US Defense Department drone initiative called Project Maven and over its secret project to launch a search product for the Chinese market.

After strong internal backlash, external criticism, and employee resignations, Google agreed to end its work on Project Maven once its contract runs out. Still, Google says it is actively exploring a search product aimed at the Chinese market, despite concerns that such a product could be used to monitor Chinese citizens and link their offline activities to their online behavior. Google has also announced plans to keep working with the military. Meanwhile, its highly controversial Google Duplex service, which uses AI to mimic a human and make calls on a user's behalf, will begin rolling out on Pixel devices next month.

Jeff Dean, head of the company's Google Brain AI division and a senior researcher, said the AI Impact Challenge was not really a reaction to the company's recent controversies over military and surveillance work. "It's been in preparation for quite a while. We work in the socially beneficial research space that's not directly related to commercial applications," he told a group of journalists after the event. "It's very important for us to show the potential of AI and machine learning, and to lead by example."

Updated 10/29, 15:10 ET: Clarified that Google.org, the charitable arm of the company, is the body that oversees the AI Impact Challenge, not Google itself.
