Google once again removes a feature from one of its products to avoid problems caused by its AI's biases




At the end of April 2018, Google introduced the new Gmail, an update with a redesigned interface and a host of other features, including Smart Reply and a writing assistant that helps us compose emails by suggesting the next word so we don't have to type it out in full.

This prediction is powered by a Google artificial intelligence called "Smart Compose". But now the company has decided to remove female and male pronouns from its predictions to avoid being accused (once again) of having a biased AI.

Gmail will no longer suggest personal pronouns because the risk of the technology getting a person's gender or gender identity wrong is too high. Gmail product manager Paul Lambert told Reuters that not all mistakes are equal, and that gender is a very important thing to get wrong.


It's something Smart Compose could predict incorrectly and risk offending the recipient. A company researcher discovered the problem in January when, after typing "I am meeting an investor next week", the prediction tool suggested "Do you want to meet him?" instead of "her".

Based on the data in the emails sent and received by Gmail's 1.5 billion users, the AI concluded that the investor was most likely a man and very unlikely to be a woman. Everything becomes even more complicated when the person prefers a neutral pronoun. Google's decision is to eliminate pronoun prediction entirely, in order to spare itself criticism and spare its users problems.
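To picture how a predictive model ends up making this kind of suggestion, here is a minimal, hypothetical sketch in Python: a toy counter over an invented, deliberately skewed corpus, where the pronoun that appears most often near a word like "investor" wins. This is not Google's actual Smart Compose; the corpus and the `suggest_pronoun` helper are made up for illustration, but the mechanism (frequency in the training data decides the suggestion) is the same kind of statistical bias the article describes.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a (biased) collection of email sentences.
# The imbalance between "he" and "she" around "investor" is deliberate,
# to mimic skewed real-world training data.
corpus = [
    "i met the investor and he signed the deal",
    "the investor said he will call tomorrow",
    "our investor confirmed that he is coming",
    "the investor said she will call tomorrow",
]

PRONOUNS = {"he", "she", "they"}

# Count how often each pronoun co-occurs with every other word in a sentence.
pronoun_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for noun in words:
        if noun not in PRONOUNS:
            for word in words:
                if word in PRONOUNS:
                    pronoun_counts[noun][word] += 1

def suggest_pronoun(noun):
    """Suggest the pronoun most frequently associated with `noun` in the corpus."""
    counts = pronoun_counts[noun]
    return counts.most_common(1)[0][0] if counts else None

# Three "he" sentences against one "she" sentence: the suggestion is always
# "he", regardless of the actual investor being written about.
print(suggest_pronoun("investor"))  # -> he
```

In this sketch the model never considers who the investor actually is; it only reflects which pronoun dominated its training examples, which is exactly why fixing the suggestion requires fixing (or abandoning) the data-driven prediction itself.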

AI also learns from our prejudices

This is not the first time a Google AI has suffered from these ills. In 2015, the Google Photos algorithm labeled photos of two Black people as "gorillas", and the "solution" was simply to block the gorilla label and pretend it did not exist, to keep the racist algorithm from acting up again.

And if we want a more disastrous example of how an AI can learn the worst of society, we only need to remember Tay, the Microsoft bot that had to be taken down after turning racist and even going so far as to post Nazi slogans.

Without being experts, we cannot say much about how difficult or costly it would be to correct the behavior of an AI like this. After all, these predictive models learn from the data they are given, and the world is a place full of prejudices. Even so, Google once again seems to be choosing to remove a feature rather than attempt to fix the underlying problem.
