Google Tweaks Email Program That Assumed An Investor Was A Man: NPR




A Google sign and logo at the Googleplex in Menlo Park, California. This week, a Google product manager spoke with Reuters about a problem discovered in the company's email service.

Josh Edelson / AFP / Getty Images



If someone told you they had a meeting with an investor next week, would you assume that the investor was a man?

Gmail's artificially intelligent Smart Compose feature did. And after the problem was discovered, the predictive text tool was barred from suggesting gender pronouns.

In an interview with Reuters published Tuesday, Gmail product manager Paul Lambert described the problem and the company's response.

Gmail users (there are 1.5 billion of them, according to Lambert) probably know Smart Compose, even if not by name.

Similar to the predictive keyboards on most smartphones, Smart Compose offers to finish sentences using artificial intelligence models trained on literature, web pages and emails. For example, if a user were to type the word "as" in the middle of a sentence, Smart Compose might suggest the phrase "as soon as possible" to continue or end the sentence.
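Google has not published how Smart Compose works internally, but the general idea of suggesting a completion from patterns in previously seen text can be sketched with a toy word-frequency model. The short Python example below is a hypothetical illustration only; the corpus and function names are invented, and this is not Google's code.

    from collections import Counter, defaultdict

    # Toy stand-in for the email and web text a real predictive system trains on.
    corpus = [
        "please reply as soon as possible",
        "send the report as soon as you can",
        "i will reply as soon as possible",
    ]

    # Count which word most often follows each two-word context (a tiny trigram model).
    following = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - 2):
            following[(words[i], words[i + 1])][words[i + 2]] += 1

    def suggest(prev_word, current_word, max_words=3):
        """Greedily extend the typed text with the most frequent continuation."""
        completion = []
        context = (prev_word, current_word)
        for _ in range(max_words):
            candidates = following.get(context)
            if not candidates:
                break
            next_word = candidates.most_common(1)[0][0]
            completion.append(next_word)
            context = (context[1], next_word)
        return " ".join(completion)

    # Typing "... reply as" yields the suggestion "soon as possible".
    print(suggest("reply", "as"))

The sketch only echoes whatever phrasing is most common in its training text, which is also why such systems can pick up the biases of that text.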

"Lambert said that Smart Compose supports 11% of the messages sent from Gmail.com around the world," Reuters reported.

With that volume of messages, there are plenty of opportunities for error.

In January, a Google researcher typed "I am meeting an investor next week," and Smart Compose thought they might want to follow that statement with a question.

"Do you want to meet him?" was the suggested text generated by predictive technology, which had just assumed that the investor was an "it" and not a "she".

Lambert told Reuters that the Smart Compose team tried several ways to work around the problem, but none panned out.

Not wanting to risk the technology mispredicting someone's gender identity and offending users, the company blocked suggestions of gendered pronouns entirely.

Google may have been extra cautious about potential gender gaffes because this is not the first time one of its artificial intelligence systems has been caught drawing an offensive conclusion.

In 2016, Carole Cadwalladr of the Guardian reported that she typed the phrase "are Jews" into a Google search bar, which then suggested, among other options, that Cadwalladr might want to ask: "are Jews evil?"

And in the summer of 2015, the company apologized after an artificial intelligence feature that helps organize Google Photos users' images labeled a photo of two African Americans as a species other than human.

Such blunders are not entirely the fault of the programmers, though; some of the blame can honestly be attributed to the algorithm itself, according to Christian Sandvig, a professor at the University of Michigan School of Information, who spoke with NPR in 2016.

"The systems are of sufficient complexity that it is possible to say that the algorithm has done it," he says. "And it's true, the algorithm is complicated enough and changes in real time. He writes his own rules based on data and input so that he does things and surprises us often. "

Technologies such as Smart Compose learn to write sentences by studying the relationships between words typed by ordinary humans.

Reuters reports:

"A system that displays billions of human phrases becomes adept at filling out common phrases but is limited by generalities.Men have long dominated areas such as finance and science, for example, so that technology would conclude from data that an investor or engineer is "or" him. "The issue is of concern to almost all major technology companies."

Subbarao Kambhampati, professor of computer science at Arizona State University and former president of the Association for the Advancement of Artificial Intelligence, spoke with NPR in 2016 on the ethics of AI.

"When you train a learning algorithm on a set of data, it will find a pattern that is in that data.This is known, obviously, understood by all AI members," he said. -he declares. "But the fact that this can engender unintentional stereotypes, unintentional discrimination, has become a much more worrying problem at the moment."

