Google removes gendered pronouns from Smart Compose feature in Gmail




Gmail's Smart Compose is one of Google's most interesting AI features in years: it predicts what users will write in emails and offers to finish their sentences for them. But like many AI products, it is only as smart as the data it was trained on, and it is prone to making mistakes. That's why Google has blocked Smart Compose from suggesting gender-based pronouns like "him" and "her" in emails; the company is worried it will guess the wrong gender.

Reuters reports that the limitation was introduced after a company researcher discovered the problem in January of this year. The researcher was typing "I am meeting an investor next week" in a message when Gmail suggested a follow-up question, "Do you want to meet him?", misgendering the investor.

Gmail product manager Paul Lambert told Reuters that his team tried to fix the problem in a number of different ways, but none proved reliable enough. In the end, Lambert says, the simplest solution was simply to remove these types of replies altogether, a change Google says affects fewer than 1 percent of Smart Compose's predictions. Lambert told Reuters that it pays to be cautious in cases like these, because gender is a serious thing to get wrong.

This small bug is a good example of how software built using machine learning can reflect and reinforce society's biases. Like many AI systems, Smart Compose learns from past data, combing through old emails to find which words and phrases it should suggest. (Its sister feature, Smart Reply, does the same thing to suggest short replies to emails.)

In Lambert's example, it appears that Smart Compose had learned from past data that investors were more likely to be male than female, and so it wrongly predicted that this one was too.

This is a relatively small blunder, but it points to a much bigger problem. If we trust predictions made by algorithms trained on past data, we risk repeating the mistakes of the past. Guessing the wrong gender in an email doesn't have huge consequences, but what about AI systems making decisions in domains like healthcare, employment, and the courts? Just last month it was reported that Amazon had to scrap an internal recruiting tool trained using machine learning because it was biased against female candidates. AI bias could cost you your job, or worse.

For Google, this problem is potentially huge. The company is integrating algorithmic judgments into more of its products and sells machine learning tools around the world. If one of its most visible AI features is making mistakes like this, why should customers trust the company's other services?

The company has obviously seen these problems coming. In a help page for Smart Compose, it warns users that the AI models it uses "may also reflect human cognitive biases. Being aware of this is a good start, and the conversation around how to handle it is ongoing." In this case, though, the company hasn't fixed much of anything; it has simply removed the opportunity for the system to make this particular mistake.
