Google removes gender pronouns from Gmail Smart Compose feature




Gmail Smart Compose is one of Google's most interesting AI features in years. It predicts what users are going to write in emails and offers to finish their sentences for them. But like many AI products, it is only as smart as the data it is trained on, and it is prone to making mistakes. That's why Google has blocked Smart Compose from suggesting gender-based pronouns like "him" and "her" in emails: the company is worried it will guess the wrong gender.

Reuters reports that this limitation was introduced after a company research scientist discovered the problem in January of this year. The researcher was typing "I am meeting an investor next week" in a message when Gmail suggested a follow-up question, "Do you want to meet him?", misgendering the investor.

Gmail product manager Paul Lambert told Reuters that his team tried to solve the problem in a number of ways, but none proved reliable enough. In the end, says Lambert, the easiest solution was simply to remove these types of replies altogether, a change Google says affects fewer than one percent of Smart Compose's predictions. Lambert told Reuters that it pays to be cautious in cases like this, since gender is a "big deal" to get wrong.

This little blunder is a good example of how software built using machine learning can reflect and reinforce society's biases. Like many AI systems, Smart Compose learns by studying past data, combing through old emails to find which words and phrases it should suggest. (Its sister feature, Smart Reply, does the same thing to suggest bite-size replies to emails.)

In Lambert's example, it seems Smart Compose had learned from past data that investors were more likely to be male than female, and so wrongly predicted that this one was too.
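
As a rough illustration of how that kind of skew turns into a wrong suggestion, consider a hypothetical frequency-based suggester (a toy sketch in Python, not Google's actual model): if the text it learns from mentions investors alongside male pronouns more often than female ones, the most frequent pronoun simply wins.

from collections import Counter

# Hypothetical training sentences in which "investor" co-occurs with male
# pronouns more often than female ones (illustrative data only).
training_sentences = [
    "I am meeting an investor next week and he wants a demo",
    "the investor said he would call back",
    "our investor confirmed she was available",
    "an investor replied that he agreed to the terms",
]

def pronoun_counts(word, sentences):
    # Count gendered pronouns appearing in sentences that mention `word`.
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.lower().split()
        if word in tokens:
            counts.update(t for t in tokens if t in ("he", "she", "him", "her"))
    return counts

counts = pronoun_counts("investor", training_sentences)
# The skewed data makes "he" the most frequent pronoun, so a naive
# suggester would pick it, regardless of who the investor actually is.
print(counts.most_common(1)[0][0])  # -> "he"

A real system like Smart Compose is vastly more sophisticated, but the underlying risk is the same: whatever imbalance exists in the historical data shows up in the predictions.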

This is a relatively small blunder, but it points to a much larger problem. If we rely on predictions made by algorithms trained on past data, we risk repeating the mistakes of the past. Guessing the wrong gender in an email does not have huge consequences, but what about AI systems making decisions in areas like health care, employment, and the courts? Just last month it was reported that Amazon had to scrap an internal recruiting tool trained using machine learning because it was biased against female candidates. AI bias could cost you your job, or worse.

For Google, this problem is potentially huge. The company is integrating algorithmic judgments into more and more of its products and sells machine learning tools around the world. If one of its most visible AI features is making mistakes like this, why should consumers trust the company's other services?

The company has obviously seen these problems coming. In a help page for Smart Compose, it warns users that the AI models it uses "may also reflect human cognitive biases. Being aware of this is a good start, and the conversation around how to handle it is ongoing." In this case, though, the company hasn't fixed much: it has simply removed the opportunity for the system to make a mistake.
