Study reveals how biased data makes artificial intelligence racist and sexist



(Shutterstock)

An MIT study has revealed how the data that artificial intelligence systems are trained on often makes them racist and sexist.

The researchers looked at a range of systems and discovered that many of them had a shocking bias.

The team then developed a method to help researchers make their systems less biased.

"Computer scientists are often quick to say that to make these systems less biased, simply design better algorithms," said lead author Irene Chen, a PhD student who wrote the report. article with Professor David Sontag and postdoctoral fellow Fredrik D. Johansson. .

"But algorithms are only as good as the data they use, and our research shows that you can often make a bigger difference with better data."

In one example, the team looked at an income-prediction system and found that it was twice as likely to misclassify female employees as low-income and male employees as high-income.

They found that making the dataset 10 times larger would cause these mistakes to occur 40 times less often.

In another dataset, the researchers found that a system's ability to predict mortality in intensive care units (ICUs) was less accurate for Asian patients.

However, the researchers warned that existing approaches to reducing discrimination would make predictions for non-Asian patients less accurate.

According to Chen, one of the most common misconceptions is that more data is always better.

Instead, researchers should collect more data from underrepresented groups.

"We see this as a toolkit to help machine learning engineers determine the questions to ask their data to determine why their systems can make unfair predictions," Sontag says.

The team will present the paper in December at the annual conference on Neural Information Processing Systems (NIPS) in Montreal.

This article has been adapted from its original source.
