Viral AI selfie classifier ImageNet Roulette is part of a project exposing AI bias




ImageNet Roulette classifies people's selfies. Isobel Asher Hamilton / Business Insider

  • A website called ImageNet Roulette has gone viral on Twitter; it lets users upload selfies and have an AI try to guess what kind of person they are.
  • The AI was trained on a huge and widely used image dataset called ImageNet. The labels it can assign are extremely varied and include terms such as "computer user", "grandmother" and "first offender", to name just a few.
  • Some people of color, including New Statesman journalist Stephen Bush, noticed that some of the classifier's terms were racist.
  • Surfacing these terms is deliberate: ImageNet Roulette was designed in part to show the dangers of AI bias.

A new viral tool that uses artificial intelligence to label people's selfies shows how AI can be weird and biased.

The ImageNet Roulette site, widely shared on Twitter on Monday, was created by AI Now Institute co-founder Kate Crawford and artist Trevor Paglen. Both researchers examine the dangers of using datasets with entrenched biases – such as racial bias – to train AI.

ImageNet Roulette's AI was trained on ImageNet, a database compiled in 2009 containing 14 million labeled images. ImageNet is one of the largest and most comprehensive training datasets in artificial intelligence, in part because it is free and accessible to all.

The creators of ImageNet Roulette trained their AI on the 2,833 subcategories of "person" found in ImageNet.

Users upload photos of themselves, and the AI uses this training to try to slot them into one of those subcategories.
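As a rough illustration of that upload-and-label flow, here is a minimal sketch that runs a photo through an off-the-shelf image classifier. It is not ImageNet Roulette's actual code: it uses torchvision's ResNet-50 trained on the standard 1,000-class ImageNet subset rather than the project's 2,833 "person" subcategories, and the file name selfie.jpg is a placeholder.

```python
# Minimal sketch of an upload-and-label flow, NOT ImageNet Roulette's actual code.
# Assumes torchvision >= 0.13 and a local photo called "selfie.jpg" (placeholder).
import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing: resize, center-crop, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()
labels = weights.meta["categories"]  # the 1,000 ImageNet class names

def label_photo(path: str, top_k: int = 3):
    """Return the model's top-k ImageNet labels and probabilities for a photo."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    top = probs.topk(top_k)
    return [(labels[int(i)], float(p)) for p, i in zip(top.values, top.indices)]

if __name__ == "__main__":
    print(label_photo("selfie.jpg"))
```

The point of the sketch is only to show how a classifier maps an arbitrary photo onto whatever categories its training data happens to contain; swap in a dataset with offensive "person" labels and the model will happily apply them.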

This Business Insider reporter tried uploading a selfie and was labeled by the AI as "myopic", a short-sighted person. I wear glasses, which seems the most plausible explanation for the label.

Some of the classifications the engine proposed were more career-oriented, or outright abstract. "Computer user", "enchantress", "creep" and "pessimistic" were among them. Plugging in a few more pictures of myself produced gems such as "bloodhound", "sweaty, sweater" and "diver".

Other users were variously baffled and amused by their classifications.

However, a less amusing side of the classifier soon emerged, as it produced disturbing classifications for people of color. New Statesman political editor Stephen Bush found that a picture of himself was labeled not just along racial lines, but with the racist term "Negroid".

Another of his pictures was labeled "first offender".

And a photograph of Bush in a Napoleon costume was labeled "Igbo", an ethnic group from Nigeria.

However, this is not a case of ImageNet Roulette going unexpectedly haywire, like Microsoft's chatbot Tay, which had to be shut down less than 24 hours after being exposed to the denizens of Twitter, who successfully manipulated it into becoming a Holocaust denier.

Creators Crawford and Paglen wanted to highlight what happens when the underlying data used to train AI algorithms is bad. ImageNet Roulette is currently on display in an exhibition in Milan.

Read more: Taylor Swift once threatened to sue Microsoft over its Tay chatbot, which Twitter users turned into a spewer of racist speech.

"ImageNet contains a number of problematic, offensive and bizarre categories – all from WordNet, some of which use misogynistic or racist terminology," writes the couple on the site.

"Therefore, ImageNet Roulette's results will also be based on these categories, which is what we want: we want to highlight what happens when technical systems are trained in problematic training data." database of word classifications formulated at Princeton in the 1980s and was used to label images in ImageNet. "

Crawford tweeted that even though ImageNet was a "major achievement" for AI, the project revealed fundamental problems of bias, "be it race, gender, emotions or characteristics", adding that such classification is inherently political and that there is no simple way to "debias" it.

AI bias is far from a theoretical problem. In 2016, a ProPublica investigation found that a computer program called COMPAS, used to predict the likelihood of criminals reoffending, was racially biased against Black defendants. Similarly, Amazon had to abandon an AI recruitment tool it was working on last year after discovering the system was biased against female candidates.
