See how an AI system classifies you based on your selfie




Modern artificial intelligence is often praised for its growing sophistication, but mostly in doom-laden terms. If you are at the apocalyptic end of the spectrum, the AI revolution will automate millions of jobs, eliminate the barrier between reality and artifice and, ultimately, force humanity to the brink of extinction. Along the way, maybe we get butlers, maybe we get stuffed into embryonic pods and harvested for energy. Who knows.

But it's easy to forget that most AI today is terribly stupid, useful only in the narrow niches for which its underlying software has been specifically trained, such as playing an ancient Chinese board game or translating text from one language to another.

Ask your standard recognition bot to do something new, like analyze and tag a photo using only the knowledge it has already learned, and you will get some hilariously absurd results. That is the fun behind ImageNet Roulette, a clever web tool built as part of an ongoing art exhibition on the history of image recognition systems.

As explained by artist and researcher Trevor Paglen, who created the exhibition Training Humans with AI researcher Kate Crawford, the point is not to pass judgment on AI, but to engage with its current form and its complex academic and commercial history, however grotesque it may be.

"When we started to conceptualize this exhibition more than two years ago, we wanted to tell a story about the history of images used to" recognize "humans in computer vision systems and AI. We are not interested in the high-profile marketing version of AI or the legends about the future of dystopian robots, "says Crawford, a former member of the Fondazione Prada Museum in Milan, who presents Training in Humans. "We wanted to look at the materiality of artificial intelligence and take these images of everyday life seriously in the context of a rapidly evolving visual machine culture. This forced us to open the black boxes and examine the current operation of these "sight engines". "

This is a laudable quest and a fascinating project, even if ImageNet Roulette is very much the fun side of it. That is mostly because ImageNet, a well-known training data set that AI researchers have relied on for the last decade, is not generally used to recognize people. It is primarily an object recognition data set, but its "People" category contains thousands of subcategories, each valiantly trying to help software perform the seemingly impossible task of classifying a human being.

And guess what? ImageNet Roulette is really bad at it.


I do not even smoke! But for some reason, ImageNet Roulette thinks I do. It also seems to believe I am on a plane, although our office's open-plan layout is only slightly less suffocating than a narrow metal tube hanging tens of thousands of feet in the air.


ImageNet Roulette was created by developer Leif Ryge, working under Paglen, to give the public a way to engage with the art exhibition's abstract concepts about the unfathomable nature of machine learning systems.

Here is the behind-the-scenes magic that makes it tick:

ImageNet Roulette uses an open source Caffe deep learning framework (produced by UC Berkeley) trained on the images and labels from ImageNet's "person" categories (which are currently down for maintenance). Names were removed, as were categories containing fewer than 100 images.

When a user uploads an image, the application first runs a face detector to locate any faces. If it finds any, it sends them to the Caffe model for classification. The application then returns the original image with a bounding box around each detected face and the label the classifier assigned to it. If no faces are detected, the application sends the entire scene to the Caffe model and returns an image with a label in the upper left corner.
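
That description maps onto a short script fairly directly. Below is a minimal sketch of such a pipeline in Python, assuming a Caffe model trained on the "person" categories. The model files, the label list, the "prob" output blob name, and the use of OpenCV's Haar-cascade face detector are all stand-in assumptions for illustration, not details of Ryge's actual implementation.

# A minimal sketch of the described pipeline; not ImageNet Roulette's real code.
import caffe
import cv2

# Hypothetical model files: a classifier trained on ImageNet "person" labels.
net = caffe.Net("deploy.prototxt", "person.caffemodel", caffe.TEST)
labels = [line.strip() for line in open("labels.txt")]

# Standard pycaffe preprocessing: resize, reorder axes, swap RGB to BGR.
transformer = caffe.io.Transformer({"data": net.blobs["data"].data.shape})
transformer.set_transpose("data", (2, 0, 1))
transformer.set_channel_swap("data", (2, 1, 0))

def classify(rgb):
    """Classify one RGB image (floats in [0, 1]) and return its label."""
    net.blobs["data"].data[...] = transformer.preprocess("data", rgb)
    out = net.forward()
    return labels[out["prob"][0].argmax()]  # output name depends on the prototxt

# OpenCV's bundled Haar cascade stands in for the unnamed face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

bgr = cv2.imread("selfie.jpg")
faces = detector.detectMultiScale(
    cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), scaleFactor=1.1, minNeighbors=5)

if len(faces) > 0:
    # Faces found: classify each face crop, then draw a box and its label.
    for (x, y, w, h) in faces:
        crop = cv2.cvtColor(bgr[y:y + h, x:x + w], cv2.COLOR_BGR2RGB) / 255.0
        cv2.rectangle(bgr, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(bgr, classify(crop), (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
else:
    # No faces: classify the whole scene and tag the upper left corner.
    cv2.putText(bgr, classify(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB) / 255.0),
                (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

cv2.imwrite("labeled.jpg", bgr)

Either branch returns an annotated image, mirroring the two cases the quoted description spells out; everything beyond that structure is guesswork.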

Part of the project is also to highlight the fundamentally flawed, and therefore very human, ways in which ImageNet classifies people, often in "problematic" and "offensive" terms. (One notable example circulating on Twitter: some men who upload photos get labeled "suspected of rape" for no discernible reason.) For Paglen, this goes to the heart of one of the project's themes: the fallibility of AI systems and the prevalence of machine learning bias inherited from their flawed human creators:

ImageNet contains a number of problematic, offensive and bizarre categories, all drawn from WordNet. Some use misogynistic or racist terminology. As a result, the results ImageNet Roulette returns will draw on those categories as well. That is by design: we want to shed light on what happens when technical systems are trained on problematic training data. AI classifications of people are rarely made visible to the people being classified. ImageNet Roulette provides a glimpse into that process, and shows how things can go wrong.

Although ImageNet Roulette is a fun diversion, the underlying message of Training Humans is a dark but vital one.

"Train humans In particular, it explores two fundamental questions: how human beings are represented, interpreted and codified with the help of training data sets, and how technological systems exploit, label and use this material, "explains the description of the 39; exposure. their biases and politics become apparent. In the context of computer vision and AI systems, forms of measurement are easily transformed – but surreptitiously – into moral judgments. "
