How an AI System Classifies You Based on Your Selfie



Modern artificial intelligence is often praised for its increasing sophistication, but chiefly in doomer terms. At the apocalyptic end of the spectrum, the AI revolution will automate millions of jobs, erase the barrier between reality and artifice, and ultimately push humanity to the brink of extinction. Along the way, maybe we get robot butlers; maybe we get stuffed into embryonic pods and harvested for energy. Who knows?

But it is easy to forget that most AI is still pretty dumb right now, useful only in the narrow, niche domains for which its underlying software has been specially trained, such as playing an ancient Chinese board game or translating text from one language to another.

Ask a standard image-recognition bot to do something novel, such as analyzing and labeling a picture using only its acquired knowledge, and you will get some comically nonsensical results. That is the fun behind ImageNet Roulette, a nifty web tool created as part of an ongoing art exhibition on the history of image recognition systems.

As artist and researcher Trevor Paglen, who created the exhibit Training Humans with AI researcher Kate Crawford, explains, the point is not to pass judgment on AI systems but to grapple with their present form and their complex academic and commercial history, as grotesque as that may be.

“When we first started the concept of this exhibition two years ago, we wanted to tell a story about the history of images used to identify humans in computer vision and AI systems. We were interested in neither the hyped, marketing version of AI nor the stories of dystopian robot futures,” Crawford told the Fondazione Prada Museum in Milan, where Training Humans is featured. “We wanted to engage with the materiality of AI, and take those everyday images seriously as part of a rapidly evolving machinic visual culture. This led us to open up the black boxes and look at how these ‘engines of seeing’ currently work.”

This is a worthy pursuit and a fascinating project, even if ImageNet Roulette represents its goofier side. That is because, for the past decade, AI researchers have relied on ImageNet, a renowned training dataset that is generally bad at identifying people. It is mostly an object-recognition dataset, but it includes a “Person” category with thousands of subcategories, each trying to help software perform the seemingly impossible task of classifying a human being.

Guess what? ImageNet Roulette is super bad at it.

I do not even smoke! But for some reason, ImageNet Roulette thinks I do. It also believes I am on an airplane, though to its credit, open office layouts are only slightly less suffocating than narrow metal tubes suspended tens of thousands of feet in the air.

ImageNet Roulette was put together by developer Leif Ryge, working under Paglen, to let the public engage with the art exhibition’s abstract concepts about the inscrutable nature of machine learning systems.

Here is the behind-the-scenes magic that makes it tick:

ImageNet Roulette uses an open-source Caffe deep learning framework (developed at UC Berkeley) trained on the images and labels in ImageNet’s “Person” categories (which are currently “down for maintenance”). Proper nouns and categories with fewer than 100 images were removed.
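The filtering step described above can be sketched in a few lines. This is a hypothetical illustration, not the project's actual code: the category names, image counts, and the crude capitalization heuristic for spotting proper nouns are all invented here.

```python
# Hypothetical sketch of the dataset-filtering step: drop "Person"
# categories that are proper nouns or have fewer than 100 images.
# Category names and counts below are invented for illustration.
categories = {
    "smoker": 1205,
    "flutist": 640,
    "Beethoven": 310,   # proper noun: excluded regardless of size
    "ball boy": 57,     # fewer than 100 images: excluded
}

def is_proper_noun(name):
    # Crude stand-in heuristic: any capitalized word signals a proper noun.
    return any(word[0].isupper() for word in name.split())

kept = {
    name: count
    for name, count in categories.items()
    if count >= 100 and not is_proper_noun(name)
}

print(sorted(kept))  # ['flutist', 'smoker']
```

A capitalization check like this is obviously fragile; the real project presumably relied on WordNet's own structure to identify proper nouns.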

When a user uploads an image, the application first runs a face detector to locate any faces. If it finds any, it sends them to the Caffe model for classification. The application then returns the original image with a bounding box around each detected face and the label the classifier assigned to it. If no faces are detected, the application sends the whole scene to the Caffe model and returns the image with a label in the top-left corner.
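The two-branch pipeline above can be sketched as follows. This is a schematic, not ImageNet Roulette's actual implementation: the detector and classifier are stand-in stubs (the real tool uses a face detector plus a Caffe model), and the dict-based "image" is a placeholder for real pixel data.

```python
# Schematic of the detect-then-classify pipeline described above.
# detect_faces and classify are stubs standing in for the real
# face detector and Caffe model; "smoker" is a placeholder label.

def detect_faces(image):
    """Stub face detector: returns a list of (x, y, w, h) boxes."""
    return image.get("faces", [])

def classify(region):
    """Stub classifier standing in for the Caffe model."""
    return "smoker"

def label_image(image):
    faces = detect_faces(image)
    if faces:
        # Branch 1: faces found, so classify each one and pair the
        # label with its bounding box.
        return [{"box": box, "label": classify(box)} for box in faces]
    # Branch 2: no faces, so classify the whole scene and anchor
    # the label in the top-left corner.
    return [{"box": None, "label": classify(image), "corner": "top-left"}]

# An image with one detected face goes down the per-face branch...
print(label_image({"faces": [(40, 40, 100, 100)]}))
# ...and an image with none is labeled as a whole scene.
print(label_image({"faces": []}))
```

The design point worth noting is the fallback: the classifier always produces *some* label, whether or not a person was actually found, which is part of why the results can be so absurd.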

Part of the project is also to highlight the fundamentally flawed, and therefore human, ways that ImageNet classifies people in “problematic” and “aggressive” terms. (One striking example to pop up on Twitter: some men who uploaded their photos appear to have been randomly tagged as rape suspects.) Paglen says this speaks to one of the themes the project highlights: the fallibility of AI systems and the prevalence of machine learning bias inherited from their compromised human creators:

ImageNet contains several problematic, aggressive, and bizarre categories – all drawn from WordNet. Some use incorrect or racist terminology. Hence, the results ImageNet Roulette returns will draw on those categories as well. That is by design: we want to shed light on what happens when technical systems are trained on problematic training data. AI classifications of people are rarely made visible to the people being classified. ImageNet Roulette offers a glimpse into that process – and shows the ways things can go wrong.

Although ImageNet Roulette is a fun distraction, the underlying message of Training Humans is a dark but vital one.

“Training Humans explores two fundamental issues in particular: how humans are represented, interpreted, and codified through training datasets, and how technological systems harvest, label, and use this material,” reads the exhibition statement. “As the classifications of humans by AI systems become more invasive and complex, their biases and politics become apparent. Within computer vision and AI systems, forms of measurement easily – but surreptitiously – turn into moral judgments.”
