Here's How an AI System Classifies You Based On Your Selfie

Trending Desk

Modern artificial intelligence is often lauded for its sophistication, and doomsayers warn that an AI revolution will automate millions of jobs and perhaps push humanity to the brink of extinction. But that lies in the future. Despite impressive advances in the field, AI for now remains a narrow, niche technology: software that can only perform the tasks it has been specifically trained to do.

As a story in The Verge puts it, ask a typical AI system to do something it has not been programmed to do, and you will get something comically nonsensical as a result.

According to The Verge, that is the fun behind ImageNet Roulette, a web tool built as part of an ongoing art exhibition on the history of image recognition systems.

Trevor Paglen, who created the exhibit Training Humans with AI researcher Kate Crawford, says that the point of the exhibition is not to pass judgment on AI, but to engage with its current form and its complicated academic and commercial history, grotesque as it might be.

Speaking from the Fondazione Prada museum in Milan, where Training Humans is on display, Crawford said that when they started conceptualising the exhibition around two years ago, they wanted to tell a story about the history of images used to ‘recognize’ humans in computer vision and AI systems.

She added that they wanted to engage with the materiality of AI, and to take those everyday images seriously as part of a rapidly evolving machinic visual culture.

ImageNet Roulette, for its part, represents the goofier side of that culture, because it is generally bad at recognizing people.

The underlying ImageNet system is mostly object recognition software, but it has a category called "People" that contains thousands of subcategories, each valiantly trying to help the software perform the seemingly impossible task of classifying a human being.
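For readers curious what this kind of classification looks like in practice, the following is a minimal sketch, not ImageNet Roulette's actual code: it uses a standard torchvision model pretrained on ImageNet's 1,000 everyday object classes (which do not include the "People" subcategories the exhibition examines) to label a hypothetical image file.

import torch
from torchvision import models
from PIL import Image

# Load a classifier pretrained on the ImageNet object categories.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

# Standard ImageNet preprocessing: resize, centre-crop and normalise.
preprocess = weights.transforms()

image = Image.open("photo.jpg")          # hypothetical input image
batch = preprocess(image).unsqueeze(0)   # add a batch dimension

with torch.no_grad():
    logits = model(batch)

# Map the highest-scoring output back to a human-readable label.
top = logits.argmax(dim=1).item()
print(weights.meta["categories"][top])

The model can only ever answer with one of the labels it was trained on, which is precisely why such systems produce nonsensical or offensive results when pointed at things, or people, outside that vocabulary.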

Put together by developer Leif Ryge, working under Paglen, ImageNet Roulette is a way to let the public engage with the art exhibition’s abstract concepts about the inscrutable nature of machine learning systems. The project also aims to highlight how ImageNet classifies people in “problematic” and “offensive” ways.

Paglen says this is crucial, since it highlights the fallibility of AI systems. More broadly, Training Humans explores two fundamental issues: how humans are represented and codified through training datasets, and how technological systems harvest, label and use that material.