WELCOME TO HAPPYLAND | TATE EXCHANGE 2020

Project Leader and Manager, Network Facilitator, Workshop Design, Facilitation, Producing

In March 2020, CRIN joined the Digital Maker Collective at the Tate Modern’s Tate Exchange to explore the intersection of art schools, technology and social good.

 

For this year’s event, we developed a series of workshops and talks introducing the general public to children’s rights and the digital environment, including the installation ‘Welcome to Happyland’, which explores surveillance; a panel on the ethics of children’s data; and a panel on surveillance, facial recognition and its dangers.

 

“Welcome to Happyland: Get your passports ready” is an immersive installation which aims to introduce the general public to surveillance, facial and body recognition, and children’s rights. The installation is split into a series of activities, each exploring one topic, with each step acting as a ‘security measure’ that audiences must complete in order to receive a passport visa to Happyland.

WORKSHOPS

1. The Emotion Recognition Tool

We introduced facial recognition by having each visitor sit in front of a webcam while the machine scanned their emotion and classified it as either ‘Happy’ or ‘No Emotion detected’. Visitors classified as Happy passed through to the next security gate; anyone else was labeled as having ‘No Emotion’ and denied entry until the machine judged them Happy.
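For readers curious how such a gate could be wired up in the browser, here is a minimal sketch using face-api.js, an open-source face detection library. The library choice, the '/models' path and the 0.7 ‘Happy’ threshold are illustrative assumptions, not a record of the installation’s actual setup.

```typescript
import * as faceapi from 'face-api.js';

// Probability above which the visitor counts as 'Happy' (an arbitrary choice for this sketch).
const HAPPY_THRESHOLD = 0.7;

async function runHappinessGate(video: HTMLVideoElement): Promise<string> {
  // Load the lightweight face detector and the expression classifier.
  // The '/models' path is a placeholder for wherever the weights are hosted.
  await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
  await faceapi.nets.faceExpressionNet.loadFromUri('/models');

  // Detect a single face and score its expressions (happy, sad, angry, ...).
  const result = await faceapi
    .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
    .withFaceExpressions();

  if (result && result.expressions.happy > HAPPY_THRESHOLD) {
    return 'Happy: proceed to the next security gate';
  }
  return 'No Emotion detected: entry denied';
}
```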

 

The idea behind this interactive artwork is to explore how human emotion can be used to determine one’s humanity, or lack thereof, and to expose the bias built into an artificial intelligence system. By building a machine that classifies anyone showing any emotion other than happiness as ‘non-human’, it shows how easily a technologist can shape an extremely important message and outcome simply by curating the database and the machine’s classification features. What if the machine classified skin colour instead of emotions? What if the technologist is themselves biased against specific races? What kind of bias would result, and how would it affect the outcome?

2. Real Cat or Cat Drawing: Object Recognition

Once they had passed the emotion recognition stage, we introduced the public to machine learning and object recognition through an activity in which participants showed an image of a cat, or drew their own, and had Google’s Teachable Machine distinguish a real cat from a drawing. Some people managed to fool the machine by drawing realistic cats in the same colour palette as the ‘real cat’ database. Teachable Machine is a free, accessible online tool for creating machine learning models with no coding required, which introduces everyday audiences to databases and machine learning in a fun and effective way.
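To give a sense of how little code a trained Teachable Machine model needs in the browser, here is a minimal sketch using the @teachablemachine/image library. The model URL and the class names mentioned in the comments are placeholders for illustration, not the model used in the installation.

```typescript
import * as tmImage from '@teachablemachine/image';

// Base URL of an exported Teachable Machine image model; the ID below is a placeholder.
const MODEL_URL = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/';

async function classifyCat(image: HTMLImageElement | HTMLCanvasElement): Promise<string> {
  // Teachable Machine exports a model.json plus a metadata.json describing the classes.
  const model = await tmImage.load(MODEL_URL + 'model.json', MODEL_URL + 'metadata.json');

  // predict() returns one { className, probability } entry per class,
  // e.g. 'Real cat' vs 'Cat drawing' if the model was trained on those two classes.
  const predictions = await model.predict(image);
  predictions.sort((a, b) => b.probability - a.probability);

  const best = predictions[0];
  return `${best.className} (${(best.probability * 100).toFixed(1)}%)`;
}
```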

3. The Body Recognition game

We introduced the public to body recognition through an interactive game, in which participants had to choose three body stances and recreate the positions using PoseNet, an open-source pose estimation tool.

We wanted to introduce this fun piece of technology through a simple game of mimicry. Each body is shown as a basic skeleton on the webpage, and we created six specific poses, three of which each audience member had to recreate in order to pass the final security measure. Trying out the various poses, some harder than others, led them to understand how the machine recognizes and redraws their digital skeleton in real time, and how it is possible to fool the machine into registering specific positions.
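As a rough illustration of how such a mimicry check could work, the sketch below loads PoseNet, estimates a single skeleton from the webcam, and compares a couple of keypoints against a hand-defined target pose. The target coordinates, tolerance and matching logic are illustrative assumptions, not the installation’s actual code.

```typescript
import * as posenet from '@tensorflow-models/posenet';
import '@tensorflow/tfjs';

// A target pose is a set of expected keypoint positions, normalised to 0-1
// image coordinates. The values below are made up for illustration.
type TargetPose = Record<string, { x: number; y: number }>;

const ARMS_UP: TargetPose = {
  leftWrist: { x: 0.3, y: 0.2 },
  rightWrist: { x: 0.7, y: 0.2 },
};

async function matchesPose(video: HTMLVideoElement, target: TargetPose): Promise<boolean> {
  // Load the default MobileNet-based PoseNet model.
  const net = await posenet.load();

  // Estimate a single 17-keypoint skeleton from the current webcam frame.
  const pose = await net.estimateSinglePose(video, { flipHorizontal: true });

  // The pose matches if every target keypoint is tracked confidently and lies
  // within a loose distance tolerance (an arbitrary choice for this sketch).
  return Object.entries(target).every(([part, expected]) => {
    const kp = pose.keypoints.find((k) => k.part === part && k.score > 0.5);
    if (!kp) return false;
    const dx = kp.position.x / video.videoWidth - expected.x;
    const dy = kp.position.y / video.videoHeight - expected.y;
    return Math.hypot(dx, dy) < 0.15;
  });
}
```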

4. Deepfake Baby!

Once audience members have gone through each step, they receive a stamped passport visa granting them access to Happyland, where they meet an AI-generated citizen of Happyland: a baby.

The baby, whose face is AI-generated, is loaded into the MugLife app, where participants can manipulate the baby’s facial features to make her say or do things such as lick her lips and close her eyes. This crucial step confronts the user with a possible, realistic and sometimes horrific outcome of unregulated data collection (specifically facial and body recognition), one which could also affect them or their own children.

Each step before this introduced audience members to data collection and privacy issues, as their face and body were recognized and retained by the machine. Once collected, that data can be manipulated using deep learning to generate new, fictional outcomes. One example is a deepfake: a synthetic but realistic piece of media generated through deep learning by merging datasets, for example a politician whose lips have been AI-generated and whose voice has been recreated to say things they never said in real life.