Apple of my Eye

Ways of seeing in the digital age with artist Trevor Paglen

The work of the Berlin-based US artist Trevor Paglen explores hidden systems of surveillance and power structures in the 21st century. Photographing military installations with powerful telephoto lenses, capturing images of stealth drones and undersea internet cables, and working to expose the CIA’s extraordinary rendition programme, Paglen reaches further than journalistic documentation, asking viewers to question the social and political structures, pervasive and unseen, that shape our way of life.

As he prepares for a new exhibition at the Barbican Centre in London, Port spoke to Paglen about the latest focus of his research – artificial intelligence systems.

Photography Tim P. Whitby

What draws you to explore these hidden power structures?

Put simply, I’m interested in learning how to see. We see with our eyes but also, increasingly, with the technologies we build, whether that’s cameras or sensors or drones or artificial intelligence systems. We also see with our cultural backgrounds, we see from the moment in history that we’re in, and we see through the lens of economic systems that we’re embedded within. I don’t think you can easily pick these things apart.

Where did the interest in AI come from?

I was working on a film about Edward Snowden, called Citizenfour, which developed out of my work trying to understand how mass surveillance infrastructures operate. I had been looking at institutions like the NSA, or GCHQ in the UK – these huge, global, essentially military surveillance agencies – and came to realise there were other institutions, a hundred times bigger, called Google, Amazon and Microsoft. They’re very similar in many respects, especially in the way they collect and use data. From there, you encounter AI pretty quickly – it’s an essential part of data collection at that scale.

What should we know about AI?

First of all, the word intelligence is quite misleading. AI is statistics; it’s linear algebra with non-linear functions layered on top, and seeing it that way begins to demythologise AI – it immediately removes these conversations about AI being able to take over the world by itself. On the other hand, AI is everywhere and it is increasingly built into our infrastructures, with the people that run these systems extracting as much data as possible about our daily lives, and, of course, the goal of extracting all that data is ultimately to make money. All that information about your behaviours and habits is sold to insurance companies, which adjust your premiums, and to credit agencies, which adjust your credit ratings. AI is not passive, but actively sculpts our lives in ways that financially benefit the massive corporations operating at this scale.
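To make that demythologising point concrete: a single layer of a neural network really is just a matrix multiplication followed by a simple non-linear function. Here is a minimal sketch in Python, with made-up numbers standing in for learned weights – an illustration, not any particular system.

```python
# A toy illustration of the 'AI is statistics and linear algebra' point:
# one layer of a neural network is a matrix multiplication followed by a
# simple non-linear function. All values here are invented for illustration.
import numpy as np

x = np.array([0.2, 0.7, 0.1])   # an 'input': three measured features
W = np.random.randn(2, 3)       # weights (random stand-ins for learned values)
b = np.zeros(2)                 # biases

# Linear algebra (W @ x + b), then a non-linearity (ReLU). Stacking such
# layers is, mechanically, all a neural network does.
layer_output = np.maximum(0, W @ x + b)
print(layer_output)
```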

How does that take form in your recent work?

I’ve been looking at training images – the images that are fed into AI systems in order to teach them how to recognise different objects. Say you want to distinguish between apples and oranges: you can build what’s called a neural network and give it thousands of pictures of oranges and thousands of pictures of apples, and it will ‘learn’ to identify what an apple is and what an orange is. This is an incredibly simplistic example – in real life it happens at a much larger scale.
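For readers curious about the mechanics, the apples-versus-oranges example can be sketched in a few lines of PyTorch. This is a hedged, minimal illustration, not Paglen’s or any production pipeline; the folder layout ("fruit_photos/apple", "fruit_photos/orange") is hypothetical, with one subfolder per class.

```python
# A minimal sketch of the apples-vs-oranges training example, in PyTorch.
# Assumes a hypothetical directory 'fruit_photos/' with one subfolder per
# class (e.g. 'apple/', 'orange/'), each holding labelled photographs.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

# ImageFolder turns each subfolder into one class label.
data = datasets.ImageFolder(
    "fruit_photos",
    transform=transforms.Compose([
        transforms.Resize((64, 64)),
        transforms.ToTensor(),
    ]),
)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# A deliberately tiny network: the 'learning' is just fitting weights so
# that pixel statistics separate the two categories.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 128),
    nn.ReLU(),
    nn.Linear(128, 2),  # two outputs: apple, orange
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
```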

The piece at the Barbican draws from one of the most prominent training sets, ImageNet, which was developed at Stanford University and published in 2009 – a set of 15 million images organised into about 20,000 categories. In the words of the set’s founders, it’s an attempt to map out an entire world of objects – it’s crazy how extensive it is. The work consists of a montage, going from concepts that we think are not particularly controversial, like an apple – a concrete noun – to something like an apple picker, a more ambiguous concept: your definition of an apple picker might be completely different to mine.
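ImageNet’s categories are drawn from the WordNet noun hierarchy, so the taxonomy Paglen describes can be browsed directly. A small sketch using Python’s NLTK library – the specific synset names are illustrative, not a claim about which categories the exhibition uses.

```python
# ImageNet's roughly 20,000 categories come from the WordNet noun hierarchy,
# which can be explored with NLTK. This traces the path from a concrete
# noun like 'apple' up to the root of the taxonomy.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

apple = wn.synset("apple.n.01")
# Walk up the hypernym chain: apple -> edible fruit -> ... -> entity.
for path in apple.hypernym_paths():
    print(" -> ".join(s.name() for s in path))

# Person categories such as 'picker' sit in the same tree, which is where
# the ambiguity Paglen describes creeps in.
print(wn.synsets("picker"))
```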

The piece explores how machine learning systems are being trained to see. The training sets establish ‘natural’ or absolute definitions of things that are in fact historically constructed – gender, for example, when the only way to know someone’s gender is to ask them. So classifying in this way becomes problematic very quickly.

What will be the implications of AI for art?

I mentioned that we might both agree on the concept of an apple, but actually the first piece in the Barbican show, before you see anything, is an image of a Magritte painting that says ‘Ceci n’est pas une pomme’, and yet it has been classified by the AI system as an apple. Who gets to decide what an apple is? Is it the artist, the viewer or the machine learning system?
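The misclassification in that anecdote is easy to reproduce: a model pretrained on ImageNet must answer with one of its fixed categories, whatever it is shown. A sketch of that kind of classification using torchvision’s off-the-shelf weights – not the exhibition’s actual system, and the file path 'pomme.jpg' is a placeholder.

```python
# A sketch of the kind of classification the Magritte anecdote describes:
# a model pretrained on ImageNet assigns one of its 1,000 fixed categories
# to any image it is shown, painting or not. 'pomme.jpg' is a placeholder.
import torch
from torchvision import models, transforms
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()  # the resize/normalise pipeline the model expects
image = preprocess(Image.open("pomme.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    scores = model(image).softmax(dim=1)

# The model has no category for 'painting of an apple' – it must answer
# with one of its training labels, however inapt.
top = scores.argmax().item()
print(weights.meta["categories"][top], scores[0, top].item())
```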

Will the primacy of the artist not always be there?

You don’t know! I think I’m one of the few people to have actually opened up these training sets and looked at the images and how they are categorised. When you do, you see the assumptions – the categorisation – which is really very regressive; a kind of physiognomy. These images, these taxonomical structures are being built into infrastructures around us all the time, and they operate autonomously; you are not able to challenge how you, or anything else, is being seen.

You can see how, the worse the assumptions are, the more potential there is to harm the most vulnerable sections of society. There’s an urgent need for people to do more work opening up these systems, to try and understand how they work technically, but also the kind of politics that are being built into them. Technologies are never neutral – they actively shape society according to certain rules. There are always winners and losers.

From ‘Apple’ to ‘Anomaly’ runs at the Barbican Centre, London, until 16th February 2020

This article is taken from issue 25.