So when experience studio AKQA were asked to create opening titles for the upcoming Future State conference (a component of Semi Permanent 2019), on the theme of our changing relationship with technology, it made a certain amount of sense to create a new AI to thwart us at yet another favourite game — this time, the Atari classic, Asteroids.
But what began as a demonstration of AI’s mastery of game play quickly evolved into an experiment in how our binary conception of ‘machine’ and ‘human’ might one day dissolve, through cooperation, or by merging into a single, new entity.
The resulting work, Neuromuscle, demonstrates this speculative reality through an AI that is designed to fit on to, and control, human hands.
The device works by connecting an AI to a TENS (transcutaneous electrical nerve stimulation) apparatus. When connected to a human ‘user’s’ upper arms, the AI is able to command the hands to play an original 1979 Asteroids arcade cabinet.
To the team responsible for the creation, the project became an allegory for the anxieties around the perceived supremacy of artificial intelligence, and what AI development will mean for the future of human work and purpose.
Tim Devine, Executive Creative Director, and Dr Jaehyun Shin, a machine learning specialist at AKQA, talk us through their unnerving invention.
Let’s start by talking about how your team works and how this Future State project came about.
We are a creative agency but we're also a bridging agency. We bridge brands and artists with emerging technology; trying to understand both enough to do something meaningful. After Somesthetic Transfer in 2018 (a project which fused machine intelligence with human creativity through a series of style transfer artistic collaborations), we were talking with Murray [Bell, founder of Semi Permanent] about how we could explore an ongoing project; something that started at the festival one year and finished at the next. Murray was into it and he started talking about the Future State panels and asked if we wanted to do the titles.
We don't really do motion design or animation, but we do AI and machine learning which was to be a huge part of Future State. So for all intents and purposes we kicked it off with the title design, but that set the scene for how the future will be presented; where we are now vs. where we will be soon.
Was the original intent to create a direct lineage between this project and Somesthetic Transfer?
We kept thinking about how we collaborate with machines and how we're increasingly relying on AI to make decisions for us. Adam [Grant, Creative Director at AKQA] had this idea of life as just a series of problem solving exercises: 'my task is to get coffee. My task is to go somewhere' etc. And that's the current state of artificial intelligence too: 'I'm a car. I drive down the road. That's a house. That's a tree'. But it's not so good at saying 'I'm going for a joyride.' It's good at finding tumours or playing Go or understanding if we're depressed. But they're all discrete tasks.
None of them are particularly visual either.
Often we're experiencing AI without even realising it. We wanted to experience machines doing something better than we can. Similar to Somesthetic Transfer, the initial idea was that we'd give a machine learning neural network a few visual frames and instruct it to extrapolate them into a longer video. After the first week of generating videos, it wasn't particularly exciting. Then we thought instead about building on the idea of Future State itself, which was where the idea of the OpenAI Gym came up.
OpenAI Gym is a toolkit for training AI agents to play old Atari games without giving them any information other than 'increase the score'. Video games are an interesting space because you can really clearly show a machine being better than a human. You could watch a 3D line learning how to walk or Atlas the Robot learning to jump, but that only shows them doing it as well as a human. The same goes for a self-driving car; over time we want it to kill fewer people, but it's not driving noticeably better than a human can.
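The 'increase the score' setup described here is the standard Gym agent–environment loop: the agent sees only observations and a reward signal, never the game's rules. A minimal sketch of that loop follows; since the real Atari environment (something like `gym.make("Asteroids-v4")`) needs ROMs installed separately, the sketch uses a hypothetical stand-in environment with the same `reset`/`step` interface.

```python
import random

class StubAsteroids:
    """Stand-in for a Gym Atari environment: same reset/step API,
    but with a trivial scoring rule so the sketch runs anywhere.
    (Hypothetical; the project would use the real Asteroids env.)"""
    ACTIONS = 4  # small action space; real Asteroids has 14 actions

    def __init__(self, episode_length=100):
        self.episode_length = episode_length

    def reset(self):
        self.t = 0
        return 0  # observation (a real env returns a screen frame)

    def step(self, action):
        self.t += 1
        # Pretend action 2 is "fire and hit an asteroid".
        reward = 1.0 if action == 2 else 0.0
        done = self.t >= self.episode_length
        return 0, reward, done, {}

def run_episode(env, policy):
    """The Gym loop: the agent is told nothing but 'increase the score'."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total

env = StubAsteroids()
# An untrained agent acts randomly; a trained one has learned what scores.
random_score = run_episode(env, lambda obs: random.randrange(StubAsteroids.ACTIONS))
greedy_score = run_episode(env, lambda obs: 2)
print(random_score, greedy_score)
```

The gap between the random and the learned policy's scores is exactly what the team filmed: the first attempts are poor, then the reward signal alone drives the agent past human play.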
A video game requires a lot of logic and some creativity too.
We thought showing a video game would conjure the thought that a human could never move or think that fast. We'd seen a few examples of machines playing games better than humans: the first try is always really poor, the second follows a different strategy, then it just gets better and better. We loved the idea of showing the machine failing but getting better. Adam had the idea to build an arcade unit to over-emphasise the fact a computer is playing.
It adds a physical element to a virtual environment.
We thought we'd film it and make it really loud. Ryoji Ikeda was a big reference here; black and white and noisy with a cacophony of machinery and 8-bit sound effects. Between that and the visual concept of zooming out to slowly reveal what's happening, it allowed us to show how quickly it's learning in relative human time. We talked about Malcolm Gladwell's 10,000 hours theory: that it takes 10,000 hours of practice to truly master a craft. If we could get a machine to achieve that within a day, that would be interesting.
Dr Jaehyun Shin: Collaborating with other people is an important process for us; seeing how other people process ideas and work, and learning from them. We're doing the same thing here, but with an AI.
You can watch it learn how to play this video game in real time where it makes these crazy, cold, tactical decisions mid-generation. Sometimes you learn something, sometimes it's just the machine being random, then it dies and takes another path and tries again. There is no fear or hesitation or danger.
I felt both this project and Somesthetic Transfer were at the convergence of fine art and technology. Do those approaches differ?
As eccentric as this is, this was more about giving people the experience of machines getting better. Art, particularly fine art, is a process, and we haven't followed an artistic process but a design one. We had a purpose and a clear outcome instead of something abstract.
What was it like watching the machine learn in real-time?
AG: It's tragedy and comedy. We spent quite a bit of time laughing at its dumb decisions, not just in the arcade game but in the generative video pieces and understanding its different ways of thinking.
TD: That's why we pivoted to showing the process instead of the outcome, because we found the most enjoyable part was the tension of its weird decisions that made no sense to us as logical humans. For example, we gave it a few frames from a hyper-lapse video of a seedling growing, thinking it would make sense of that pretty easily. But we were forgetting our own bias of growing up in nature and knowing the natural order of things. The videos we got back were weird and nothing like what we expected. It was funny and frustrating but ultimately fruitful as it caused us to change tracks. Instead of showing this final artistic product, we could lift the lid and show the process of how it makes (what we might deem) a bad decision, though ultimately ending up in a superhuman place.
How optimistic are you about AI and machine learning in general after this experience?
TD: The tech is ok. We're the problem.
There's that comment from Dave Graney about indie musicians playing chords they don't understand, and in many ways we have access to technology that's far more powerful than we can comprehend. We understand it enough to work with it, but we don't understand all the implications yet.
For example, we don't understand what a social media network does to us. Six years ago the Arab Spring was this great liberator, but five years later Donald Trump is elected via the greatest disinformation platform that’s ever existed and starts cutting funding for the environment. You can draw these lines about how things affect the world. AI is no different. We don't understand technology and that's part of the story. We need more critical design to counterbalance the gloss and hype.