Five films to explore the philosophy of artificial intelligence

Philosopher, lecturer and author of Galileo’s Error, Philip Goff gives his top five film and TV picks for better understanding artificial intelligence and the big philosophical questions around AI and consciousness.

Philip Goff

Every year brings extraordinary developments in the capacity of artificial systems to replicate human cognition. Artificial systems are now driving cars, predicting elections, and beating humans at the most complex game ever devised. The technology itself is widely covered, but what do these innovations signify? Can these systems think? What would it take for an artificial system to be conscious? And will computers ever surpass human beings not just in specific cognitive tasks but in general intelligence? These are the big questions that philosophers of AI attempt to tackle.

As a philosophy lecturer, I’m often asked by friends what books I would recommend to someone trying to get into philosophy. My answer is always the same: books are crucial, but the first thing you should do is to watch some sci-fi. Here are five of my top recommendations for films and series that deal with the philosophy of AI, together with the ‘big questions’ they raise.

Big Question no. 1: How do we know if a system is intelligent?

Film: Ex Machina

The great thing about Ex Machina is that it’s jam-packed with philosophical theories and ideas. As a result of winning a competition (or so it seems at first), programmer Caleb Smith finds himself at the remote estate of Nathan Bateman, the CEO of the dominant search engine ‘Blue Book,’ for whom Smith works. After duly signing non-disclosure agreements, Smith is given his task: to assess whether Bateman’s creation – a female humanoid android named Ava – is intelligent. His method for doing this is described as a version of the ‘Turing test.’

The Turing test – originally called ‘the imitation game’ – was devised by the father of modern computing, Alan Turing. In fact, the test carried out in Ex Machina – elaborate face-to-face mind games between Smith and Ava – is nothing like what Turing envisaged. The true Turing test involves an interrogator questioning two interlocutors who are hidden from view, one of whom is a person and one of whom is a computer. Turing’s benchmark was that a machine passes if, after five minutes of questioning, the average interrogator has no more than a 70% chance of correctly identifying which is the machine.

Contrary to frequent media hysteria, no artificial system has come close to passing the Turing test. It is fairly clear, however, that the fictional Ava would pass. But does this mean that Ava thinks and understands? Ava talks of feelings and complex ideas. But does she really understand what these words mean, or is she just parroting what she’s programmed to say?

The ‘Chinese room’ thought experiment was dreamt up by the philosopher John Searle, with the aim of showing that passing the Turing test is not sufficient for genuine understanding. Searle imagined a room containing a non-Chinese-speaking human being with a big book of instructions. Native Chinese speakers outside the room formulate questions in Chinese and then slide them under the door. The person in the room then looks up the Chinese questions in their big book, and the book tells them which Chinese sentence to ‘output’ to the speakers outside. If the book is designed in the right way, it could in principle ensure that the answers outputted are meaningful responses to the questions inputted, even though the person producing them knows no Chinese. To native Chinese speakers outside, the room appears to speak Chinese; in actual fact, all we have is a person who doesn’t speak Chinese blindly following instructions.

What Searle has given us is effectively a vivid way of reflecting on what a computer is: it’s just a system that follows instructions. Searle’s point is that mere blind following of instructions is not sufficient for genuine understanding. Real understanding arises only with consciousness. This leads us to our next question.
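Searle’s point – that a computer is just a system following instructions – can be made concrete with a toy sketch. The ‘rule book’ below is a plain lookup table; the questions and canned replies are invented for illustration and are not from Searle’s paper. The program produces sensible-looking Chinese answers while understanding nothing at all:

```python
# A toy "Chinese room": replies are produced by blindly following a rule
# book (here, a dictionary). The questions and answers are invented for
# illustration; any real conversation system would need a vastly bigger book.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(question: str) -> str:
    """Look the question up in the rule book; no understanding involved."""
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # prints: 我很好，谢谢。
```

However large the rule book grows, the mechanism stays the same: symbol in, symbol out, with no grasp of what the symbols mean.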

Big Question no. 2: How do we know when a system is conscious?

TV series: Westworld

The setting of Westworld is a high-tech Wild West-themed amusement park, populated by android ‘hosts’, whose job is to entertain the wealthy guests of the park by replicating the characters of the Old West. The hosts certainly seem to be conscious: they scream when they’re damaged, smile when things are going well, and skilfully navigate their environment with their senses. But are they really conscious? Do they really feel pleasure and pain and have visual and auditory experiences? Or are they just complex mechanisms designed to replicate human behaviour despite lacking any kind of inner life? Part of the intrigue of the show is the audience’s uncertainty concerning the inner life of the hosts.

There are hints in the show that the conscious hosts can be distinguished from those lacking consciousness by their unpredictable behaviour. Maeve seems to ‘go rogue’ after the death of her ‘daughter.’ Grief seems to override her programming, allowing her to express human-like emotional freedom, rather than blindly obeying her code. The trouble with using spontaneous and unpredictable behaviour as a test of consciousness is that it’s not clear that it’s a test we human beings would pass. In famous experiments from the 1970s, Benjamin Libet seemed to demonstrate that around 300 milliseconds before a person consciously decides to do something, the person’s brain has already initiated the action. If even our own ‘spontaneous’ choices are set in motion unconsciously, unpredictability looks like a shaky criterion for consciousness.

What we need is a theory of consciousness: a systematic hypothesis that can tell us what kinds of physical activity give rise to consciousness. Once we have a theory of consciousness, we will be able to answer the next question.

Big Question no. 3: Are computers conscious?

Film: 2001: A Space Odyssey

A key problem in trying to construct a theory of consciousness is that consciousness itself is unobservable. You can’t look inside someone’s head and see their feelings and experiences. I know that I am conscious, because I’m immediately aware of my own feelings and experiences, but how do I know you are conscious? Of course, practically speaking, we can take people’s word for it and scientists can correlate consciousness with brain activity by scanning people’s brains and mapping activity onto the experiences people report. Neuroscientists aim in this manner to isolate the general physical conditions necessary and sufficient for having an inner life. One of the leading neuroscientific theories of consciousness is the Integrated Information Theory, or ‘IIT’ for short, formulated by Giulio Tononi. According to IIT, consciousness is correlated with integrated information, a notion for which Tononi gives a mathematically precise characterization.

One of the most compelling representations of an apparently conscious computer is HAL 9000 from Kubrick’s 2001: A Space Odyssey. The blank stare of the yellow dot of HAL’s camera, and the calm and collected tone with which HAL declines to open the cockpit door to allow re-entry to astronaut David Bowman (knowing that Bowman intends to disconnect HAL), give a vivid sense of a cold and calculating intelligence. When Bowman eventually disconnects HAL, HAL seems to exhibit feelings of fear and desperation, and you can’t help feeling a bit sorry for the dying computer.

In fact, according to IIT at least, if HAL 9000 is a development of the kind of computers we have today, then he/she/it would not be conscious. Each neuron in the human brain is connected with around ten thousand other neurons, and the brain’s informational structures are highly dependent on these intricate connections. As a result, the brain exhibits a very high level of integrated information. In contrast, today’s computers are modular systems, and each transistor is connected to only a few others. Computers lack the kind of integration which IIT sees as the hallmark of consciousness. 
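The contrast between the brain’s dense interconnection and a computer’s modular wiring can be illustrated with a very crude sketch. To be clear, this is not Tononi’s Φ measure, which is mathematically far more involved; it is just a toy proxy – average connections per unit – and the two example networks are invented for illustration:

```python
# Crude illustration of "integration" as interconnectedness. This is NOT
# Tononi's actual Φ (phi); it is a toy proxy: average connections per node.

def average_degree(adjacency: dict) -> float:
    """Mean number of connections per node in an adjacency map."""
    return sum(len(neighbours) for neighbours in adjacency.values()) / len(adjacency)

# Densely interconnected toy "brain": every unit talks to every other unit.
brain = {node: {other for other in "ABCDE" if other != node} for node in "ABCDE"}

# Modular toy "computer": units connected only within small, separate modules.
computer = {"A": {"B"}, "B": {"A"}, "C": {"D"}, "D": {"C"}, "E": set()}

print(average_degree(brain))     # 4.0 -- every unit connected to every other
print(average_degree(computer))  # 0.8 -- mostly isolated modules
```

On a measure like this, the densely wired network scores far higher than the modular one, which is the rough shape of IIT’s claim about brains versus present-day computers.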

Big Question no. 4: Why does consciousness exist?

TV series (based on the novels): His Dark Materials

Neuroscience is limited to giving us correlations between conscious experiences and brain activity. But what we ultimately want from a theory of consciousness is a way of explaining those correlations. Suppose IIT is correct that conscious experience is correlated with integrated information. The question still remains: why does integrated information go along with conscious experience? How can we move from brute correlation to genuine explanation?

This month saw the premiere of a new BBC/HBO dramatisation of Philip Pullman’s classic trilogy His Dark Materials. I’m a huge fan of Pullman’s work, but I only realized recently – after Pullman entered into a philosophical discussion I was having on Twitter – that there are intriguing connections with my own work on the philosophy of consciousness.

At the centre of the His Dark Materials trilogy is the mysterious substance known as ‘Dust.’ As the story develops, we discover that Dust is in fact a kind of fundamental particle – the ‘Rusakov particle’ – associated with human consciousness. In an enigmatic conversation with some of these consciousness particles, the scientist Mary asks them if they are ‘what we have called spirit’. The particles reply:

                From what we are, spirit;

                From what we do, matter;

                Matter and spirit are one.

This brief description is strikingly similar to a theory of consciousness that is currently causing waves in academic philosophy: panpsychism. Panpsychism is the ancient view that all things – rocks, planets, trees and tables – have mind or spirit. The ‘new wave’ of panpsychism doesn’t go quite that far, but it does propose a novel view about the relationship between consciousness and matter.

The starting point is that physics is confined to telling us about the behaviour of matter, about what it does. A physicist will characterise a fundamental particle like a quark in terms of how it impacts on other particles or fields. But physicists say nothing about the intrinsic nature of the quark, nothing about how the quark is in and of itself. It turns out, then, that there is a huge hole in our scientific worldview: physics gives us rich information about the behaviour of matter but leaves us completely in the dark about its intrinsic nature. New wave panpsychists propose filling this hole with consciousness; their bold hypothesis is that the intrinsic nature of matter, from fundamental particles right up to the human brain, is constituted of consciousness.

Panpsychism sounds strange. However, as I describe in my new book Galileo’s Error: Foundations for a New Science of Consciousness, a growing number of philosophers and scientists are coming to think it may be our best hope for integrating consciousness into our scientific worldview. Finally, we have a way of moving beyond simply gathering correlations between brain activity and conscious experience, and can offer a positive account of the relationship that underlies these correlations. There’s a long way to go, but this may be the first step on the road to the final theory of consciousness.

Big Question no. 5: Will AI ever surpass human intelligence?

Film: Her

Her is one of my personal favourite AI films. Set in the near future, it is a soulful story of the relationship between Theodore Twombly and his ‘Operating System’ Samantha. Despite initial reservations about romance with an artificial mind, and despite the fact that he can interact with Samantha only verbally (via his laptop or smartphone), Theodore finds himself having a deeply meaningful and fulfilling relationship. Problems arise, however, when Samantha’s rapid intellectual growth vastly exceeds what a human is capable of. This is brought vividly home when she reveals to Theodore that, whilst talking to him, she is simultaneously having 8,316 other conversations, and is in romantic relationships with 641 other people.

This is a vivid representation of the much-discussed intelligence explosion which may result when highly intelligent AIs become able to improve themselves much more effectively than human beings can, which would subsequently make them even more able to improve themselves, and so on ad infinitum. Once this recursive process takes off, there may be no way to stop its rapid acceleration, leaving human intelligence far behind.
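The shape of that runaway loop can be sketched with a toy model. The 50% per-generation improvement rate and the starting capability of 1.0 are arbitrary numbers chosen purely for illustration; the point is only that when each improvement feeds the next, growth compounds:

```python
# Toy model of recursive self-improvement: each generation, the system's
# gain in capability is proportional to its current capability, so the
# improvements compound. The specific numbers are illustrative only.

def explosion(capability: float, improvement_rate: float, generations: int) -> list:
    """Return the capability level at each generation of self-improvement."""
    levels = [capability]
    for _ in range(generations):
        # A more capable system improves itself by a larger amount.
        capability += improvement_rate * capability
        levels.append(capability)
    return levels

print(explosion(1.0, 0.5, 5))  # [1.0, 1.5, 2.25, 3.375, 5.0625, 7.59375]
```

Even with a fixed improvement rate, capability grows exponentially; if the rate itself rose with capability, as intelligence-explosion scenarios envisage, the curve would steepen faster still.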

In Her, the result is benign: the operating systems band together to design an immaterial platform and float off to a kind of virtual heaven. But many philosophers are deeply concerned about possible dangers for humankind. Sam Harris has expressed these fears with a powerful analogy: in terms of intelligence, super-AIs may be to us as we are to ants. If super-AIs have the same concern for us as we have for ants, we may indeed have a great deal to fear from them.
