
‘Ex Machina’: An ‘Ideas Movie’

Director Alex Garland discusses his new film, artificial intelligence, and why you should be more wary of Google

April 17, 2015

"Basically, it’s an ideas movie," Alex Garland said toward the end of our interview about his new picture, Ex Machina. "It’s to provoke conversations and the thing I really hoped about this film more than anything—and it was made at a budget that allowed this to happen, a sort of creative freedom, really—is that it could be thoughtful."

Ex Machina, reviewed here, is definitely a drinks-after-watching sort of flick, the type of movie that makes you want to stop off at a nearby bar to discuss what you’ve just seen.

The film stars Oscar Isaac as Nathan, a genius programmer who runs a Google-like search engine company and thinks he has cracked the secret of artificial intelligence—a secret he has housed in Ava (Alicia Vikander), a beautiful robot. Caleb (Domhnall Gleeson), an employee at Nathan’s company, is flown out to the CEO’s secluded estate in order to test the robot and see if she has attained sentience.

But what is sentience? How does one even define that idea?

"Computers are not self-aware at the moment at all—and what is that?" Garland asked while we talked about the idea of artificial intelligence. "If you had a computer that was able to differentiate between itself and other computers, to say ‘I’m gonna give myself the value X, and other things I’m going to give the value Y or Z,’ is that machine at that point self aware? Because it can recognize other things and recognize it is not those things?"

The answer, it seems, is no. And this is one of the reasons the Turing Test, as intriguing a concept as it is, isn’t terribly useful for determining whether a computer program has developed anything like self-awareness or sentience: the test measures how convincingly a machine can imitate human conversation, not whether anything is going on inside. A chatbot tricking a person into thinking they’re talking to another person might be interesting, but it’s not an evolutionary leap forward.
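To see why Garland’s hypothetical falls short of awareness, consider how little it takes to build. The toy Python sketch below is purely illustrative, not anything from the film or the interview, and the Machine class, machine_id field, and classify method are invented for the example: a machine that sorts the world into "self" and "other" needs nothing more than an equality check.

```python
# Purely illustrative: a "machine" that can tell itself apart from other machines.
# The point is how little this requires -- no inner experience, just a comparison.

class Machine:
    def __init__(self, machine_id):
        self.machine_id = machine_id  # the value "X" in Garland's hypothetical

    def classify(self, other):
        """Label another machine relative to this one: 'self' or 'other'."""
        return "self" if other.machine_id == self.machine_id else "other"


if __name__ == "__main__":
    a = Machine("X")
    b = Machine("Y")

    print(a.classify(a))  # prints "self"
    print(a.classify(b))  # prints "other"
```

Passing that bar comes down to a single comparison, which is exactly why it tells you nothing about whether anyone is home.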

Interestingly, given the way the movie plays out,* Garland is not terribly worried about the development of AI, nor does he share the fear that its creation will mean our destruction.

"When we can do something, typically we then do it. So the question stops being ‘should we or shouldn’t we,’ but how do we deal with it when it happens, if it happens," Garland said. "All I can say is that when I see people getting very anxious about it, like Elon Musk and Stephen Hawking, stuff like that, whilst I understand where the anxiety comes from, my starting point isn’t that. … I start from a position of hope and interest, broadly speaking."

These are rather esoteric concerns, however; we’re still a long way off from developing a real artificial intelligence. More in tune with the times is Garland’s concern about the information industry’s rapacious capture of our data and the extremely modest limits on its ability to mine our lives for its own economic well-being.

"Google is like NASA in the 60s: They’re the guys who are going to the moon. They’re the guys that are dropping massive amounts of money on strange, blue sky type enterprises—that have no obvious monetary function and are really just about pushing forward the boundaries of what humankind can do," Garland said. But that sort of advance can only come with power—and power needs to be monitored. "I really don’t care whether big tech companies are doing anything wrong. They need oversight because they’ve got so much power. And I think this film is paranoid not about AIs, but about big tech companies. So it’s slightly nudging people in that direction."

*No spoilers here; you’ll need to pony up $10 to see how the movie ends. And you should, as it’s a great flick.