0. Introduction
Oxford's Michael Wooldridge is a veteran AI researcher and a pioneer of agent-based AI. In this interview, Wooldridge takes us through the entire 100-year history of AI development, from Turing all the way to contemporary LLMs. Now you might wonder: why bother studying the history of a technology? Technology should be about state-of-the-art techniques, about the latest developments. What's the point of revisiting the old and outdated? It turns out there are at least two important reasons to study this history. The first is that it will help you better anticipate AI's future. Wooldridge is extremely skeptical that this iteration of AI will lead to the singularity, partly because he has lived through so many similar cycles of AI hype. Whenever there's a breakthrough, you get apocalyptic predictions pushed by religious zealots, which set up unreasonable expectations that eventually set the field back.
Studying this history of boom and bust will help you see through the hype and grasp where the technology is really going. The second, and even more valuable, reason to study this history is that it contains overlooked techniques that could inspire innovations today. Even for me, someone who started coding at 15, studied CS in college, and then went on to build a tech company, Wooldridge uncovered entire paradigms of AI that I hadn't even heard of. Wooldridge's claim is that these paths not taken in AI, the alleged failures, weren't wrong; they were just too early. This 100-year history is a treasure trove of ideas we should look back to today for inspiration.
1. The Singularity Is Bullshit
Johnathan Bi: This is the provocative title of a chapter in your book, The Road to Conscious Machines: the singularity is bullshit. Why is that?
Michael Wooldridge: There is this narrative out there, and it's a very popular and very compelling narrative, which is that at some point machines are going to become as intelligent as human beings, and then they can apply their intelligence to making themselves even smarter. The story is that it all spirals out of our control. This is, of course, the plot of quite a lot of science fiction movies, notably Terminator, and I love those movies just as much as anybody does. But it's deeply implausible, and I became frustrated with that narrative for all sorts of reasons. One of them is that whenever it comes up in serious debate about where AI is going and what the risks are (and there are real risks associated with AI), it tends to suck all the oxygen out of the room, in the phrase a colleague of mine used. It dominates the conversation and distracts us from the things we should really be talking about.
Johnathan Bi: Right. In fact, there's a whole discipline that has come out of this called existential risk: worrying about the Terminator scenario and figuring out how we might better align these superintelligent agents with human interests. And if you look not just at the narrative but at the funding, and at what the smartest people in companies and policy groups are devoting their time to thinking about, x-risk, existential risk, is the dominant share of the entire market, so to speak. Why do you think this narrative has gained such a big following?
Michael Wooldridge: I think it's the low-probability but very, very high-impact argument. I think most people accept that this is not tremendously plausible, but if it did happen, it would be the worst thing ever. So when you multiply that small probability by that enormous cost, the argument goes, it's something you should start to think about. But when the success of large language models became apparent and ChatGPT was released and everybody got very excited about this last year, the debate reached slightly hysterical levels and became slightly unhinged at some points. My sense is that the debate has calmed down a little and is now more focused on the actualities of where we are and what the risks really are.
Johnathan Bi: Right. I think that's quite a charitable reading of the psychology: a rational calculus, a small probability multiplied by a very large cost. I study religious history, and when I talk to people in the x-risk world, the psychology reminds me of the Christian apocalyptics. Throughout Christian history there have been people saying: now is the time. This happened most recently, probably, around the millennium in 1999. It's a psychological drive that wants to grab at something total and eschatological as a way to orient the entire world. You can maybe see some of the same psychology in climate risk as well. What I'm trying to highlight is not a claim about whether these things are true: I'm not saying the world is ending as Christianity foretold, that the climate isn't changing, or that there is no x-risk. It's that the reason people seem attracted to this narrative is almost a religious phenomenon.
Michael Wooldridge: I think that's right. And I think it appeals to something almost primal in human nature. At its most fundamental level, it's the idea that you create something, you have a child, and they turn on you. That's the ultimate nightmare for parents: you give birth, you nurture something, you create something, and then it turns on you.
Johnathan Bi: Zeus and Cronus, right? It's an archetype.
Michael Wooldridge: Exactly. That narrative, that story, is very, very resonant. Go back to the original science fiction text, Frankenstein: that is literally the plot. You use science to create life, to give life to something, and then it turns on you and you've lost control of that thing. So it's a very, very resonant idea, and it's very easy for people to latch on to.
Johnathan Bi: Right. It's easy for us to critique the psychology here, but what do you think is wrong, or what do people miss, about the argument itself: that once we have machine intelligence that is superintelligent, or at least on par with human intelligence, it can recursively improve upon itself? What are people missing when they give too much weight to that argument?