0. Introduction
When I used GPT-3 for the first time, it filled me first with awe and then with dread, because I could see how close AI was to surpassing me in the research, writing, and philosophy I had trained so hard for. And if the Copernican revolution took away our special place in the universe and Darwin robbed us of our special place in nature, then AI threatens to undermine the last pride of the human race: our intelligence. Today, work gives many of us purpose and meaning. But AI is making more and more of that work obsolete, until one day, perhaps, all human activity may be redundant. Nick Bostrom's Deep Utopia is about that day and what to do about it. In this interview, you're going to learn where to invest your time, effort, and resources so that they are most AI-proof, and what the good life looks like in a world where AI has made you obsolete.
1. Pre-Utopia
1.1 Learning Over Production
Johnathan Bi: AI changes the payoff functions of different types of human activities. Certain activities are going to be made obsolete sooner, and others seem more robust. The three I want to talk with you about are learning over production, relationships over things, and capital over labor. For the first one, learning over production, I was thinking about the thought experiment in your book of a technology that can help us learn. You describe this as potentially one of the most difficult technologies to create, because it requires reading and changing billions, if not trillions, of synapses in our brains to learn a new language or gain the experience of becoming a good parent. My thought is that anyone engaged in intellectual work may be rendered obsolete. Maybe in just a few years' time we can feed your book Deep Utopia into an AI, and it will generate a better podcast interview script than the one we can produce today. And then video AI will be able to create something more beautiful than the conversation we're having. However, if I wanted to...
Nick Bostrom: Even more beautiful.
Johnathan Bi: Yes, even more beautiful. However, if I wanted to understand Professor Bostrom's work, I would still have to do the hard work of sitting there and reading it, and that seems likely to remain true for much longer. So that seems to be one area of life that will be more resistant to AI innovation.
Nick Bostrom: If you think about what it would actually require to directly download skills through some sort of neurotechnology - first, you would have to be able to read your current synapses, how they are configured. As you said, millions or billions or trillions of them. Then you would need to interpret what all of those synapses actually encode currently, and then figure out how they would need to be changed so that they now encode this additional knowledge without messing up what's already there or changing your personality too much. And then you would have to physically change all of them. This definitely seems like a task for mature superintelligence. But until that time, if you want relatively fine-grained control over what goes on inside your brain, the best method is the traditional one of reading, thinking, talking, working on yourself, and meditating.
Johnathan Bi: I think that's actually quite optimistic because, at least according to Aristotle at the beginning of his Metaphysics, all men desire to know, and knowledge and the contemplative life are in some sense the most human life. For Aristotle, that is the life that would be made obsolete only at the very end.
Nick Bostrom: I always find it a little bit funny when philosophers think, "What's the best and highest form of being a human?" and the conclusion is being a philosopher. But maybe they are right. I've actually for a long time thought of philosophy as something that has a deadline. This occurred to me in my late teens. It seemed to me that at some point our current philosophical efforts would be rendered obsolete either by AIs that could become superintelligent, or maybe by humans developing cognitive enhancements that would make subsequent generations much better at this. Rather than spending my time thinking about the eternal questions of philosophy, it seemed more useful to focus on that subset of questions where it might actually matter whether we get the answer now as opposed to in a few hundred years.
Johnathan Bi: Would you prefer to know the answers to those eternal questions?
Nick Bostrom: One is certainly curious, but if we do end up on a trajectory where the human lifespan becomes extremely long, then maybe rather than using up all these mysteries right away to immediately know the answers, you would want to spread them out, so you have interesting things to learn and discover even hundreds or thousands of years into the future. Ignorance might become a scarce commodity. Maybe you put some on the shelf, like a bottle of champagne from a specific vintage of which only a few remain. You might want to save it for a special occasion. But there might be some other professions as well that are relatively immune - where it's not just the specific knowledge and skills we have, but the fact that these things are done by a human that is regarded as significant in its own right.
Johnathan Bi: Could you give some examples?
Nick Bostrom: Well, priests, prostitutes, politicians - where there might be a demand specifically that the service be provided by a human being as opposed to a machine, even if a machine could have all the same functional attributes.
1.2 Relationships Over Things
Johnathan Bi: I want to move on to the second type of human activity that is somewhat AI-resistant, and that seems to be relationships over things. In your book, you gave a thought experiment: let's say we're at technological maturity and we have a better version of an AI parent in every domain - better at changing diapers, better at teaching, better at emotional support. I would still be willing to bet that most people wouldn't want to completely outsource their role as a parent. This could be even more resistant to technologically mature AI, because what's constitutive of forming a relationship is that it requires, at least in our current view, two humans.
Nick Bostrom: I think relationships are one of the more plausible places where we might find purpose that survives this transition to technological maturity. There is some value in honoring and continuing an existing relationship between two people. Even if you could teleport in a robotic surrogate that would be functionally superior, it wouldn't be as good. Or maybe it would be better in some respects, but it would also lose this value of continuing the current relationship, even if the robot appeared indistinguishable from the original parent. If you imagine a thought experiment where nobody would actually notice any difference, you might still think that the reality is that there is now a different person playing the parenting role. If you care about truth in relationships, that might already be a disvalue. Existing human relationships, to the extent that they partially consist in intrinsically valuing this connection to a particular other being, would potentially be resistant.
Johnathan Bi: What parts of a child's education are potentially made obsolete by the current wave of AI innovation?