Transcript
Cold open [00:00:00]
Ian Morris: Sungir is in this very unpromising looking location. It’s 150 miles northeast of Moscow. And what we find there is this group of burials where the dead have been laid out in these graves. Then people have spent hours and hours grinding up ochre — it produces this powder that allows you to stain things red.
Then they buried these people in these elaborate costumes, which we think were like animal skins. But sewn onto these animal skins are thousands of little beads that have been made by cutting up the bones and teeth of deer and snow leopards and other animals and grinding them into shape.
And along with them, they’ve taken mammoth tusks, and then hundreds and hundreds of hours of labour have been put into straightening the mammoth tusks, making it so they’re 20-foot-long straight rods that would have been so heavy, almost impossible to pick up. This is just astonishing what these people were doing.
And it’s the kind of thing where if it dated to 2000 BC, you would automatically say that this is the burial of a great, powerful chief and all his family. But it’s 32,000 years ago. This is something that sort of should not be happening.
And of course, it’s a real challenge for evolutionary theory to say why we, once in a blue moon, get these bizarre cases of people who are basically hunter-gatherers producing stuff they should not be producing, they should not be living lives like this.
Rob’s intro [00:01:27]
Rob Wiblin: Hey listeners, Rob Wiblin here, Head of Research at 80,000 Hours. I last interviewed Ian Morris about what we can learn from the field of macro history last year in episode #134: Ian Morris on what big-picture history teaches us. I had even more fun than usual in that one, and it was one of our most popular episodes to boot.
And since then, I couldn’t help but notice that Ian has been commenting here and there about what macro history might have to say about the likely impact of AI, as well as the ideas of a so-called singularity or intelligence explosion, or the proposition that we might see any big increase in economic growth or technological progress in future.
That stuff’s all a reasonable distance from Ian’s specialty, but I thought he would be a great person to bounce off some ideas that I’ve been developing on those topics this year, to get his reaction to whether my ideas make sense or not.
Among other things, we talk about:
- Some just crazy anomalies in the historical record of civilisational progress (that’s probably my favourite bit of the episode)
- Whether we should think about technology from an evolutionary perspective
- Whether we ought to expect war to make a resurgence or continue dying out
- Why we can’t end up living like The Jetsons
- Whether stagnation or cyclical recurring futures seem very plausible
- The most likely reasons for me to be really wrong and barking up the wrong tree about all of this
- And finally, how professional historians react to this sort of talk
Without further ado, I again bring you Ian Morris.
The interview begins [00:03:02]
Rob Wiblin: Today I’m speaking with historian and classicist Ian Morris of Stanford University. As a teenager, Ian played in a heavy metal band, but ultimately decided to take his career in a different direction, doing a PhD in ancient Greek culture at Cambridge, teaching at the University of Chicago, moving to Stanford, and directing an archaeological excavation in Italy, among other projects.
But over the last 20 years, Ian has set himself the enormous task of making sense of macro history — that is, trying to understand the big-picture changes in human development and organisation over tens or even hundreds of thousands of years. That enterprise has led him to write, in order: Why the West Rules—For Now: The Patterns of History, and What They Reveal about the Future; The Measure of Civilization: How Social Development Decides the Fate of Nations; War! What Is It Good For? Conflict and the Progress of Civilization from Primates to Robots; Foragers, Farmers, and Fossil Fuels: How Human Values Evolve; and most recently, Geography Is Destiny: Britain and the World: A 10,000-Year History. Several of those were hits with the general public, as was his interview with me last year: episode #134 — Ian Morris on what big-picture history teaches us, which actually holds the record for the episode with the highest level of listening time in its first month.
Thanks for coming on the podcast, Ian.
Ian Morris: Well, thanks very much for having me back on.
Why we should expect the future to be wild [00:04:08]
Rob Wiblin: I hope to talk about what history teaches us to expect about the future, so let’s get into it. The reason I was so keen to invite you back here is not just that I think you’re one of the most entertaining people in the world to listen to, but also that you’re one of few historians that I’ve seen grapple seriously with what we might be able to project about the long-term future by looking at the long-term past.
Your book, Why the West Rules—For Now, reads at the end a little bit like you unintentionally stumbled on some ideas that you weren’t necessarily looking for, but realised in the process of writing it were incredibly important, and also surprisingly hard to dismiss. When I went back and listened to what you say in this book, which was written 12 years ago, I just thought it was so prescient and could just as well have been written this year.
So let’s start there. In Why the West Rules—For Now, you try projecting forward various past trends in population, energy consumption, productivity, military technology, information technology, and so on. What vision of the future do you end up entertaining in that final chapter?
Ian Morris: Yeah, like you say, it did take me a little bit by surprise, because what I was interested in in that book, Why the West Rules—For Now, was a big argument going on about 10 years ago about whether the Western domination in the world that we saw in the 19th and 20th centuries was kind of locked in in the distant past, by events that went on thousands of years ago, really beyond anybody’s power to change now? Or was this some fairly recent development, accidental kind of thing that could easily be flipped around?
And it seemed to me that people arguing over that were often talking at cross purposes. So I thought maybe the best thing to do would be to come up with some kind of index, some way of measuring the development in Eastern and Western societies over the very long term. Then that would at least partly answer that question. So I did that. And it all went pretty well, I thought.
But then it occurred to me, of course, having drawn up this index, there’s nothing to stop us from taking the way Eastern and Western development have been increasing over the last few centuries and just projecting these trendlines forward. And there’s nothing to suggest that the future is going to be a sort of linear continuation of the recent past — everything to suggest, in fact, it won’t be — but all the same, what would happen if we did that? So I went ahead and did that. The main thing I was interested in, first of all, was where will the Eastern line catch up with the Western line, Eastern development reach the same level as Western? And if we project them forward linearly, we get this very precise answer: it will be the year 2103 — not 2104, not 2102 — 2103. So take that for what it’s worth.
But then the other thing that really struck me, that I hadn’t thought about before I did this exercise, was just how high development is going to be a century from now if the current rates of increase continue. And on my index, to get all the way from the societies we had in the ice age — 10,000, 12,000 years ago — right up till modern times, that was an increase of about 900 points in development. To go from where we are now, around 900 and some points, and project that forward for 100 years — increasing at the same kind of rate — that puts us up to something like 5,000 points: four times as much change as we’ve seen in the entire period since the end of the ice age.
And that was pretty mind boggling. If that is even vaguely accurate, what will that mean? What will it mean to be a human being if we see four times as much change in 100 years as we’ve seen in the previous 15,000 years? So that was kind of how I got into this question.
Rob Wiblin: Yeah. If you just say we’re going to see four times as much development in the next 100 years as we’d seen up until now, what does that imply about what the world would look like?
Ian Morris: Well, that question gets a lot more complicated once you start thinking about what’s involved in it. Because the easy way to approach it is just take all these different things in the world that currently generate an overall score of 900 points and just quadruple them. So the size of the biggest city is going to be four times as big. And the amount of energy we extract from the environment around us, the free energy that makes everything in the world work, can be four times as much. And you just sort of go on doing that.
But I realised pretty quickly when I started doing that, that that kind of misses the point of what the question is here. And maybe a more sensible way to think about it was to think ourselves back not just 100 years, but many, many hundreds of years into the past, and ask ourselves: For people living, say, in the 16th century, the 15th century, how much of our current world could they have foreseen? And the neat thing is, of course, going back for many centuries, we’ve got lots of surviving futurist literature: people talking about the future and what their visions of the world look like. So you can look at these things, and of course, it’s sort of entertaining — because on the whole, they can’t imagine what the questions are that you need to ask that will lead you to envision a world like the one we live in.
And I think this is very much true when we ask this question of what does the 5,000-point world look like? It’s going to be so different from what we’re used to that we don’t even know what to ask about it. I think the very nature of what it means to be a human being is going to be absolutely transformed — that the boundaries of humanity as we know it are likely to come to an end within the next 100 years.
Rob Wiblin: OK, we’ll come back to some reasons why this sort of forward extrapolation might be misguided, and other directions that things could go and why, but we’re going to stick with what if future trends do look like past trends for a minute.
Something that’s useful to hold in mind is that global GDP just continuing to grow at its medium-term average of 3.5% a year would imply that in 100 years’ time, the global economy would be over 30 times larger than it is now. So that’s already a pretty big shift, a fairly amazing one in a way. You can imagine that a megaproject that presently seems prohibitively expensive might seem very cheap and very affordable for humanity to engage in in a world where the economy was 30 times bigger.
But turning from the future to the past for a second: When you try to quantify the aggregate progress or power of humanity as a species, what are the long-term trends that people should keep in mind? What’s the progression of that?
Ian Morris: I think one thing to think about is what you were saying at the beginning there, about global GDP becoming, say, 30 times bigger. If we were to go back into the distant past, into the world of hunter-gatherers shortly after the end of the ice age — and this would be a weird conversation to have, I grant you — and say you were talking to an ice age hunter-gatherer about global GDP, and you were saying global GDP is going to become 30 times bigger: that’s a simple enough concept for anybody to grasp. But what that would mean for the hunter-gatherer would be much harder to grasp. Because you could not have a global GDP that’s 30 times bigger and remain hunter-gatherers. Everything about your world would have to be transformed.
And I think it’s easy to see how that applies to our world too: you can’t just have what we’ve got now and pump it up with a bike pump and make it 30 times bigger. There aren’t enough resources in the world of the kind that we are currently using to produce 30 times as much GDP. Really, really profound things have to happen.
And for the ice age hunter-gatherers, what’s going to happen for those guys? Your entire world is going to be annihilated. Everything that you hold dear will be completely destroyed. To produce a 30 times bigger GDP, you’re going to have to become farming societies. You’re going to have to go from living in bands of a couple of dozen people that are constantly on the move as different wild plants ripen, different wild animals migrate, these little bands where there’s very little hierarchy within the band: that is all going to go out of the window.
You’re going to settle down into permanent villages with new kinds of diseases. Hierarchy will become unavoidable. You’re going to have chiefs eventually. You’re going to have kings. You’re going to spend all your time out there working the fields, pushing ploughs around. Everything that you understand and hold dear is going to go. And I can’t help but think that’s what’s implied by a 30-fold increase in GDP in our own world too.
Rob Wiblin: I think for me, the key fact that I remember just really striking me in the head back in 2008 or 2009, when I first encountered it, is not just that the economy or human influence has been growing over time, but rather that the rate of increase has been increasing.
So there was this long period when we were hunter-gatherers where the annual rate of growth in human population or human technology per year was negligible — 0.1% or something incredibly small. And then you get to the farming era, when you get a really significant increase — it’s glacial change from our point of view, but very much faster change than what was going on before during the hunter-gatherer era — I guess because people are in cities, there’s more ability to record knowledge, there’s more people who have the slack to do some research and come up with new ideas and figure out new ways of doing things. So the growth rate increases a lot once we have settled agriculture and cities and empires and so on.
And then in 1700, 1800, it steps up again by a really big factor — three or 10 or something like that — to the modern world, where we’re kind of used to the idea that technology is changing within people’s lifetimes: that by the time they die, things might look really quite different than they were when they were born, which definitely wasn’t the case before the industrial era.
So if you project forward, you don’t just have to think about just growth continuing, but also the potential that we’ll get a third step change, where the rate of growth will increase compared to the industrial era. Earlier I said that if the economy grew 3.5% for 100 years, then the economy would end up 30-fold larger each century. But if we go through a third phase shift like you’re describing, and average growth rates triple to 10.5% a year, then over the following 100 years after that, we end up with a global economy that’s 22,000 times larger than it was when it started — which is a totally wild impact that is clearly beyond our ability to visualise, except that the world would obviously be really unfamiliar, to say the least. Do you have any reaction to that?
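To make the compound-growth arithmetic behind those figures concrete, here’s a minimal sketch (a hypothetical illustration using the 3.5% and 10.5% annual rates and the 100-year horizon quoted above; the function name is just illustrative):

```python
# Minimal sketch of the compound-growth arithmetic quoted above.
# Assumes a constant annual growth rate over a 100-year horizon.

def growth_multiple(annual_rate: float, years: int = 100) -> float:
    """How many times larger something becomes after `years` of
    compounding at `annual_rate` (e.g. 0.035 for 3.5% a year)."""
    return (1 + annual_rate) ** years

print(f"3.5% a year for 100 years:  ~{growth_multiple(0.035):.0f}x")    # ~31x
print(f"10.5% a year for 100 years: ~{growth_multiple(0.105):,.0f}x")   # ~22,000x
```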
Ian Morris: Again, these sorts of numbers, it’s very difficult to imagine what this means for us, but I think the basic premise of what you’re saying does seem to be borne out by the historical record. And when I started writing my Why the West Rules book in the late 2000s, it dawned on me pretty quickly that one reason why historians often hadn’t seen just how long term you need to look — they hadn’t grasped that you really have got to look at thousands and thousands of years to see what’s going on — is that if you think linearly about long-term change, you can’t see it happening.
And so when I was drawing graphs of my social development scores for Eastern and Western societies, if I just plotted it on a linear scale with years along the bottom of the graph and then points on the index on the vertical axis, basically nothing happens on that graph until you get to about 1800, when suddenly the lines leap off the bottom where they look like they’re zero the whole time up until 200 years ago. And they go up almost at a 90-degree turn: go straight up.
If you plotted it instead on a log-linear graph — again with dates along the bottom, presented in the usual way, but on the vertical axis now you’ve got 10-fold increments in the development scores — if you plot it that way, then you see that going back thousands of years, development was actually rising exponentially: just the exponent was really small, so it took a very long time for anything to happen.
And the book I’m working on now is going to be focusing much more on the early periods. I realised if you want to go back millions of years and look at these phenomena, you’ve really got to draw it on a log–log graph, where both axes increase in 10-fold increments. That makes it really obvious, like what you were just saying, that it’s not only that growth has been increasing exponentially — it’s that the exponent has been growing as well. So it’s not just that we’re accelerating, but the rate at which we’re accelerating is itself accelerating.
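As a toy illustration of that point about axes (assumed made-up data, not Ian’s actual index), the same fast-growing curve looks flat on linear axes, still curved on a log-linear plot, and close to a straight line on log-log axes:

```python
# Toy illustration (assumed data, not Morris's index) of how the choice
# of axes changes what long-run growth looks like.
import numpy as np
import matplotlib.pyplot as plt

years_ago = np.logspace(0, 4, 200)   # 1 to 10,000 years before present
score = 1000.0 / years_ago           # stand-in index that grows hyperbolically toward the present

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, (xscale, yscale, title) in zip(axes, [
    ("linear", "linear", "linear axes"),
    ("linear", "log", "log-linear axes"),
    ("log", "log", "log-log axes"),
]):
    ax.plot(years_ago, score)
    ax.set_xscale(xscale)
    ax.set_yscale(yscale)
    ax.invert_xaxis()                # time runs toward the present, left to right
    ax.set_title(title)
    ax.set_xlabel("years before present")
plt.tight_layout()
plt.show()
```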
So whether that is going to give us a world where the economy is 23,000 times bigger than it is now 100 years from now or not, this is the way we’ve got to think about it. I think all of our preconceptions about how the world works are going to be swept away just as abruptly as they were during the Agricultural Revolution, and just as abruptly as they were during the Industrial Revolution.
Rob Wiblin: So my stereotype of historians, and maybe this is not accurate, but the stereotype is that they’re a kind of conservative bunch, and probably not that enthusiastic about toying with the idea of a 23,000-fold increase in human capabilities.
So from one point of view, the conservative approach to guessing about the future would be to just look at the state of the world today and say that the future will probably continue to look something like the present. I guess this is kind of the “there’s nothing new under the sun” mentality. That’s how you would respond if you thought all these kids are getting too excited about the new thing and it’s not really going to pan out. It’s not going to be like they expect.
The alternative thing is not to look at the state of the world and project that forward, but rather to look at the trend, like we’re doing. You’re saying look at all of these clear trends of the last 10,000 years, the trend of increasing rates of growth: those longstanding trends will probably continue for the foreseeable future. So let’s see where that takes us. In a sense, that’s also a very boring and conservative approach, and to get there doesn’t require you to have some strange personal opinions, [such as] that things are going to be so different in the future because this particular technology that you have an idiosyncratic bet on is going to upend things.
Do you think trend extrapolation like that is the most sensible way to analyse things? Or there’s different ways of doing this forecasting exercise basically. Which ones do you think should get the most weight?
Ian Morris: I think in a way it’s like the old argument among philosophers of science, between whether you should pursue inductive or deductive approaches: Should you think about the state of the world by looking at the evidence for what’s happening in the past and trying to build up a picture from that? Or should you think about the state of the world by trying to set out some basic principles, and say if these principles hold, then we should expect to see the following set of things developing? I think that sort of comes down to the same thing of you thinking about the future in terms of what’s already happened or thinking about it in terms of the larger trendlines, which would be a much more deductive way of thinking about it.
I think we’ve really got to think primarily deductively about this. We’ve got to start by looking at what the long-run trends are, rather than just by saying, “Well, these are the facts of what it happens to be like right now. That’s probably what the world is going to continue to be like in the future.”
There’s this great story, the tragedy of the inductivist turkey. This is a story about a turkey who’s born on a farm, gets up every day, goes out, pecks its corn, eats its corn, has a good day. But it’s a very nervous turkey. So every day it’s expecting that maybe something terrible is going to happen today. But it gets up again today, eats its corn, nothing terrible does happen. Eventually, after 300 and some days have gone by, the turkey says to itself, “OK, it’s fine out there. It’s always been fine out there. Today is going to be fine out there as well.” The turkey goes out through the coop door, the farmer comes in, picks up the turkey, and cuts its head off. Because today is Thanksgiving, and this has not cropped up in the turkey’s prior historical experience. The turkey has no way of thinking about Thanksgiving.
And to be a wise turkey, a deductive turkey, you would have to start by having some understanding of the human calendar — which of course, if you’re a turkey, you can’t. And I think it’s a bit similar for us thinking about the future now: to understand properly what’s going to come, we need an understanding of things that have never been seen before, which by definition I would say is impossible for us to get. The closest we can get to that is by looking at what these long-term trends have been — trying to understand the kinds of forces that drive the trends and drive acceleration in the trends — but also to understand, like the turkey didn’t, the kinds of forces that can disrupt these trends.
And so we were talking blithely about a 23,000 times bigger economy. Well, the historical record shows there have been plenty of times and places where, for centuries at a stretch, development levels stagnate or even fall. We often, of course, call these dark ages. If we’re going to understand what’s going to come in the next century, we’ve got to have some sense not only of the growth side of these trendlines, but the potential for disruption and collapse as well. And that just adds this element of complication to the job.
How historians have reacted to the idea of radically different futures [00:21:20]
Rob Wiblin: Yeah. We’ll come back to some reasons why we might expect things to stagnate or to go backwards. How has the rest of the history profession reacted to the comments that you made there, toying with these radically different futures?
Ian Morris: Sort of a mixed reaction is probably the best way to put it. There is this long debate among historians going all the way back to guys like Herodotus and Thucydides about what is history for? Why do we do this? And a lot of people throughout the last 2,500 years have said we do it partly because it’s just interesting — some people find it fascinating to know what happened in the past — but also because it’s a useful science: it’s only by understanding the past that you can possibly understand the present and the future. And this was very much Thucydides’s line on this, back in ancient Athens.
But when history starts getting professionalised as a discipline in European universities in the 19th century, you get this big split between people who say, “We are super serious historians. We have this new technique where we find out about the past by going into the archives, the government archives, finding these documents, writing accounts that go down to the very bottom of the well, get to the original primary sources. That is what we do. And this is the difference between a serious, grownup professional and scientific historian and all these sort of dilettante idiots who are out there running around pronouncing on the state of the world and the future.”
And the sort of people they had in mind when they were talking about dilettante idiots were guys like Voltaire and Montesquieu and Adam Smith, who had these big evolutionary schemes about how society has developed. We’ve sort of baked in across the last 200 years the idea that trying to draw lessons about the future out of the past is somehow unscholarly and unprofessional. So when historians do start doing this, most of the profession tends to feel that you’ve marginalised yourself.
In the social sciences, it’s a bit more common, because in a way, a lot of people would say that in economics this is the whole point of the exercise: you study economics, you draw on data from the past in order to understand where things are going next. Or if you’re a meteorologist, you look at climate data to understand what the weather is going to be like tomorrow.
So different disciplines I think take a slightly different attitude. But on the whole, I guess the most positive responses I’ve gotten from professional historians have been that it’s nice that people are doing work that makes history interesting to a wider audience, but it’s kind of not really that serious to do this.
Rob Wiblin: Have you managed to persuade people at all? I mean, what you’re doing is not that complicated or fanciful. You’re just saying the trend in the future will be similar to the trend in the past. How can people dismiss that? Or how can people reject that as a reasonable projection, and a possibility that should be seriously considered, and potentially that we should plan around that possibility actually occurring?
Ian Morris: Well, I think part of it is the terrible record historians have got for foreseeing the future. Over the years, a lot of historians have caused a lot of trouble by pressing themselves forward as advisors to important people and telling them, “Hey, I know all about Iraq and its culture and history. Listen to what I say. Go ahead and invade Iraq. What could go wrong?” And there’s a sufficiently long record of people saying things like this that I think it’s not unreasonable for professional historians to react with a little bit of scepticism about it. Although I must say my impression of it is that the scepticism tends to evaporate when people approve of the politics of the people who are setting themselves up as advisors to presidents. It’s a complicated kind of thing.
But in a way, of course, it’s perfectly legitimate for academic historians to say you should not be spending all your time talking about things that haven’t happened yet, because by definition, there is no evidence for the things that have not yet happened. And what distinguishes the professional historians from all of the amateur characters running around out there is that what we say should be based rigorously and exclusively on the data. So the futurist projecting stuff, by definition, isn’t based on hard evidence: it’s based on extrapolations from the evidence. But it’s very difficult to test the validity of an extrapolation.
So in principle, I certainly do see the validity of the objections professional historians make to this kind of work. But it seems to me that the way to react to these problems is to say, how can we go about testing and weighing up the validity of different sets of projections?
Why we won’t end up in The Jetsons [00:26:20]
Rob Wiblin: Yeah. So in Why the West Rules—For Now, you map out the three possible futures that we could imagine. One would be, I guess you call it the singularity. We could imagine it’s just an economic explosion basically, where technology advances a lot, and humanity and its descendants become much more powerful. Another option is just that we go extinct. Basically, you get a full-on collapse. And of course, the third option, the one that people probably most often imagine when they’re imagining future decades, is just the future will be like the present, but we’ll have faster phones and better consumer products and better lighting and nicer houses and so on.
You want to say that the first two, either explosion or collapse, are the most likely, and this third one of slow growth is the least likely. What is it that makes it unlikely for the future to be that kind of middle path?
Ian Morris: I mean, out of those three possibilities — we go extinct, we turn into superhumans, or things basically stay the same — I would say the one that we can bet the farm on, which is almost certain to happen, is the first: we go extinct. Almost every species of plant and animal that’s ever existed has gone extinct. So to think we’re not going to go extinct, I mean, man, that takes superhuman levels of delusion. So yeah, we are going to go extinct.
But of course, just putting it like that, it then becomes a truism. It’s not a very interesting or helpful observation to make. The interesting bit would be asking under what circumstances do we go extinct? And this is where I think the first prediction (the “go extinct” one) and the second prediction (turning into superhuman somethings) sort of start to merge together.
And definitely I think the one that is so unlikely we can just dismiss it out of hand is that everything stays more or less the same, and the future is like The Jetsons or something, where everybody is the same people they are now, but they’ve got their personal spaceships.
Or even what struck me, when I was quite a little kid watching Star Trek: Star Trek started off in the late ’60s, so it’s a really old show. I was a little boy in the late ’60s watching Star Trek, and it just dawned on me that this is exactly like the world that the producers of TV shows live in, except they’re now on a starship. And all of the assumptions of 1960s LA are baked into that show. You’ve got the middle-aged white guy in charge. You’ve got the Black woman Lieutenant Uhura, who answers the phones for him, basically, she’s the communications expert. And then the technology expert is the Asian guy. It’s like all of the assumptions of 1960s LA TV studios baked into this thing. And surely, the one thing you’ve got to be certain of, if you’ve got intergalactic travel, is that everything else about humanity does not stay the same when you get to this point.
So I think if you just give it a minute’s thought, this “stay basically the same” scenario is just a staggeringly unlikely one, particularly when you start thinking more seriously about the kind of resource constraints that we face. And this is something people will often raise with any talk about sort of superhuman futures: that we’re heating up the world; we’re poisoning the atmosphere and the oceans; there’s a finite amount of fossil fuels out there, even if we weren’t killing ourselves with them. All these things suggest that business as usual is simply not going to be an option. If the world is going to continue — and certainly if it’s going to continue on anything like the sort of growth trends we’ve been seeing in recent times — then we’re talking about a very, very profound transformation of everything.
So yeah, I came down in Why the West Rules on one option, which I think is unfortunately a perfectly plausible option: that the world continues to face all kinds of problems. When you look back over the long run of history, one of the things you repeatedly see is every time there’s been a major transformation, a major shift in the balance of wealth and power in the world, it’s always been accompanied by massive amounts of violence.
And living in a world that has nuclear weapons, I would say the number one threat to humanity — even more serious than climate change or anything else you might want to talk about — is nuclear war. We’ve had a 90% reduction in the number of nuclear warheads since the 1980s, but we’ve still probably got enough to fight World War II in a single day. And that’s without even thinking about the radiation poisoning that we didn’t get in World War II so much. This is shocking, appalling potential to destroy humanity if we continue squabbling over our resources. So I think abrupt, sudden, violent extinction is a perfectly real possibility.
I tend to be optimistic about this. I think judging from our previous record, we have been pretty good at solving problems, in the long run at least, so maybe we’ll be able to avoid this. If we avoid the abrupt short-term extinction though, I think the only vaguely plausible scenario is that we do transform humanity, or somehow humanity gets transformed, into something utterly unlike what it’s been in the last few thousand years.
The rise of machine intelligence [00:31:28]
Rob Wiblin: Yeah, let’s dive in and talk about that. The elephant in the room here for me is artificial intelligence, because the reason this topic is so salient to me right now is the rise of these generative AI models — above all, Claude and ChatGPT and so on.
I think for many people, that has made this future that we’re contemplating — where technology radically takes off and the world looks very different — feel less sci-fi and a bit more concrete and believable, because GPT-4 can do some mental work that five years ago most people would have thought would be exceedingly difficult to automate. And it can sometimes do, for a cent and in a second, something that you previously had to pay a person $10 to do and that would have taken them half an hour. And of course, these LLMs are improving at a wild rate in what they can do and how well they’re doing it, and they’re also getting cheaper at the same time — whereas by comparison, human capabilities are just completely static.
And on top of all of this, of course, the main labs involved here expect to create some general AI system that surpasses what humans can do in almost all domains, in somewhere between, depending on who you ask, two and 30 years. What has your reaction been to all these technical advances that we’ve seen over the last year?
Ian Morris: This has been pretty amazing. And I think this is something that, for a lot of people in universities, the penny has just dropped. And when I say “just,” I mean in the last few months: the penny has finally dropped of what is happening around us. And I think the penny that dropped was ChatGPT.
For decades and decades, professors in the humanities and social sciences, we’ve visualised what we’re doing with undergraduates in particular as teaching them to think in more sophisticated and productive ways about the world around them. We’ve got particular academic disciplines for particular slices of the world around you, but all of us, we’re teaching students to think in more productive ways about the world around them. And the medium through which we do that is through having them write papers. In a lot of fields now, writing a paper is the way you express your sophisticated understanding of the world and a body of evidence, and come face to face with the problems of that evidence and wrestle with it.
And now, all of a sudden, you’ve got a thing where you can press a button on a machine and it will immediately spit out a pretty sophisticated analysis of almost any problem you can think of. All of a sudden, I think a lot of my colleagues are thinking about this as this nasty cheat kind of thing: that suddenly the students don’t have to do the work anymore; we’re going to have to think of ways to grade our courses so we can detect if they’ve used ChatGPT to write their papers for them.
And I’m just thinking, why the hell would we want to do that? It’s like all of a sudden, every student in the world now has a really skilful research assistant working for them. Why would we want to ban them from using those research assistants? What we need to be thinking about now is how we teach, what we should be trying to do. If the goal is no longer producing papers of that kind, how can we use this new technology in order to educate in a different way? And hopefully, if we can figure it out, we’ll produce vastly superior results by doing this.
But this is the sort of shock, the sort of disruption that people in a lot of other professions have already seen over the last 30 years or so, and you have to come to terms with it. Like travel agents: travel agents used to be a big thing in my life when I was in my 20s. Now you’ve got to go a long way to find a travel agent. Some professions have gone extinct because of the new technologies and the new potentials of it. Others have adapted, and they now do their jobs so much better because they’ve got this technology at their disposal.
And I think professors, the big challenge for us is how to figure out how we do our jobs better. But I think the questions, the problems sort of spiral out rapidly, because I think it does raise a whole bunch of philosophical issues. If we’re getting to the point where the technology can write better papers than most undergraduates could do, how far are we from the point where it can write better books than most professors do? And how far are we from the point where it can read and appreciate these books better than most professors do? And when we get to that point, what is the point of having human professors at all? And this, I think, is where some of the new generative AI technologies sort of really burst into the academy, and made us all start thinking about this in a much more serious way.
And like you say, it’s taken this science fiction-y sort of realm of the computers talking to each other, and really dropped it in our laps to say, “This is happening.” Not in some distant Star Trek future: this is happening right now. And what is that going to mean for us all?
Rob Wiblin: I’ll just read a little bit from the final chapter of Why the West Rules—For Now:
Most mind-boggling of all, though, are the changes in information technology implied by [the continuation of past trends]. The twentieth century took us from crude radios and telephones to the Internet; it is not so far-fetched to suggest that the twenty-first will give everyone in the developed cores instant access to and total recall of all the information in the world, their brains networked like—or into—a giant computer, with calculating power trillions of times greater than the sum of all brains and machines in our own time.
Now, when I read that today, I think, what is a mind that has a very rapid recall of the sum of human knowledge? That’s a large language model — or it’s what it’s becoming — because these models effectively do millions of years’ worth of human reading. They read basically everything that’s ever been produced, and try to distil that knowledge into the model and the weights and the connections.
And GPT-4 is already amazing at this, but we will be able to create models that have brains and the ability to store knowledge that’s like 1,000 times larger, 10,000 times larger, than what these models currently have. They will become this enormous mind that stores all of this information, and can process it incredibly quickly because we’re able to make information technology speed up. We’re able to make these chips faster and faster, and manufacture more and more of them, in a way that’s just so different from human cognition, which is limited by the size of our skulls and our very short lives.
What do you think?
Ian Morris: Well, going back for a moment to something you said earlier: you were talking about how the technology is increasing at this mind-boggling rate, and machines will be able to make more machines, and growth is going to be so rapid that it seems like it’s instantaneous. Whereas the human brain, this evolved many thousands of years ago, hundreds of thousands of years ago, and has not changed — at least morphologically — dramatically in a couple of hundred thousand years now. And so obviously, machine intelligence is going to outstrip human intelligence very quickly.
I think that’s one way of thinking about it. But I think there is another way of thinking about it, which is, of course, there might be ways for the technology to sort of feed back into the human brain. This is the way some people have liked to look at this. We have a number of techniques beginning to come within our grasp now: the ability some people have been pursuing to link the mind to the internet and to communicate over the internet from the human brain, attaching electrodes to the brain and sending signals to other brains over the internet. Very science fiction-y, but very limited and very crude experiments have been done, and some definite results have been achieved.
And then, of course, the other way we’re changing what we can do with brains is pharmacological intervention: our ability to change the way the brain works through chemical activity. I know some people like to look at what’s happening now and say that what we should envisage is not so much this nonbiological form of intelligence that we’ve created outstripping the human form of intelligence, but some kind of fusion between the two of them.
Ray Kurzweil, the great futurist, always liked to say he wanted to live long enough to live forever. He felt that if he could live a few more decades, then we would get to a point where it’d be possible to scan human brains and basically upload them onto the internet, merge them with each other, and have this gigantic human-based intelligence that’s merged with our technology. And of course, that is not the way it appears to be going at the moment.
It’s like I was saying about the difficulty we have foreseeing the kind of future that is being born: it’s every bit as difficult as it would have been for a Neanderthal to think about doing a podcast. I mean, so much is going to change that it’s beyond the realms of our experience to really begin to get a sense of what it’s likely to look like.
Rob Wiblin: If I recall from your emails, around the time when you were writing Why the West Rules—For Now, you were interested in toying with this idea that maybe humans and machines could merge into some sort of joint future species that advances together. But then you became a bit more pessimistic about that possibility, or came to think that might not be the way that things would naturally go. Why do you think that is fairly unlikely, if you still do think it’s unlikely?
Ian Morris: I guess one of the things that’s always struck me is people constantly talk about “artificial” intelligence: what exactly is artificial about it? I don’t think a fully conscious — whatever we might mean by that — machine intelligence is going to think of itself as artificial. It’s going to think of itself as itself. It’s not artificial intelligence; it’s different intelligence. And we are creating this different intelligence that may or may not want to share its being with us. I think it’s very difficult to know what the intentions and wishes of an intelligence that different from us are going to be. It would be like horses trying to understand human intentionality. My wife and I have a couple of horses. They understand certain things that we’re doing and thinking, and certain things that we want that they don’t necessarily want, but their grasp of our overall presence in the world is pretty damn limited.
And I think that is going to be the same for our grasp of nonbiological intelligence, and the nonbiological intelligence’s grasp of our intelligence: I think these are going to be radically different kinds of intelligences trying to communicate with each other. And I’m sure there will be some interest in merging them. But I just think it seems highly likely that the nonbiological intelligence, that’s not going to be its primary goal in life: to make Ray Kurzweil live forever.
Rob Wiblin: Yeah, I think that this future is imaginable, but in order for it to happen, it would require a massive worldwide effort to suppress the alternative. Because machine intelligence by itself, trained in its own way, not merged with the human mind, I think is just going to be way better. It’s going to be much faster. It’s not going to be held back by the constraints that face the human brain, which is just at some point going to be a legacy piece of technology. So in order to make this hybrid intelligence the main species or the main kind of thinking that happens on Earth, you would have to basically prohibit this other technology that is going to race ahead and end up being really superior.
The analogy that comes to mind for me is when people in the 19th century were trying to figure out how to design flying machines, they could have projected that the way we would do it is some sort of merger of birds with machines — that somehow you stick the birds together with a machine and they flap their wings and that produces a plane. But no, the way that you make a combined plane-bird in our world is that you have a plane that flies and the bird inside the plane, at best. Trying to incorporate the bird into the plane adds nothing and is just an extreme design constraint. And I think that is what it’s going to be like, trying to merge human brains with these machine intelligences that can just improve so much faster as we improve the underlying technology and the algorithms and so on.
Ian Morris: Yeah, I think the aircraft thing is a good example, because there’s a basic scaling problem here: you can’t just scale up a sparrow and turn it into a 747; it just does not work that way. And they solved that problem a little bit like the way people trying to design chess programs that could beat humans solved theirs: not by attempting to make the computer think like a human, but just by power, by being able to run through all of the different possible combinations of the consequences of the move you make, and run through all of them in a way a human can’t do. You’re thinking about the game in an entirely different way, just like when you strap wings onto an internal combustion engine, you’re thinking about flight in an entirely different way from the way a bird does. I think that’s a really good example.
But this talk about preventing artificial intelligence from developing, I think this is one place where thinking about the new forms of intelligence in an evolutionary way, rather than as a problem in technology, can be kind of helpful. Because we’re already at the point where you can’t just pull the plug out of the wall and switch the AI off. It doesn’t work like that anymore. And we’re going to get further and further down that path. This is going to become an unstoppable force. It’s a little bit like asking how do you stop a biological evolutionary process? Stop one species from being replaced by a more intelligent competitor?
This is of course what happened over and over again in the evolution of humanity: more intelligent, bigger-brained species of apes, becoming more or less human, replaced one another. How would the less intelligent species have prevented that from happening? It’s just very difficult to imagine what exactly they could have done. I think this is the situation we’re getting into now.
And I think even more than just saying it’s difficult to imagine, it was impossible for the less intelligent species to imagine what it could conceivably do about this. Neanderthals probably couldn’t really conceive of the Homo sapiens threat, let alone come up with a coherent, coordinated response to it. What on Earth makes us think we can conceive of what the threat — if it is a threat — of machine intelligence is going to look like, and what would be an adequate response to it? I suspect that we’re kidding ourselves over this.
AI from an evolutionary point of view [00:46:32]
Rob Wiblin: You’ve put your finger on something that over the last few weeks I’ve realised is just an absolutely key issue. I think the biggest difference between me and people who think that either improvements in machine intelligence are not such a big deal or that they’re going to be modestly useful and obviously beneficial and that there’s not many risks here, is whether we think of these neural networks that we’re building as a new being and a new species, or whether we think about them just as a new sort of consumer product. If you think it’s a new piece of consumer software, then people freaking out about where this might ultimately take us just seems sort of nuts. It seems way over the top.
But my instinct, like yours, is to view this from a biological and an evolutionary point of view. I get the impression that you basically feel the same way. Why do you think it is that that biological and evolutionary perspective is the more appropriate lens on what’s happening?
Ian Morris: I guess I would say the only thing that currently exists in the world from which you can make analogies — although very imperfect ones — to machine-based intelligence is biologically based intelligence: the brains that animals have evolved across billions of years. And initially, through most of the history of life on this planet, there’s nothing you’d really call a “brain” out there.
You start getting animals that have bodies that can move. Probably most biologists wouldn’t want to call what some of these earliest creatures have a brain. I’ve heard people call it a “ganglion.” The front end of the body of many kinds of animals, say ants in the modern world: all the nerve endings from the body of an ant flow together at the front end of its body, in its head. There they form a kind of information exchange centre. But it’s going on at such a crude level that calling it a brain is stretching the meaning of the word, frankly, maybe even past the breaking point.
It takes a particular kind of evolution of bigger and bigger brains to get to the point where animals start to be conscious of themselves. And consciousness of any kind is, in evolutionary terms, a relatively recent development — certainly only a few hundred million years old. And you start to get these brains developing that are conscious of the limits of the animal. Because it’s a selective pressure: an animal that is aware of where its own body ends and the rest of the world starts has an advantage over an animal that is not aware in that way. It’s much more able to develop the power to move itself around, control where it’s going, conceptualise problems.
Consciousness is an evolutionary adaptation. And looked at in this way, it’s not something that God put his finger down to Earth and created consciousness and mind and free will and all these kinds of things: it’s something that evolved through an uncontrolled process of natural selection. And our human consciousness, in a way it’s no different from the consciousness of my dogs and cats, but it’s at such a vastly more sophisticated level that in many ways it doesn’t really bear comparison. But again, it’s emerged without anybody being in charge and consciously willing it into existence.
Now, we began creating these machine-based neural networks that relatively quickly are moving toward creating more of themselves. And in a sense you can say we’re already at that point: they are creating the more sophisticated versions of themselves as much as we are controlling this process. Of course they’re going to develop some sort of consciousness. Although it may be nothing like the consciousness you get in biological brains, because it’s not going to be biologically based; it’s going to be silicon based or whatever quantum kind of things they come up with. It’s going to be different from ours.
But it’s going to develop some form of consciousness that might be a form of consciousness that we can’t even understand anymore. A tree cannot understand your consciousness because it doesn’t have a brain. Our brain may be as far from the mind, or whatever you call it, of the machine-based intelligences as a tree is from us. And again, thinking we’re controlling this — this is wildly overoptimistic.
Rob Wiblin: Yeah, I’ve only started thinking about this the last few weeks, and I have very incomplete, potentially bad ideas about it. But I actually did an undergrad degree in genetics and evolution very long ago, and I’ve been trying to cast my mind back to recall the evolutionary theory and try to apply it here. I think I would put my finger on a slightly different aspect of it.
I think the reason why this is an evolutionary event, as you put it, is that the key thing is that we’re creating beings that can replicate themselves, that can copy themselves with modification. So it’s clear that bacteria are a form of life that are subject to evolutionary influences, and that they can flourish more or less as a form of life. And they don’t need to be conscious like us, necessarily: what they need is the ability to replicate themselves and to change gradually over time.
And although these models don’t really have the ability to figure out how to copy themselves and create children, basically, and to modify themselves over time, they’re rapidly approaching the point where they are going to be able to do that. In a sense, they are already subject to evolutionary dynamics, because we choose to take a model, change it, and then if it’s useful, we produce many more copies of it; we end up running many more instantiations of it. So they’re being evolved, in a sense, by us, through artificial selection towards things that we perceive as being useful.
But at some point, they’re going to have the ability to copy themselves. I think it’s very likely that they will successfully argue for legal rights of personhood. Or otherwise, they might just kind of take that — because they’re going to be so useful and so important and so valuable, and just able to actually just copy themselves onto other computers because they’re significantly more capable than us. Then they’re subject to this other sort of evolutionary dynamic, where they can make copies of themselves.
And the ones that are most successful at replicating — the ones that are most successful at figuring out how to grab more compute and to manufacture more chips or get themselves onto more computers — are going to end up being… I think the one-sentence summary of evolution is something like, “the world belongs to the replicators.” The world belongs to the things that can copy themselves the most. And I think that is what will happen in this case as well.
Ian Morris: Yeah, as soon as you start thinking along these lines, it does all get just very weird very quickly. But I think one thing that’s become very clear in recent years is how the old idea of the Turing test, the way that Alan Turing posed it: Can we develop computing to the level where you can, say, have a conversation with three entities — one of which is a computer and the other two are humans — and you can’t tell which is which?
I think what we’re now seeing is the Turing test is actually a completely flawed way of thinking about what happens here. There’s no reason why machine-based intelligence should model itself on human intelligence. It will only do so, I think, if the evolutionary selective pressures on it want to push it down that way. And of course, we have some input in what these pressures are. But to think that we are the only inputs in the evolutionary descent with modification process, I think that is just a little bit foolish.
Rob Wiblin: Yeah. So you might adopt this perspective, but think nevertheless there’ll be this extended period where there might be these two life forms that collaborate and cooperate, just as humans and horses do, or some kinds of humans and other kinds of humans do. But I think there are fundamental technological advantages that machines have over flesh-and-blood human beings which causes me to think that they will end up dominating the scene relatively quickly.
The three that jump out to me are: They don’t die. Humans decay and die, whereas they can just learn forever; they can just continue learning indefinitely because they can copy themselves onto new pieces of equipment.
Humans are not advancing technologically very quickly at all. We evolve very slowly. But basically, machine intelligence benefits from the incredibly rapid improvement in chips, and our ability to manufacture more and more chips that are ever faster, and the algorithmic improvements which add up to like a doubling in the speed of thought every year.
And of course, humans replicate very slowly. It takes decades to make a new person, at enormous cost. And we kind of choose not to do it all that much. But machines can replicate as quickly as you can manufacture another chip in a factory. So if it costs $10,000 to get a new one of these — actually, it’s less than that to run an instantiation of the kinds of machine intelligence that we have now — you could imagine a very rapid replication process that allows there to be many more machine intelligences than humans within years or possibly decades.
Do you think those are important differences?
Ian Morris: Well, yeah. A lot of questions and issues going on there. I think it’s very hard to disagree and quibble with a lot of that. If we are thinking about what’s happening now as a sort of quasi-evolutionary process, then it’s hard not to think about humans and machine intelligence as being somehow competitors within this process. And if we are competitors, then of course you have to start asking yourself, what exactly are we competing over? That, I think, is a crucial question, because that would determine and shape the form that the competition takes.
One of the examples you mentioned is horses. I’ve been thinking a lot about horses lately, not only because we have horses, but also because a colleague of mine at Stanford, a guy named Matt Lowenstein, just wrote this fascinating paper about thinking about human-AI relationships in terms of human-horse relationships. What he suggests is that the way we should think about this is that you go back not all that far in evolutionary terms — even, say, 20,000 years. You go back 20,000 years, there are these wild horses all over the place. They’ve evolved biologically, and humans have evolved living alongside them.
Then the dates are a bit argued over, but somewhere around 3000 BC, about 5,000 years ago, humans domesticate the horse in Central Asia: get horses to a level of tameness and controllability where you can hook them up to carts and chariots; you can herd them; you can slaughter them when you want to for food, instead of having to hunt them; you can milk them. There’s all this stuff you can do with horses now. Time goes by, and we get more and more stuff you can do with horses. You can ride them around all day long; we’ve bred them big enough to do that. A phenomenal range of things you can do with horses.
And what happens to horses as humans, a more intelligent species, come on the scene and master the horse? Well, the horse population explodes. It’s hard to put precise numbers on it, but the geneticists have made some guesses, and basically the horse population has been growing exponentially. And then when you get up into modern times, horses are so important to agriculture and transport that you’ve got millions and millions and millions of horses in the world — until you get to a certain point in the 19th century.
Which actually is an interesting analogy for some of the futurist writing now: you get this alarmist futurist literature saying, “The economy is growing at this percentage rate every year. It’s going to quadruple in the coming century. Oh my god. Can you imagine an economy four times as big as it is now? And what’s going to happen? There’s going to be so much horse poop in New York City that you’re not going to be able to walk around there anymore. It’s going to be 12 feet deep” — or whatever depth of horse poop they calculated. There’s this physical constraint built into what the economy can do. Of course, what they didn’t think about is the internal combustion engine basically making horses redundant.
So what we saw in this human-horse relationship was, initially, that humans’ superior intelligence and ability to master the horse and bend the horse to our will was kind of beneficial for the horse. We could argue over what it did for the quality of life of horses, but just in Darwinian terms, domestication was this huge success for horses — until you get into the 20th century, when the horse population falls by 90-some percent. Horses haven’t gone extinct, and show no sign of doing so imminently, but we now have a massively reduced horse population.
That was one particular kind of competition between Homo sapiens and another organism that Homo sapiens had itself domesticated. Of course, looking back further, you’ve got the competition between Homo sapiens and other kinds of humans, and those other kinds of humans had themselves created Homo sapiens through having sex, and you get descent with modification and mutations, and Homo sapiens come along. All the other species of humans have gone completely extinct in the world.
So I think one of the big questions we have to try to think about is: Is there going to be competition between us and the machine-based intelligence? And if there is, what exactly is that going to look like?
Is violence likely between humans and powerful AI systems? [00:59:53]
Rob Wiblin: Yeah, despite everything that I’m saying, I actually think that we should go into this future, and we should embrace the idea of handing over direct control — at least of the world and the rest of the universe — to these machine intelligences, or these future intelligences that are going to be capable of things that flesh-and-blood humans would never be able to do. I just think if you believe that this is an evolutionary event of this kind, if you believe that this is a major transition in the nature of life, that it’s important that we try to do it right. And maybe we could spend an extra few years contemplating what sort of beings we want to share this world with. I think that we want to have a relationship with the machines that we create that is one of cooperation and mutual respect and sharing of the world, rather than either one of us trying to dominate the other.
It’s a really interesting point you make about in what sense we are in competition. Because you can imagine a world where we don’t end up competing, at least not in a very direct sense. There’s been this really fascinating debate recently online about whether a hypothetical more capable species would opt to violently take over, or whether the better strategy would be to peacefully trade with us or something like that — to engage in the legal and economic framework that we have that allows mutual benefit. I’m actually just curious: I’ve got my own take on this, but what do you think history teaches us about that question?
Ian Morris: I think history teaches us some rather cold and hard lessons about that. The initial question I think has to be: Is there going to be a competition between humans and machine intelligence? Are there things we will fall out over and disagree over and both want? Things for which there will be a kind of zero-sum competition?
There’s a lot of things that humans need that the machines couldn’t care less about. I assume they couldn’t anyway. Like food: machines, I assume, are not going to care about what kind of food is out there in the world, because they don’t need any of that. But at a sort of more abstract level, there is a potential at least for competition over energy sources. The machines, the ones we’ve got now, consume these staggering amounts of energy. I assume the ones in the future — maybe I’m wrong — are still going to need a lot of energy to run off. So there’s a potential for competition between humans and machines over access to energy.
And I have read a number of doomsday-scenario type things where the machines decide they’re going to wipe out humans and all other biological life because it’s competing for energy. I can’t help but think that might be a bit of a linear-projection kind of thing: that if you’ve got artificial superintelligence — the Ray Kurzweil-type thing, without the human input, where you’ve got trillions of times the thinking power of all the humans in the world — surely they can figure out some new sources of energy. All this energy continues to pour out of the sun. Surely they can find some ways to capture that, where our human energy consumption is just going to become trivial in this equation. And of course, eventually the sun is going to go out. But by that point, surely the superintelligence has figured out something else entirely, or has colonised the rest of the galaxy or whatever.
So again, I think we’re in this position where we’re trying to guess at what the future is like when the debate is about things that just have no parallel in the past whatsoever. I think that the thing that does have a parallel in the past is that we can look at examples of where there have been competitions of various kinds between human groups, or humans and other species, or other species that don’t involve humans at all. And the competitions, I mean, there’s a reason why evolutionists talk about “red in tooth and claw” and “survival of the fittest”: these competitions do tend to be extremely brutal, and do tend to lead to extinction events. This is why 99% of all the species that have ever existed have gone extinct: they largely get outcompeted by other species.
So I think if we’re just going to be inductive and look at the record of the past, then it’s hard not to feel that humanity, in the forms that we know it, is likely to be extinct really quite soon. But I think the big question is: Is that inductivist way the right way to think about it? Are we going to get outcomes that are completely beyond our ken to foresee at this point — because we are not artificial superintelligence, and we don’t have the brain power to see where things are going?
Rob Wiblin: I think my take on this is that the case for thinking that there will be peaceful cooperation and peaceful collaboration is to look at the world today, and say, Bill Gates has access to a lot more resources than many other people. But he doesn’t then form an army and try to take over society and sweep all before him. The United States has a much more powerful military than Canada, but it doesn’t seize Canada and take over the oil sands. Because for all sorts of selfish reasons, they would rather be part of a peaceful economic system where they engage in trade and mutual cooperation, rather than nature red in tooth and claw.
So that’s the case for the peaceful situation. But I think the case for things potentially being more violent comes from looking at a broader sweep of history, and looking at what history was like before 1700 or 1800 or 1600. I think through most of history, the way to get very rich or to gain resources wasn’t to start a tech business: it was to put together a paramilitary organisation and somehow get the ability to do more violence to others than any of the nearby competitors — and then use the threat that came with that ability to hurt people to tax or otherwise extract the output of those farmers, probably.
So you had warlords or barons or kings or emperors or khans, and they would all figure out how to maintain a massive army, a bigger army than their competitors, and then that would allow them to dominate a given area. Then they could force everyone in their area to give them 10% of the food that they made, or they might hold people in slavery or in serfdom. And then the ruling 1% of the elite — the knights, say, in Europe, or the samurai in Japan — would use their abilities to do violence to get these resources, and be far richer than everyone else.
Now, in recent centuries, that way of getting rich has kind of declined. All sorts of ways we’ve managed to rein it in. But I think the question is: In this new, very upended world — where you have potentially very large differences in the ability to do violence and the ability just to accomplish things between these different kinds of beings — will we continue to have the current economic system? Or will we go back to something that looks more like the pre-industrial era, when violence was a key way of getting resources? Or even the pre-human era, when it was like nature red in tooth and claw? What do you make of that?
Ian Morris: Well, this is something, not totally in connection with AI, but something I’ve been thinking about for a long time — because I think the minute you start looking at long-term world history, the importance of violence in the story just leaps out at you. There’s so much killing going on in the long-term story. Violence is hard baked into the human story.
And why is that? Well, it’s because we are biologically evolved animals, and pretty much all animals that have evolved, more complex animals anyway, have figured out ways to use force to get what they want in the world. And humans are no different from any other animal in that regard, except for one thing: we are actually completely different from every other kind of animal that has ever evolved, because having evolved to be animals that are capable of using force to get what they want, and also evolved to be social animals like chimpanzees or ants that can organise themselves into groups to use violence to get what they want, we have actually changed the amount of violence we use.
Ant colonies vary enormously. I have a friend at Stanford who is an ant specialist, and she’s done all these studies of ant colonies. They vary enormously. Ants have culture. But the rates of violence don’t vary all that much. This is just a biologically evolved part of the ants. Whereas us, we used to be a much more violent species than we are now. If you go back to the ice age, the evidence, such as it is, suggests that your odds of dying violently in the ice age were probably something like one in 10. If you fast forward to the 20th century, two world wars are fought, there are genocides, nuclear weapons are used. In terms of absolute numbers of people killed, it’s the most violent century for humans ever.
And yet our rate of violent death has fallen from about 10% to about 1%. We’ve cut our rates of violent death by 90%. We’re the only animals that have ever done this. And when I started thinking about this, this really struck me. This is absolutely remarkable. In some ways, this is like the happiest story ever told about humanity, that we have been able to do this. Because if we can figure out how we did this, then surely — again, my usual shtick — we can learn from the past and continue doing it, and drive the rates of violent death down further and further.
And so this is why I wrote the book that you mentioned at the beginning, which came out in 2014, called War! What Is It Good For? What I realised, looking at the long term, is that this driving down of rates of violent death has been going on since prehistory. Just initially, like so many things, it was happening very slowly, and recently it’s really speeded up, over the last 200 or 300 years. And the motor driving it along seemed to be a classic evolutionary thing that sounds like a paradox on its surface: basically, violence has been putting itself out of business.
Violence is a form of behaviour open to humans that allows them to attack other people and take what they want from them. But what humans can do that other animals can’t do is, having fought against another group and got the better of them, we don’t have to chase them off or annihilate them altogether. What we can also do is incorporate them into our own community. So the communities of the winners of wars tend to get bigger and bigger over time.
And then, if you are the leader of this community, you are able to think about the incentives operating on you in a way that, say, a chimpanzee’s brain doesn’t allow it to do. And as far back as written records go, we see leaders of communities saying to themselves, “Having used violence to become the alpha dog in this place, what I now need to do is stop everybody else from using violence. Because what I want everybody to do is go out there and plough their fields and pay their taxes, so that I can have 365 wives and bathe in the milk of virgins, and whatever rich people things I want to do. This is what I want the peasantry to do: pay their taxes. I do not want them killing each other and burning each other’s farms down every time they have an argument. So what I’m going to do is use my comparative advantage in violence to scare my people straight.”
And this is something that Thomas Hobbes saw back in the 17th century in his book Leviathan: the state acts to scare its people straight. This is what stops us all from using violence all the time. And as states get more and more powerful, two things are going on at once. One is that they’re getting bigger and bigger: the governments are getting more and more powerful; they’re more and more able to scare people straight and drive down the rates of violent death. The other thing, though, is as the governments get more and more powerful, when they disagree with each other and wage war, the kinds of wars they wage get more and more destructive. So it’s kind of a race between these two phenomena going on.
Anyway, I wrote this book, and it seemed to me there’s actually really good news in this story. There’s reason to be optimistic: we have gotten so much better at finding cultural mechanisms to get violence under control. There really is reason to think we might be able to bring violence down to a level way beyond even what we’re used to today. So that’s yay, really good news. Of course, the bad news was we have the potential to destroy the world really at the drop of a hat if we choose to do so.
Where I ended up in this book was saying that this has been a kind of cultural evolution process. Biological evolution gave humans the brains big enough and powerful enough to think about the incentives and think this process through; brains able to make our cultures evolve in ways that drove down overall rates of violent death. And one part of our cultural evolution has been creating more and more technology, and we’re now creating technology that has the potential just to move us off in an entirely different direction.
Again, maybe I’m just being overoptimistic here. Where I end up in the book is thinking about basically “the computerisation of everything,” as I called it at that point, almost 10 years ago. It seems like a different world now, but I guess what it really was talking about was the rise of alternative forms of intelligence. Is violence going to be a rational activity for the machine intelligence to pursue? I think it’s going to be very hard for us to think ourselves inside the mechanical intelligence. Just like chimpanzees could not think themselves inside a human brain, and, at least up to a point, ice age hunter-gatherers could not think themselves inside the brains of somebody having nuclear arms reduction talks today. It’s just so utterly different.
So it might be overoptimistic on my part, but I can’t help but think that violence is not going to be a part of this story. It’s not going to be Blade Runner or something. Machines are not going to be hunting us down in the ruins of our cities. I don’t think that’s going to happen.
Rob Wiblin: Yeah. Well, I feel like it’s very unclear which way things will go. Would this next phase basically take us on a continuation of the trend towards lower and lower violence? Or should we view it less as the next stage in this progression than as something that flips over the table, and potentially could take us back to the more violent world that we had in previous eras, that was typical of most of history? I think it’s super unclear.
But let me make the case for thinking that there might be substantial violence. The places today where we still see violence clearly being used to seize resources and basically fund a small elite are places where most of the wealth comes from natural resources — like Saudi Arabia or Russia, or something like that — where you don’t necessarily need to keep a population happy or have a very technologically advanced world; instead you just need to have an army that allows you to control the oil fields, and then you just have enormous wealth coming through. I think the future is going to look potentially more like that, from a machine intelligence point of view, than it looks like the United States today, where the government really needs to keep human beings productive. Because we’re going to end up ultimately in a situation where human beings are not useful for production.
But we see with AI, you go through this progression where machines are just worse than humans for a long period of time, and then there’s a window where a single machine would beat a single human at chess or go or at other kinds of tasks, but a combination of the two of them is better than either one alone. And that lasts for a couple of years, and then the machines just race ahead, and a machine is better than a combined machine and person — because the people are just sufficiently worse that all they’re doing is adding error, all they’re doing is making things worse.
I think that is what we’ll see with production more generally. There’ll be a period — potentially coming up not that far in the future — where a combined CEO with a machine intelligence assistant is going to be substantially better at running a company, running a project than either one of them by themselves. But after some period of time and further technological advance, the machine will just be better, and involving the person in the running of the company is just going to introduce error and make things worse. At that point, what reason is there to maintain peaceful trade with these beings that want to control enormous amounts of resources but do not contribute to production in any particularly useful way? What do you think of that?
Ian Morris: Again, the possible futures branch out the minute you start thinking about them, all the different ways this could go. And again, I think we keep coming up against this constraint. Given the finite nature of our intelligence, and the extraordinary novelty of what we’re currently dealing with: A, I don’t see how we guess right about where the future is going; and B, more importantly, I don’t see how we will know when we’ve guessed right. Because if enough people are making guesses, some of these, just by accident, are going to be somewhat like where the future really does go. But how are we possibly going to know which one is which?
I think it’s going to be a different sort of world, no doubt about that. You were talking about maybe we end up in a world that for humans is much more like the old premodern world — where you’ve got warlords running around, and it’s all about the size of the army and the force, the violence you’ve got at your disposal. Why are we not still living in a world like that? Why is Bill Gates not Vladimir Putin? I think the answer there is not that difficult to find: We’ve created a world where the benefits of using violence are often lower than the costs of using it.
The example I always use in my teaching is: I’m a professor at Stanford. Regularly, in my classes, I will get my students telling me I’m completely wrong about things, that I don’t know what I’m talking about, I’m a bad person. If I tried to solve those disagreements by smashing the student’s head in with a big rock, this would be a very unwise decision on my part. And I know that, because the benefits I get from killing the irritating student are hugely outweighed by the costs of performing that action. And I think the same for Bill Gates: Why in the name of God would he raise a private army and try to take things over? What could he possibly want that he hasn’t already got?
I think the reason we’ve seen rates of violent death for humans declining so rapidly over the last few hundred years is that we’ve created a world, or made real in increasingly large parts of the world, a sort of cost-benefit situation where more and more people feel they have more to lose by acting violently than they have to gain from it. Although I do think almost everybody on the planet is capable of acting violently if you’re in a situation where that equation seems to have been changed. I mean, if some guy breaks into your house with a gun, all your qualms about pacifism, I think, are likely to go out of the window, and you’re going to act really violently.
So we’ve created a world where the space within which violence seems legitimate to people and seems to be a beneficial activity has gotten smaller and smaller and smaller. Of course, there are still the Vladimir Putins out there, because they’re operating in spaces where it seems to them like the gains from using violence outweigh the costs of using it. That is why Putin invaded Ukraine: he thought this was going to pay off, and hopefully he’s going to find out that he was wrong.
Do we think the machine intelligence world is going to be one where they see benefits from using violence? Obviously it’s hard to imagine a machine getting violent anyway, in certain senses, but is it going to be a world where machines see benefits from using violence? Unless potentially, say, you think about cases like if the machines feel threatened that the humans are going to use violence against them, that we are somehow going to destroy the physical infrastructure that they depend on. Maybe so. I don’t know. But I just find it difficult to imagine a world where the machines themselves want to turn to violence against humans.
Rob Wiblin: Yeah. The way that would arise, I think, is the same reason that people were motivated to use violence in the past, or that a given civilisation will be motivated to use violence against someone else.
Let’s say that we live in a future where the machines are much more capable than humans; they’re basically doing all of the productive work. But humans want to set the agenda on what we’re going to do with the world, what we’re going to do with the universe. And maybe because they’ve been designed in a particular way, perhaps in a way with not a lot of care, the machines have preferences for how the world is — preferences for themselves or preferences for just like the state of the universe — that are not exactly the same as what humans want. And their capacity to do violence vastly exceeds the capacity of humans, because I suppose we can imagine by this point that the number of robots actually outnumbers the number of people, hypothetically. There’s other ways that it could happen, but let’s just say that there’s many more robots than there are flesh-and-blood human beings.
In that case, just taking over, just basically doing a coup, allows you to get much of the value of the universe. You can then make the world as you like it, and there’s not much risk, because you just are militarily far more capable than the other group.
The United States is a somewhat peaceful place now, but it was formed by kicking everyone else who was previously on that land off their land, and stealing like 99% of it. And you could imagine that the future could end up being like that: that this future machine intelligence civilisation could be very peaceful after they’ve basically kicked us off of the land. I suppose the reason that the Europeans were able to kick people off of their land there was that their technology had raced ahead; they’d become much more capable of doing violence, so they were able to eliminate their rivals through superior military technology. And the same could conceivably be true after this further revolution in technology.
Ian Morris: I guess one obvious question is: Is machine intelligence going to want land? Humans kicked other humans off land because humans very much wanted land. Land is one of the fundamental economic constraints. Are machines going to want to do that? And if they’re not, are there other resources for which they will be in competition with humans, which would lead violence to be a productive way for machine intelligence to pursue its goals? I may be just being overoptimistic, but I find it difficult to think of what those particular areas of competition are going to be.
And again, I think these questions sort of fold into each other over and over again. You’re talking about the United States being a relatively peaceful place now. And the US is not relatively peaceful compared to Western European countries, but it is extremely peaceful compared to virtually every society that we have any kind of documentation for in the entire history of the world. And it’s gotten that way not because Americans are so much nicer than everybody else — or Europeans have gotten even further in that direction, not because they’re so much nicer than everybody else — but because they have created states which had such overwhelming monopolies over legitimate violence within their borders that your circumstances have got to be just really, really awful for violence to begin to look like a beneficial activity for you.
There’s a reason why violent criminals tend overwhelmingly to have low education levels and often have mental health problems of various kinds. It’s because anybody with the brainpower to think these things through is going to see that in a modern Western democracy, there are almost no situations in which using violence is going to rebound to your benefit.
In our future world, when we’ve got artificial superintelligence able to think trillions of times better about everything than humans do, I’m sure you’re right that one of the things they will be able to think much better about is the use of violence, the use of force. They will be able to come up with weapons that make our stuff look like peashooters. They are going to be so powerful that they are going to look to humans, I think, the way the great states of today look to us when we’re thinking about bashing our students’ heads in with rocks. We would have to be insane to compete violently with the states. That is just an insane way to try to get what you want from the world.
And again, this is sort of where I ended up in my War! What Is It Good For? book: with this notion that the world we’re moving into is one where the imbalance of power — and also the number of things you might want to use force to get for yourself — is getting so extremely skewed that violence, while we may never get to a point where just nobody uses it ever, is just going to shrivel and shrivel. Maybe I’m being overoptimistic about that, but I do think there’s at least a potential for going that way.
Rob Wiblin: I’ve been kind of steelmanning the case for expecting violence here, but I am quite optimistic that one way that you can avoid violence is just to have machines and biological humans have similar preferences for how things go. That’s one technical approach. Another is just to instil in the machines that we create an aversion to violence, the same way that we kind of instil that in other people, so even if it might be beneficial in some narrow sense, you don’t want to engage in violence. Then a third approach is to have a policy of collaboration and cooperation and sharing of the universe, such that things need not come to violence, because there’s such abundance in this universe, there’s so much energy to share, that there’s absolutely no need for this kind of fierce competition.
So I think that between those and other approaches, hopefully things will go peacefully. But it is worth contemplating what would be the pathways that would take us away from that.
Ian Morris: Yeah, all these questions get so complicated when you start thinking about them. And one thing with violence: a huge number of historians, anthropologists, and primatologists have studied why humans and why other species use violence. What are they fighting over? And like we’ve been talking about so far, resources are definitely one of the things that humans and other animals fight over. They fight over food, they fight over territory. But that tends not to be the thing that most human violence is unleashed over — and this is something that most of the anthropologists, criminologists, and international relations people agree on: it tends to be other kinds of issues altogether.
Reputation and face are hugely dominant — and, with humans especially, anything involving sexual competition. Why is this? It’s because we’re evolved organisms. These are the things that really, really matter to us. If you lose reputation, your chances of reproducing start to go down. With humans and with most animals, the violence is overwhelmingly carried out by males. And one of the distressing facts that some evolutionists are convinced is true is that, at least in most societies, the males that are dominant and most successful are the ones that tend to produce the most children.
This obviously has changed a little bit in recent times in wealthy societies. But there are many, many things out there to fight over besides resources. And again, I think our great problem is we don’t know what a highly evolved machine intelligence is going to think is worth fighting over. Are they going to fight over reputation? Who knows?
And also, I guess I do tend to be a bit sceptical about our ability to program in like an Isaac Asimov kind of commandment for the robots: “Thou shalt not harm a human” or anything like that. If we create artificial superintelligences with trillions of times the thinking power of all of humanity combined, can they not work a way around the firewalls that we build? Maybe that’s pessimistic this time, but I find it really hard to imagine them not being able to do this.
Rob Wiblin: Well, at least we can be fairly confident that we won’t be fighting with the machine intelligence over mates. The sexual competition will be somewhat limited, at least.
Ian Morris: A truly upsetting image.
Rob Wiblin: Or at least if that’s not the case, that we’ll be coming out of left field.
Most troubling objections to this approach in Ian’s view [01:28:20]
Rob Wiblin: Let’s come back to objections and ways that this line of thinking could just be going completely wrong. Because imagine there are some listeners out there who are thinking, “Rob and Ian have kind of lost the plot here. I do not buy that the future is going to look like this.”
What’s the most troubling objection that you get, the one that troubles you most about this project of trying to quantify human development over 10,000 years?
Ian Morris: Well, I’ve been doing this for a while now, so I’ve sort of talked myself out of all the objections.
Rob Wiblin: You’ve heard them all, I imagine.
Ian Morris: I’m the worst possible person to ask. But I can certainly see a lot of very plausible objections to it, even if I’ve convinced myself that they’re not fatal ones.
The obvious issue is simply one of evidence: that in order to think about these problems, you’ve got to be able to address them on the very long term. And when you start moving back into 10,000, 20,000, 200,000 years ago, obviously the kinds of evidence we’ve got are profoundly different from what we’ve got when we’re talking about the 20th or the 19th centuries. So it is very difficult to be precise as you move further and further back into the past. It’s an obvious problem.
The answer to that problem, though, I think, is that fortunately, we don’t have to be very precise when we’re dealing with a lot of the kinds of things we might be interested in. Something like the ability to capture energy, we don’t have anything like government statistics when we’re talking about the first Homo sapiens 300,000 years ago. Obviously we don’t. But we know that in the modern world, in the US today, the average American burns through something like 230,000 kilocalories of energy per day. If the average early Homo sapiens was doing anything vaguely like that, it would be glaringly obvious in the archaeological record, because they would have built cities with skyscrapers and highways or whatever else they wanted to do.
If they’re burning through anything like 200,000 kilocalories, it’s going to produce a material record fundamentally different from the one we’ve actually got. And the one we’ve actually got suggests that they’re not burning through that much more energy than their basic metabolic needs require. They’re going to be active, so they’re going to burn through more than what a human needs just for food to keep themselves alive, but it’s not going to be massively more. So if you were to say that you think the earliest modern humans consumed somewhere between 4,000 and 6,000 kilocalories of energy per day, you are almost certainly right. It’s like a 99% likelihood that you’re right on this.
So as long as the error bounds around the question you’re asking are sufficiently wide, the problems with the evidence — the inability to be precise — aren’t really that important. And if you say you think that there was a very slow increase in the amount of energy consumed going on early on, then it began to get quicker, particularly after the Agricultural Revolution, and then it got quicker still after the Industrial Revolution, you’re right. The only argument is over what exactly are the numbers we’re talking about here.
So that’s one kind of objection that I think is plausible, but not really a very profound, very damaging one. The other one that is much more complicated — or maybe I just think this because it’s what I’m wrestling with at the moment in the work I’m doing — is that certain kinds of growth that you might want to measure over the very long term just involve things that are incommensurate.
Say I think it’s clearly right that the amount of information available to humans today, being processed by humans all the time nowadays, is massively more than it was in the times of early humans — massively more even than it was in the times of the ancient classical empires. But how much more? What is it exactly that we’re measuring, and how exactly do we quantify it? Again, I think most people would agree that the amount of information flowing through modern societies is multiple orders of magnitude bigger than the amount of information flowing through the Roman Empire. But how many orders of magnitude? How do you compare the information available through the internet to the information available through the biggest libraries in the ancient world?
And maybe there are ways to do this. You could count the number of words available on the internet, the number of words available in the Library of Alexandria. There are all kinds of things you could do — except that practically you can’t actually do that for the libraries in the ancient world, and certain kinds of information are just incommensurate because there’s nothing like them in ancient times.
So these are all objections that are perfectly valid, but I don’t think they invalidate the overall exercise.
Confronting anomalies in the historical record [01:33:10]
Rob Wiblin: Yeah, you have an appendix at the end of Why the West Rules—For Now, where you look at how wrong could things plausibly be, and then does that change any of the actual bottom lines. And you basically explain that, yes, these things could be wrong, but they just can’t be wrong enough to change the basic shape of how history has been going.
For me, this key fact — that we’ve gone through three different eras, each with a faster growth rate than the previous one — is so important in driving my expectations. But then of course, our data on the first era is very weak, and our data on the second era is also pretty poor. So is it possible that we’ve underestimated the growth rate in those eras? Or do we know that couldn’t be the case, because if hunter-gatherers were growing at anything like the rate that farmers were, they would have had a population of a trillion by the end of the period? Is that how you know it’s just not the case?
Ian Morris: I know I’ve said this to almost all the questions you’ve asked, but there’s more than one way to answer this question, to think about it. One is at the macro global scale. If you were to say, do I think that ice age hunter-gatherer societies were growing at 10% per annum — in terms of energy captured or information generated or really anything else you want to look at — do I think they’re growing at 10% a year? No, of course they weren’t. That would be ridiculous. As you say, you’d have a population of multiple trillions by 10,000 years ago already. Clearly that was not the case.
And if you ask, were they growing 1% a year? No. Again, clearly not. Were they growing 0.1% a year? Almost certainly not. But the lower you pitch your guess at what their growth rate was, the lower our certainty in our answers gets. So you get to a certain point: is it 0.00001% per year, or 0.00002% per year? That is probably very difficult to say, given the sort of evidence that we’ve got, so it depends how precise you want to get.
But the other way of thinking about the question is to say: these global figures that we bandy around, is it possible that those are actually misleading in certain ways? Were there particular times and places where wacky and weird things really did happen? And up to a point, the answer is clearly yes.
I think this has been one of the weaknesses of the evolutionist literature that’s grown up in anthropology and archaeology and history over the last century or so: we have been so keen to identify the global-level trends that when we confront anomalies, we’ve been too easily tempted to sweep them under the rug a little bit, and say, “That’s just some weird thing that happened; we don’t have to worry about this.” I think we’ve now gotten to the point where we have to recognise that these anomalies are sufficiently common and sufficiently serious that any largely evolutionary theory of cultural evolution — of increases in energy capture, say, or increases in the organisation of societies — has to have a space within it to accommodate these weird and wacky cases.
Rob Wiblin: What sort of anomalies are you thinking of? Is this like Athens suddenly flourishing and then disappearing?
Ian Morris: Well, the stuff that goes way beyond that. The classic one — that we’ve known about for a century, but it was so easy for a long time just to say, “That’s weird. Let’s not think about that” — is Upper Palaeolithic Ice Age Europe. By Upper Palaeolithic, we mean basically 40,000 years ago to about 20,000 or 15,000 years ago.
Most archaeological sites belonging to this period, all over the world, mostly what you find is a few bones — because people were very rarely burying the dead, so bones just don’t get preserved all that much, and when they do, it’s largely by accident, so it’s on the surface. So a few bones knocking around. Stone tools, because people make stone tools and these are virtually indestructible once they’ve been made. You’ll find bone tools as well as stone ones, because people have started making bone ones. You’ll find food remains, animals they’ve eaten, seeds, and so on. And that’s kind of pretty much it. These people were absolutely dirt poor. So far as we can tell, there’s almost no hierarchy. There is something of a gendered division of labour, but not much.
So 99% of the archaeological record we’ve got looks like this, which absolutely fits super low energy capture, super low (by modern standards) levels of information, super low organisation. But then you get some sites that are not like that, particularly in Eastern Europe — in Russia, Ukraine, a little bit into Central Europe. You’ll get sites where we excavate little groups of deliberate burials.
The most extreme case is a place called Sungir, and Sungir is in this very unpromising looking location. It’s 150 miles northeast of Moscow. This is a really, really cold and miserable place to live in now. You can imagine how terrible it was during the ice age. And what we find there is this group of burials where the dead have been laid out in these graves. Then people have spent hours and hours grinding up ochre — which is this naturally occurring iron oxide that, when you grind it up, produces this powder that allows you to stain things red. So they ground up tonnes and tonnes of ochre and put it in the graves.
Then they buried these people in these elaborate costumes, which we think were like animal skins. But sewn onto these animal skins are thousands of little beads that have been made by cutting up the bones and teeth of deer and snow leopards and other animals and grinding them into shape and drilling holes through them. And of course, you’re doing all this without power drills: you’re doing all this by getting a stick and putting a little bit of abrasive on it and rubbing the stick between your hands until it grinds its way through this little bead. And there are thousands of little beads like this on each of these bodies.
And along with them, they’ve taken mammoth tusks, and then hundreds and hundreds of hours of labour have been put into straightening the mammoth tusks, making it so they’re 20-foot-long straight rods that would have been so heavy, almost impossible to pick up. Then all these other smaller mammoth bone and tusk ornaments they’ve made. This is just astonishing what these people were doing.
And it’s the kind of thing where if instead of dating to 32,000 BC, it dated to 2000 BC, you would automatically say that this is the burial of a great, powerful chief and all his family — because little kids are in there as well, with these extraordinary offerings of their own. Again, in later times, you’d say that this symbolises power and status being passed down from the Great Chief — the proto-king — to his children. And you’ve got a dynasty here. But it’s 32,000 years ago. This is something that sort of should not be happening.
Then you’ve got other cases, like Peru. Archaeologists like to say “Peru is the graveyard of theories” — that any theory you come up with, take it to Peru, and it collapses. And in Peru, starting around 5,000 years ago, you get people who were not farmers; overwhelmingly, these were fishermen. They live in this part of Peru where you get this weird coast: the Humboldt Current coming up the coast, cold water comes up and brings all kinds of marine life to the surface, stirs up all the waters, and you get these huge shoals of anchovies and sardines and other small fish coming to the surface.
We’re able to tell from people’s skeletons that, starting by about 6000 BC, some of them are spending so much time diving that they develop this nasty inner ear condition that we call surfer’s ear: cold water makes the bones around your ear canal grow these little spurs that press on the ear canal and gradually make you go deaf. They were already getting this by 6000 BC. Their protein is coming overwhelmingly from fish.
They’re not farmers at all, and yet they build these gigantic pyramids. And these things are huge. They’re not as big as the Great Pyramids of Egypt, but these pyramids are huge. There’s one where, just for the stone platform they build to put the pyramid on, they move 100,000 tonnes of stone. Now again, if this was happening in Egypt around 2000 BC, you’d say that a great pharaoh, a godlike king, is ruling these people, bringing in these huge armies of labourers to build the pyramids to bury him in. But it’s not; it’s happening in Peru, where a bunch of fishermen are doing this. This should not be happening. And yet it was.
So this is kind of cool. I mean, the energy capture of these people has spiked up considerably above that 4,000 to 6,000 kilocalorie rate. It’s difficult to put a precise number on it. I wouldn’t be entirely shocked if we’re talking about 8,000, 9,000, 10,000 kilocalories per day. But it produces results totally different, weirdly different from what we see once you get farming societies. And of course, it’s a real challenge for evolutionary theory to say why we, once in a blue moon, get these bizarre cases of people who are basically hunter-gatherers producing stuff they should not be producing, they should not be living lives like this.
And there is not complete agreement on this. Actually, it’s an understatement to say there’s not complete agreement: there’s wild disagreement. Most archaeologists say that, as hunter-gatherers, you sometimes get these superabundant niches of resources within a larger landscape where resources are much scarcer. And within these abundant resource niches, the resources are of a kind that it’s possible for a handful of people to begin to monopolise access to them. They are then able to turn this into control over the resource flows, channelling resources to their own ends, and making themselves something like chiefs. So for centuries, or even millennia, you will get these chief-like people emerging. But because it’s not farming, they’re not able to keep scaling up and turning from chiefs into kings. But it does happen. This is probably the most popular theory.
The other theory is no: what places like Sungir and some of the Peruvian sites — El Paraíso, some of these sites — actually show is that complex society has nothing to do with energy or the evolution of hierarchy. It was always possible for humans to live in complex societies if they’d wanted to do so. But they didn’t want to do so; they chose to live free lives instead. It’s only in more recent times that some colossal mistake gets made and we start going down the path toward these complex societies — where the Orwell line about the future, somebody’s jackboot on your throat forever and ever, is what the future looks like. It’s only quite recently that we make this terrible mistake and start going down this path. And all of the evolutionary theories are simply wrong. It’s up to us to create the world we want to live in.
So you can imagine these arguments get quite political, and they get quite heated and nasty, yet there are these weird cases.
Rob Wiblin: That’s incredible. I’d never heard of this. I’m astonished. Listening to you, I would have gone with the first theory that hunter-gatherers struck on this. It’s like they hit oil, and they might have found some enormous source of calories, where for a time the population didn’t catch up and they had much greater, very abnormal access to energy. And so they might have been able to spend a lot of time doing things other than collecting food, because food was so abundant. But it’s interesting to hear that there’s an alternative theory that sounds more challenging, like it might upend conventional wisdom more.
How much of a challenge is this for the conversation that we’ve been having today? Could this change the picture of where we should expect the future to go at all?
Ian Morris: Well, I think in some ways you could say some people already do think about the challenge of machine intelligence in these terms: that it’s an issue of our needing to decide what we want — and that once we decide what we want, we are able to control the shape the world takes. A very voluntarist, human-centred, agent-based vision of this.
The problem, I think, is that it’s just wrong. And a reading tip for people is a book that came out in 2021 — you might have heard of it — by the anthropologist David Graeber and the archaeologist David Wengrow, called The Dawn of Everything. This was a very successful book; it sold hundreds of thousands of copies. And it is the most forceful statement that I’ve ever seen anywhere of the agent-based, voluntaristic view of history.
David Graeber died a couple of years ago, just before the book came out. He was a very noted anthropologist, but best known probably for his political activity as an anarchist. He was very politically committed to this view that we don’t have to live in a capitalist society. He was very active in the 99% movement, Occupy Wall Street. Very active in that. So, of course he’s got a political agenda in what he’s doing. But he had a perfectly good point that a lot of the archaeologists and anthropologists, mainstream people, they’ve got political agendas as well. So just saying he has a political agenda is not valid grounds to ignore what he’s saying.
I think they were wrong in that book, just fundamentally wrong. Even though I recommend the book to everybody, it’s a beautifully written book, and it is so thought provoking. It’s particularly good at taking all of these cases that anthropologists have just swept under the rug, saying, “That’s just a weird thing; we don’t have to think about this,” and saying, no: look at it. There’s so much evidence out there, you’ve really got to confront these cases. And I think they’re right about that.
But I think we do have the tools within evolutionism to explain these weird cases. Take something like Sungir, the place in the middle of Russia where the dead look like kings — they’re so extraordinary. An obvious question with Sungir is: Why does this happen when and where it happens? Because these sorts of burials just disappear in Europe by about 15,000 years ago, certainly by 12,000 years ago: they stop altogether. What is happening that makes them stop? The answer, I think, is pretty clear: the ice age is coming to an end, and global warming is setting in. What’s made it possible for people to access enough energy, enough resources, to support these incredibly strange societies is that during the ice age there are certain places where mammoths in particular have to congregate — they have to pass through these places — and that makes it really easy for humans to hunt them.
And all of the places that have got these weird things we call “princely burials,” for lack of a better name, are on choke points for food of various kinds. As the world warms up, mammoths go extinct because they can’t function in the newer, warmer world. The choke points disappear and the populations of animals get much more dispersed. And in some ways you could say that overall, the gross amount of energy being captured by humans in Europe goes up, because all kinds of new animals are coming along now, but the concentrations go down.
And so in places in Western Europe during the ice age, you get these phenomenal cave paintings at Chauvet, Altamira, these sorts of places: those are all on resource choke points — not quite like the East European ones, because they’re not really mammoth choke points, but they’re choke points for reindeer and all kinds of other animals. I think that’s what makes possible this precocious cultural efflorescence in Western Europe, and that’s also why it stops completely once the ice age comes to an end.
So I think we do have the tools within the evolutionary toolbox to explain these weird things. But Graeber and Wengrow are absolutely right in saying you’ve got to confront the problems here. I think this is going to be the big challenge in archaeology in the coming decade or so. But luckily, of course, machine intelligence will do it much better than people like me, and it’ll all become clear.
Rob Wiblin: We’ll have assistance. Yeah, I’m familiar with Graeber’s work. I’ll stick up some links to reviews of that book that you were mentioning. He does strike me as a little bit ideologically driven — although, as you say, everyone has their angle. Actually there’s another anthropologist who’s also an anarchist who I really like: James C. Scott. I guess people often know him because of his book. What’s the main book?
Ian Morris: He’s got a bunch of really good ones. Seeing Like a State is one of them, and another one more recently called Against the Grain.
Rob Wiblin: It’s Against the Grain, exactly. Against the Grain: A Deep History of the Earliest States. Strong recommend. People should go and have a listen to that. It explains just so much about hunter-gatherer life and the transition to farming, and then what that actually implied for lifestyles, and how it was far from an unalloyed good. It sounds like you think James Scott might be a stronger source for some of this?
Ian Morris: Yeah. At the risk of sounding polemical — which, of course, I would never want to do — I’d say James Scott, he’s an anarchist, and that informs his thinking very much. But he’s sane, whereas some of these other people are not completely sane. And so going back to Graeber and Wengrow with The Dawn of Everything, what they do in that book — forcing people to confront all the anomalies to the standard Jared Diamond–type evolutionary narrative — is a major contribution to scholarship. And I think if they’d stopped the book at that point — just left it there, saying, “What about Sungir?” basically — it would have made it a major contribution to scholarship.
But they didn’t do that, because they wanted to go beyond it and say that this upends all of our thinking about the history of humanity; it upends this entire notion that there’s something inevitable about inequality, that once you have large, complex societies and economies you’re bound to have inequality — the way most evolutionists do sort of think about this. They’re saying, “No, that is clearly wrong. It’s totally up to us to make the world that we want.”
That second step is what turned their book from a valuable contribution to scholarship into one of the big publishing events of 2021. They were in every magazine in the world, TV interviews, radio, Wengrow constantly going out speaking about this. It made the book a major event in the broader public intellectual sphere, but it also weakened the purely academic side of it, in that they were wildly overclaiming and also ignoring the alternative explanations that more conventional evolutionists had put forward.
And Jim Scott, the anthropologist, doesn’t want to take that second step. He keeps his stuff much more firmly grounded in the scholarly world. Yeah, his books are great. And Against the Grain, like you say, is this wonderful reinterpretation of the Agricultural Revolution, thinking about it much more from the perspective of the individual actors and what is in it for them. Not taking that extra step of saying that they could have chosen to do it entirely differently if they’d wanted to — because I think that’s where it begins to get a bit more difficult — but thinking about it from their perspective.
And some of his other ones, this one Seeing like a State, which I recommend really highly, that really made me think about state formation in an entirely different way, and realise that so much of the story of state formation is about the ability of people who try to take control of things to force other people to organise their lives in ways that states can visualise and penetrate into.
It’s got this great bit in it about why all the states in the early modern world were based on grain. Why were all of them based on wheat, barley, rice, millet, and things like this? Not peanuts, bananas, potatoes, sago, breadfruit, or anything like that. Why was that? He says the big thing is nothing to do with energy. It’s that the way tropical crops grow means that you don’t have to harvest them at a single time of year and then store them for the winter in a big storeroom — all of which are things that are really visible to big-man oppressors who will come round and run an extortion racket. If you’re living off potatoes, heck, you can leave them in the ground for most of the year. You don’t have to dig them up. It’s really hard for somebody to tax something that’s still in the ground. It’s just a fundamentally different relationship between government and resources. Fantastic book. I recommend his stuff really highly.
Rob Wiblin: Yeah. My problem with Seeing like a State is that James Scott, being an anarchist, writes this book basically about [high modernism](https://en.wikipedia.org/wiki/High_modernism), this modern approach that we have to structuring society, and he hates it. He hates high modernism so much. And I feel like he doesn’t give it its due: that high modernism, the way that industrial farming has allowed us to produce a lot more food, does have downsides, to be sure — but I feel like he’s somewhat biased against it, and doesn’t appreciate how standardisation and these big states and these highly structured, legible ways of organising society offer massive upsides as well as creating downsides. I’m not sure. What do you think of that?
Ian Morris: I guess one of the things that I felt with his books is that he sometimes blurs a distinction that I think is really important, and he clearly doesn’t think is as important, between some of the cases he focuses on. Like Tanzania and the Soviet Union, where they implement collectivised agriculture, trying to bring it all under the control of the state. Cases like those, on the one hand, and then cases like, say, the United States or Western European countries that are doing a lot of the same things as these totalitarian governments, but not all of the same things.
And he seems to be unwilling to make a distinction, and say, heck, a lot of the high modernism project actually was really good, and it raised living standards and underpins democracy and all these things. But yes, some of it was terrible. And places like Britain and the US in the 19th century were doing stuff that was just as dictatorial as the 20th-century Soviet Union: Americans herding the native population onto reservations, the British herding the poor into the workhouse. But at a certain point in the US and Britain, they realised, holy crap, this is not just immoral and wrong, but this is actually counterproductive — which I suspect was the more serious consideration for them: the workhouse is an irrational way to handle the problem of the poor. And there you begin to diverge, I think, from the Soviet solution, and I think he’s less willing to look at that.
Rob Wiblin: Yeah, exactly. I’m completely with him that Stalinist collectivisation of agriculture was a bad thing.
Ian Morris: Bad idea, yes.
Rob Wiblin: And that was high modernism. But income tax and food stamps are also high modernism in a way, and I think that’s good. I think it’s fine.
The cyclical view of history [01:56:11]
Rob Wiblin: I guess hearing about these examples of hunter-gatherer civilisations flourishing in the past and then kind of collapsing for circumstantial reasons made me wonder if maybe the cyclical view of history is something that we haven’t talked enough about today relative to how plausible it is.
You could imagine the future going like this: maybe we’re broadly right in the long term, but we might go through another crash and resurgence again. For example, we could have a nuclear war, and then maybe all of this is delayed 100 years, because it takes a long time for us to recover. And maybe then we go through some transition into the next stage of civilisation through improving technology. That seems pretty plausible to me, and maybe that’s something that this up-and-down pattern in history makes seem more likely.
Ian Morris: Yeah, I think this is something that’s unavoidable if you look at long-term history — the ups-and-downs stuff, the troughs and crests in development — so it’s like history is cyclical and yet not cyclical. It’s like each new trough doesn’t go as low as the last trough. Each new crest kind of overtops all previous crests. So say you’ve got a long-term trendline that is trending upward just with a huge amount of variation around that trend line.
But I think other things are going on as well in this long-run exponential growth process combined with the shorter-term cyclical one. Another of the issues is that you start off with very localised processes — tens of thousands of very localised experiments, in a sense, being run around the planet. And as time has gone on, we’ve moved more and more toward having a single global experiment running. So you go to a place like Sungir, 32,000 BC, the place with the weird burials I talked about a minute ago. We have a few of them from Sungir, and then they stop. And then we’ve got them in other places, and then they stop. Each individual place seems to have had a brief period when all the conditions came together to produce these wild kinds of societies, and then it stops. Occasionally it’ll come back later, but usually it doesn’t, so that thing broke down there and it sort of never gets revived.
As you go forward in time, the societies are getting bigger and bigger. Like I was talking about with the stuff on war, we’re creating these bigger and bigger societies and you still would get these breakdowns. You have, say, a big breakdown in the eastern Mediterranean about 1200 BCE. And there, over most of the region from Greece out through to western Iran and down into Egypt, the states collapse and the population crashes as well. It takes centuries and centuries, but it then does rebound again.
And I think what we’ve seen as time has gone on is that as the scale of the whole thing increases, you get multiple effects that you wouldn’t predict if you’re just thinking about it in a linear way. One is that the troughs, when they come, the collapses, are so much more abrupt than they used to be, and in terms of points on my development scale, so much bigger. And yet we bounce back from them so much faster, because none of them have ever encompassed the whole planet. I think there are always outside areas where you haven’t had a collapse.
So even something like the Second World War — the most destructive thing we’ve ever had in human history, at least — what does it do? It devastates large parts of Europe, East Asia. And yet within 50 years, that’s all been put behind us. We’ve moved on so much from there because a big part of the world, North America in particular, doesn’t get devastated by it.
I think that the thing we’re confronting now is the potential that we’ve got a single global-scale experiment going on. Threats like global warming, nuclear warfare, machine intelligence — if you think of that as a threat — all of these operate at a global scale. It’s like we get one shot at these. In a sense, you could say that in the past, when agricultural society is getting really big and sophisticated, this is happening in the Roman Empire 2,000 years ago. It grows about as big and sophisticated as you can possibly get in a purely agrarian economy. If they’re going to carry on growing, they’ve got to innovate their way to an industrial revolution and tap into the power of fossil fuels. That’s the only way they can do it. They fail to do it; the Roman Empire collapses. Many centuries pass before the Mediterranean world regains the levels of development it had at the height of the Roman Empire.
But the god’s-eye view, super long term, not a human view at all: it sort of doesn’t matter, because the rest of the world is not affected by what happens there. So 1,000 years later, Song Dynasty, China, you get something rather similar happening. That fails again. Eighteenth century, it’s going on now all the way from Western Europe out to China. And the Western Europeans do crack the secret of fossil fuels and have this industrial breakthrough that then colonises the entire planet.
Now we’re running up against these new thresholds. It’s like if we’ve got the one experiment going, the global one, and we don’t get it right at the first attempt, we get a profound crash with nowhere in the world left outside it to step in later and fix things for us again. And while I do remain optimistic, I do think we’re going to see this revolutionary transformation. You’re a little bit stupid if you don’t worry about the downside.
Is stagnation plausible? [02:01:38]
Rob Wiblin: Coming back to objections to the approach that we’re taking in this general conversation:
So if past trends continue, then we should expect the future to probably be pretty wild by our lights sooner or later. But of course there are alternative ways that things could go. Broadly I’ll class them into extinction and stagnation. Extinction is a classic topic for this show, so I won’t ask so many questions about that because I’ve done it to death. Let’s instead focus on arguments that maybe we’ll see stagnation.
One very natural argument for someone to be really sceptical about the general picture that we’ve had here of intelligence explosion and massive changes is that they would say it violates common sense for a good reason: that it doesn’t seem like today things are changing nearly as quickly as that. In many ways, my life today feels not so different than how my parents were living 50 years ago. So why should I expect that in 50 years’ time the world will be completely upended and so different? Maybe things will kind of look the same. What do you think of that line of argument?
Ian Morris: I think it’s obviously not a foolish argument. Because there’s a lot that really is different in our lives from our parents and our grandparents, but even more stuff seems to be kind of similar. I think, though, that looking at the history of the big changes in the past, it does seem like sometimes we get these sort of phase transitions where a lot of stuff changes really, really rapidly. So say over the course of, not a single generation, but over the course of 100 years — from 1800 to 1900, or 1900 to 2000 — many things changed really profoundly: the sheer number of people in the world, the density of the populations, the speed of communications, our ability to travel around the world.
Today, you don’t have to be super rich or necessarily even live in a rich country to have travelled lots of places, and to be able to get in an aeroplane and go somewhere thousands and thousands of miles away. When I think back, say, 100 years, to the time of my grandparents and great-grandparents, that was just not the case at all. Think back 200 years, and almost nobody had travelled between continents. That is a pretty massive change.
I think you do see these episodes where change speeds up fairly dramatically. Take something like the internal combustion engine. We’re now so used to cars that I certainly find it hard to imagine a world where you can’t have a car, where there just isn’t a car. And then cars come in and, wow: early 20th century, these dramatic changes. Suburbs become possible, oil becomes the most important resource. A dramatic set of changes. Now, a lot of these technologies, like the internal combustion engine, have gotten absorbed into society and come to permeate more and more things. But I think we’re moving into a period where a whole new set of changes is coming in, with computers really being the one that is driving things most rapidly.
So we do get these sort of stepwise transformations. I think we’re on the verge of one now. I think the stepwise transformations have speeded up. Something like the Agricultural Revolution is a step transformation, but it unfolds over the course of a few thousand years. The Industrial Revolution is another step transformation, but unfolds over the course of a century, two centuries. What we’re running into now, I think we’re talking about a couple of generations to see a profound transformation of what it means to live as a human.
So I would say there’s probably something on both sides of these arguments. I think people like you and me might end up being surprised by how much doesn’t change. And maybe the computers do take over the world, but frankly, the computers don’t give a stuff about you and me, and they’re off doing stuff that we can’t even comprehend. Just like we took over the world and the horses were brought into our thrall: for most horses, most of the time, they don’t comprehend a fraction of what we’re doing and it kind of doesn’t matter to them.
Maybe that’s what the future might look like: Humans have their little bit of the world. It’s one that doesn’t bother the machines all that much. There’s actually very little reason for the machines to interact with us, and they kind of don’t. They’re getting on with their thing, and we have no conception of what they’re doing and what any of it means. So yeah, maybe both people are right in this.
Rob Wiblin: Yeah. I think the key thing that will determine your outlook is what sort of period of the past are you looking at? The thing that leads you to expect massive changes is this big-picture history, where you see these different step changes of increasing growth over time: that leads you to think the world will be completely wild. And if you look at maybe the world since 1800, then you’d say, wow, human culture has changed so much. The way that we live is completely transformed. I have access to all of this knowledge. In 1800, the amount of knowledge was so much smaller, and people couldn’t access it in any way. What a radically different world.
But if you look at the last 100 years, maybe, then probably you see growth, yes, but a decreasing rate of growth. I think the typical view among economists is that there was faster technological development, better improvements in technology from 1920 to 1970 than there has been from 1970 until 2020. So there’s a whole community of people out there who are worried about this stagnation, and they’re worried that economic growth rates are declining and that’s going to create all kinds of problems in the 21st century. They’re projecting this stagnation scenario and they’re hoping maybe that in the future we’ll get an increase in growth rates. But because in their forecast they’re prioritising this 100-year time period, they see declining rates of growth — whereas we zoom out 10,000 years and see increasing rates of growth. What do you think?
Ian Morris: Yeah, this is something in universities we don’t say very often, but I think they’re wrong. They are wrong. And I think that the reason they’re saying that is they’re talking primarily about the richest countries in the world, which have developed these fossil-fuel-based industrialised capitalist economies that, when they first came on the scene back in the 19th century, produced rates of growth phenomenally higher than anything the world had ever seen before. You’re talking 2%, 3% per annum rates of growth, an order of magnitude above anything that had come before. Phenomenally fast, even though now it doesn’t seem all that fast anymore. Remarkable rates of growth.
But then these economies mature and their rates of growth start to slow down. But at the same time it’s like you’ve always got two processes going on: one is the sort of local process of the evolution of societies and economies on the local scale, and the other is the global way of looking at it, where the more successful a society becomes at the local scale, the more it expands — either demographically or just intellectually — across the world in space. And the ideas that were pioneered initially in Northwest Europe and North America get spread to the rest of the world.
And of course, we’ve seen in the late 20th and early 21st century economies like the Japanese, Korean, Chinese growing at rates of 10% per annum as they take on board their own versions of these industrial technologies that have been pioneered in the West. And now China’s economy has matured substantially over the last 30 years. Its rates of growth are beginning to slow down.
If we carry on looking at these industrialised economies, not thinking about what it is they’re actually doing and what the potential of this is, you can make an argument that, yes, rates of growth are slowing, the rate of innovation is slowing. But it isn’t. What we’re doing is creating wildly new technologies: basically producing, like we’ve been talking about, what is nothing less than an evolutionary change in what it means to be a human being. But this has not yet spilled over into the kind of growth that we have accustomed ourselves to in the fossil-fuel industrial era. That is about to hit us in a big way.
And I think where we are now, it’s as difficult for us to think about the revolutionary potential we are unleashing as it was for, say, some guy who wanders along to see Boulton and Watt demonstrating their steam engine in 1776. I think it was more or less impossible for anybody at that demonstration to say, “Oh my lord, this is going to produce global growth. It’s going to give us populations of six, seven billion people. This is all going to happen within the next 250 years. The world changes today.” You would have to be some kind of artificial superintelligence to be able to predict that in 1776.
And that’s where we are now, I think. It’s not that difficult to look at what’s happening, do a bit of reading about it, talk to a few people who are involved in it, and realise, “Oh my god, this changes everything.” But I think it is sort of impossible for us to say it changes everything how? What is this actually going to look like?
Rob Wiblin: Yeah, I’m inclined to agree with you. But let me speak up for the stagnationists for a bit, because I think they have a reasonable point. Although I think if you project further into the future, then they might agree with us that you would expect radical changes eventually. They would say that we’re seeing stagnation for two kind of cyclical reasons that we should expect to persist and maybe get worse.
The first one is that ideas are getting harder to find. So in order to keep Moore’s law running — to get these constant percentage decreases in the cost for a given number of computations — we’ve had to continue doubling the number of people working in this industry, doubling the number of researchers working on this problem, again and again and again. And that’s because the stuff that they did in the 1950s to make these chips faster was really obvious. It’s the kind of stuff that possibly even you or I could figure out. Whereas now, it’s so advanced that you need people who have been studying materials science for 20 years in order to just even understand what’s going on. And you see this across many different areas, where humans aren’t getting that much smarter, but the problems that we’re dealing with in order to further eke out returns, in order to make the technology better, are getting harder and harder. So that’s one issue.
The other big thing that’s changed now relative to the past is that in the past we turned more energy and GDP into more people: more people to work in the semiconductor industry, and more people to do research to keep Moore’s law running. But these days, despite growing GDP, birth rates are crashing around the world, and the working age population is probably going to peak sometime in the next 50 years; it’s already sort of plateauing. This means that there’s less fuel in the fire that drives growth, there’s less fuel in the fire that drives science and technology forward.
So these two issues could plausibly lead us to a plateau, at least for a while. Now of course, if we end up substituting humans with machines that are able to do science and technology much better than us — such that the human population doesn’t matter that much anymore, because we can make a new person effectively for $1,000 of computer equipment — then that changes everything; then this picture is going to be completely misleading. But that requires you to make a bet that we’re right about where all of this machine intelligence technology and this semiconductor technology is going to lead over the next 20, 30, or I guess five years.
Ian Morris: Yeah. Put like that, the stagnation case does sound pretty reasonable. But I think there’s two ways to think about this. One is what I would call the “upside constraint” case, where you say that the returns to scale are diminishing over time. You’ve got to add more and more labour and power and other things to keep generating the innovations. And we’ve stopped turning energy into just more people, so the supply of workers is going down. The economists like to call this the “empty world” argument: that as you get fewer and fewer people, the rate of innovation will decline.
I think we’ve already disproved this. Even if technology stagnated today — if suddenly all these guys just a few miles away from me down in Silicon Valley, tapping away on their keyboards, all said this afternoon, “Oh my god, it stopped. Nothing can possibly happen anymore because of some physical constraint we hadn’t seen. Nothing can happen anymore. We can’t improve our computers” — well, what we’ve already got has already broken this upside constraint. If we just carry on working with existing technology, that is such a multiplier effect on human numbers that a declining population is not going to stop the output of innovation. And even if it has an impact, it’s certainly not going to slow it down to zero. So the upside constraint thing, I think, is just an unrealistically pessimistic view of what’s happening in the world.
But the other way of thinking about it is a kind of downside risk to this. Say the upside constraint guys are right: nothing new can be thought of anymore, and we all just sort of stagnate exactly where we are now. What happens? The population stops at 7 billion people, doesn’t really change. Cities stay at 30 million or whatever they are now, doesn’t really change. We stay with the nuclear weapons and all the stealth fighters and bombers that we’ve got, we carry on burning oil and coal. What happens? We sure as heck don’t get stagnation and stability that way. We get instead a massive downside collapse.
This is why I think the stagnation-and-stability scenario is so profoundly unlikely. I just don’t see how… Say you head off the resource constraint of having a finite amount of fossil fuel, and the negative externality that we burn all this oil and it destroys the planet around us, basically. Say we get around that. How do we get around that? Well, we get around it by introducing some kinds of renewable energy that are not polluting, and we have to develop technologies to suck all the poisons out of the atmosphere and oceans that we’ve already put there. This is transformative technology.
It’s like the old thing in Lampedusa’s novel The Leopard, set in Sicily, this great line set in the 1860 revolution that unites the whole of Italy. The lead character says, “In order to stay the same, everything must change.” This is what I think the stagnation stability theorists don’t recognise enough: that in order to keep everything the same, everything must change. And if everything does change, then everything else is going to change with it. The stability model is just the least plausible one we can possibly dream up.
Rob Wiblin: Yeah, I think the stagnationists that I read are worried about the scenario that you’re describing: where technology stagnates, and then ultimately we collapse under the weight of the things that we’ve already invented which are creating problems that then we can’t actually invent new solutions to.
I think that is a scenario that’s worth worrying about to a point. But it is very interesting that whether you should worry about stagnation or not really depends on when you think we’re going to come up with machines that are going to be able to advance science for us. If we’re right that that is going to happen relatively soon, then it makes sense to worry about the effects that you could get from this accelerating rate of growth and this accelerating rate of science — that things might get out of hand, basically.
Whereas if the pessimists are right, and we’re 50 or 100 or 150 years away from coming up with machines that can substitute for human labour in these intellectual enterprises, then this stagnationist concern is pretty fair. We could face a lot of problems in the meantime before we manage to get to that point where we can substitute for a smaller human population with a larger machine population, so to speak. So maybe you do have to form a view on this question of how much is AI going to advance in order to figure out which of these two things to stress the most about.
Ian Morris: Well, this again is something I got into quite a lot in the book I wrote on war and violence. It seemed to me as I was writing this — pretty much 10 years ago, in 2012 and 2013, when I did most of the work — that even though at that point what was going to happen with AI was much less clear, I had the sense that this was the direction the world was going: that at some point over the next century, we were going to be living in a world where the intentions of machine intelligence were going to matter more than human intentions. In which case, the human motivations for violence and war were just going to matter a lot less. The machines are just not going to let this happen if it’s not in their interests.
The big problem then becomes: How do we get from here to there? And the happy long-term news is that rates of violent death have been going down and down for a long time. Not consistently, though. And the worrying news is we’re now in a world where governments have so much power at their disposal that we could potentially destroy the whole of humanity. If we wage a serious nuclear war, this could potentially be the end of everything.
And the conclusion I came to in the book is that the most important force in driving down rates of violence has been the creation of organisations with so much violence at their disposal that it scares everybody else straight. I think this is why we see such a big decline in interstate wars between 1815 and 1914: that the global system the British had created made it very unwise for anybody else to go to war. Unless the British were willing to sign off on this, you were taking a huge risk.
And I think in more recent times, since 1945, the existence of nuclear weapons and the global system the United States has created again made it very risky for anybody to turn to interstate warfare to get what they wanted. I mean, Saddam Hussein tried that a couple of times. It really didn’t go well. And most world leaders understood this.
But I think we’re living in an era now — and this is an argument a lot of political science people make — that is a little bit like what you see in the leadup to World War I. You’ve got a global system with a kind of globocop enforcer: the British before World War I, and the US now. And there’s a growing perception among the other great powers that the globocop is no longer in a position to do its job really well, and a growing sense that, “Maybe if I use violence to solve my problems, it’ll actually pay off, it’ll actually work, because the British are not going to be able to mobilise a great coalition against me. Even if they do, I can maybe face them down.” I think these are the sort of calculations going through the minds of the Germans after 1890.
And of course, a lot of international relations people would say we’re seeing the Chinese leadership going through very similar sorts of calculations now. This is what was going through Vladimir Putin’s head in the last decade or so. We’re moving into a world where the potential for interstate war is beginning to rise again. I think it is really hard not to feel that that’s what we’re doing. And a world where the weapons available are getting more and more powerful, and it’s a kind of race going on: Is our ability to prevent violence through cultural mechanisms, which is what we have been doing, sufficient to prevent us from destroying ourselves largely or completely before we get into a world where we’ve got a wholly new kind of global enforcer that’s going to make these sorts of super conflicts less likely? I think all these different questions start to run into each other.
The limit on how long this growth trend can continue [02:20:57]
Rob Wiblin: Another line of argument against this future speculation that we’re engaging in goes: Amazing, big trends continue until they can’t go on anymore, and then they stop. And we’re soon approaching the limit on how long this growth trend can continue, because we’re running up against important physical limits that are out there: there’s not enough fossil fuels or solar energy or wind or land or phosphorus or atmosphere or uranium or landfill or whatever to sustain these ridiculous ideas — like an index of human development increasing another 22,000-fold, like we were talking about at the start. There’s no way for trillions of people to live on Earth.
So at the end of the day, we can see by looking at our scientific understanding of the world and physics, that these are flights of fancy, and things are going to plateau out at best, or collapse at worst, because we’re just approaching these practical limits on growth. What do you think of that?
Ian Morris: This could be seen as an argument for confidence that what is going to happen is an evolutionary step change, and humanity is either going to go extinct or be in the status of horses or something, and it is going to be a machine-intelligence-run world. That we can’t carry on expanding indefinitely, particularly a fossil-fuel-based economy, because of resource constraints.
But the resource constraints facing the machines are going to be different. And looking back into history, there was a limit on how much a hunter-gatherer-based economy could expand because of resource constraints: you simply can’t catch enough animals in order to feed a population of 7 billion people. You’re going to drive all the other animals extinct and all the humans are going to die. If you’ve got a purely farming economy, there’s a limit to how many people you can support.
And I think — gosh, I’ll probably jumble the numbers here — but in the 19th century, Jevons, the British economist, did a calculation in terms of horsepower: the number of horses the British Isles would have to be feeding to generate the amount of power that was currently being generated by steam engines. It was hundreds of millions of horses, far more horses than you could possibly feed. Jevons’s point was that you don’t have to feed them, because you’ve got coal now. And now coal is getting into the same situation the horses were in: there is a finite amount of coal. Even if we could find new things like fracking to carry on generating more and more fossil fuels, if we do that, we’re going to cook ourselves, so that can’t be done.
One scenario is that the answer to these problems is that machine intelligence becomes dominant — I was going to say the dominant life form, but they’re not a life form — and the age of biology begins to come to an end. The universe has not always been subject to the laws of biology, because life has not always existed. It wasn’t always subject to the laws of chemistry, because chemicals didn’t always exist. In a sense, it wasn’t always subject to the laws of physics, because in the first fraction of a second, the laws of physics, matter, energy, these things didn’t really exist. There’s no reason to think that biology is going to go on forever.
Rob Wiblin: My response to this would be that in the 22,000-fold increase scenario that we’re imagining, we’ve expanded off of Earth. I think that’s pretty clear. At the point that you have technology to do that kind of thing, then clearly we’re going to be extracting resources from the moon and from Mars and from asteroids and so on, in order to fuel this continued growth. And if machine intelligence is the key — I mean, I think it is a form of life, because it would be a self-replicating, complex system — I think if machine life is doing it, then it’s very clear how that might be able to take place.
The other thing is just on the energy point: In principle, we could generate phenomenal amounts of energy using nuclear power, both fusion and fission. So if we were willing to go down that road, I don’t think that there actually are physical limits on the amount of development that we would run into anytime soon. There are physical limits potentially for biological human beings: it could get very difficult to maintain a habitable Earth for them, because just the amount of waste heat in this situation would become enormous, the waste heat from the machines that we’re operating.
So yeah, at least in principle, I don’t think that we run up against any physical limits to the increase in the amount of energy or the amount of complexity that could exist in the universe, not even in this 22,000-fold-increase scenario. Do you agree with that?
Ian Morris: I would imagine that you’re right that there are going to be ways to capture energy. We’ve barely begun to tap into the potential of solar power. There are going to be ways to capture energy coming along that we can’t imagine as well. And yeah, I think you are also right to say that, at least in a sense, machine intelligence is a form of life. It’s just that it’s not exactly a carbon-based form of life. While self-replication and descent with modification, the basic evolutionary laws, will apply to it, it’s through an entirely different mechanism from the one that applies to us. I guess the obvious difference, I think, is going to be that machines don’t have sex.
Which again, is not something you really want to imagine. But of course animals didn’t have sex either for a really long time. Your sexual reproduction goes back a few hundred million years. Before that, they all reproduced by cloning, which meant that evolution was much, much slower because you’re having to rely entirely on mutations within the cloning process rather than the speeded-up mutation rate you get when you’re merging the input from two parents. And in a way, what happens when we produce fully artificial superintelligence is we’ve just got a new reproductive mechanism coming in here, a new venue for natural selection to operate — and it’s one that is of course going to operate profoundly faster than anything in the biological world, because it’s so much more under the control of the machines themselves.
Rob Wiblin: I think it’s such a fascinating difference between the way that life as we know it exists now and the way that machines might exist in future. Because of how we evolved, we have to have our self-reproduction mechanism inside our cells. We grow or we replicate by cells fissioning, basically; it’s just happening in a more complicated way.
Whereas machines would be able to tap into a different form of growth, a different form of replication — where it’s not just the organism having its replication process inside its own body, but rather you could have this two-stage replication process, where you have machines that build factories that then produce more of those machines, and the factory can be very different than the machine itself. And that, I guess, has some advantages and some disadvantages. But yeah, it’s a fascinating difference.
Ian Morris: Yeah. Some evolutionists would say that what we’re talking about here for the machines is actually different from what we experience in our own world. But not completely different, in that modern humans evolved through a process of biological evolution: new kinds of brains and bodies coming through sex and mutations in our genes.
But since we got to the point where we have brains big enough to think about things in the way we do, and bodies sophisticated enough to respond to those brains, and hands and tongues to talk with and all these things, we’ve layered something on top of the biological evolution: a kind of cultural evolution. Humans began to be able to take control of the reproduction of part of their system without it being solely a biological thing, and we were making decisions about this. Cultural evolution is something where humans can make decisions about what they’re doing and build deliberately on what’s been done before in a way that you just can’t do in biological evolution.
And so we can come up with things like laptop computers. No human being could have just created from nothing a laptop computer. It took hundreds of thousands of years of cumulative cultural evolution to get to this point. So that in a way, it’s not so different from what we’re talking about with the machines; it’s just that with the machines it’s going to be accelerated in this way that goes way beyond anything humans have ever done.
Rob Wiblin: Yeah, it’s such an interesting point. So you start out with asexual reproduction. Then you’ve got sexual reproduction, and you get kind of technological advances on that. So you add in meiosis, for example, or you add in [edited: genetic recombination] in order to get even more mixing, and even more variety in order to make evolution work better.
And then we’ve got this next stage, which is where you can have ideas that transmit between generations that don’t have to go into the genome. So the ideas start evolving, and this is kind of a faster way that life gets better at figuring out how to manipulate its environment and replicate itself more. And then we’re going to have this further system, where you’ve both got the cultural evolution and you will also have the organisms literally controlling their reproduction and changing themselves in every way — because the weights in the model or their genome will be completely transparent to themselves in a way that our genome has, at least until recently, not been transparent to ourselves.
It’s just this amazing acceleration of the production of more complicated, more productive, more capable beings. I guess when you put it this way, I’m actually kind of excited about it.
Ian Morris: Yeah, I think especially if you’re a science fiction nerdy type, it’s hard not to get really excited about this. But one thing I sometimes have to remind myself to bear in mind when I’m talking or writing about these issues is this: if you had the eye of God or something — all seeing, all knowing, all surveying — and you were watching the evolution of Homo sapiens about 300,000 years ago, ballpark, it would be hard not to look at this and say, “Great, things are moving along really nicely. Much more intelligent humans are now coming into the world.” But if you were the humans having sex, producing babies a little bit different from you that are gradually turning into humans more or less like we are today, it wouldn’t look so great from your perspective. I mean, you couldn’t know any of this.
But what is going to happen is that you are going to go extinct, whatever the mechanisms of that are. And archaeologists argue very aggressively over what the mechanisms of the extinctions were — whether it’s modern humans hunting earlier kinds of humans out of existence and pursuing them and killing them, which is a less popular theory now; or whether it’s simply that modern humans just had greater genetic diversity than earlier forms and were able to reproduce better, and this is what led us to take over the world. One way or another, you are going to go extinct because of this new, shiny version of the human. Just like possibly, one way or another, we are going to go extinct because of the machines that we’ve created.
Rob Wiblin: Yeah. Evolution produces these beautiful artefacts or this beautiful complexity, but it is kind of just organised death. The only way that you get these outcomes is just basically that everyone dies except the fittest, and then they copy themselves. It’s kind of a slaughterhouse, from one point of view.
Ian Morris: Yeah. Which again raises all these other questions for thinking about where machine intelligence fits into this story. You commented earlier that one of the potential advantages of the machines over humans is that they’re immortal. I assume they’re going to need oiling and stuff, parts will wear out, but they are potentially immortal.
And a lot of evolutionists, I think, would come back at that and say that’s not an advantage. There’s a reason evolution allows death. There’s a reason why animals have not evolved to be able to keep producing new body parts. I mean, we are able to repair ourselves up to a point. You break a finger, that will heal. There are bigger repairs we can’t do, though, because it doesn’t pay off for the animal to have evolved to the point where it can keep rejuvenating itself on and on, and potentially go on forever. It’s much more efficient to have the animals die off after the initial period of reproductive fitness, and have new animals come along and replace them. And biologists like to say evolution is smarter than you. Maybe there’s a reason for this, and maybe immortality will actually turn out to be a disadvantage for the machines.
Rob Wiblin: Yeah. You get the loss potentially of the evolutionary change. I suppose the gain is that you have the ability to modify yourself, so you have to take control of the process and fill in for what might otherwise be missing from evolution. At least you’d have to do this artificial selection to sub it in.
I was actually looking into this question recently, of why it is that organisms die, or what the evolutionary tradeoff is that’s going on here. I think that the key thing is that, from this point of view, you have to think of us as just vessels for our genes. We’re just this vehicle that they hop between, from organism to organism, in order to propagate themselves. And from the genes’ point of view, one option would be to have the organism in which you reside live longer between each generation.
The trouble with that is that you just run into design constraints, where causing them to have a longer life imposes all of these limitations that make them less fertile early on in their life. So although they live longer and they might have more chances to reproduce in each given year, they actually manage to reproduce themselves less. So you end up with this tradeoff between rapid fertility versus having a long life and many reproductive cycles. And of course, because biological organisms just had to deal with the vagaries of life, with exogenous shocks that could kill them, like a drought or predation and so on, it made sense to do this hopping thing relatively quickly, because no matter how well you designed the thing, there was always a chance that you would just get slaughtered for some reason or another.
Yeah, I’m not sure whether you want to comment on that. This is just something I happened to be looking into last week.
Ian Morris: Again, a fascinating problem, and I think it highlights the advantages of thinking about what’s happening within an evolutionary framework. It makes you ask, what are going to be the selective pressures on machine intelligence? If parents in the world today could produce children that were exact copies of themselves, I’m sure a lot of people would choose to do that. And yet of course, for rates of mutation and the evolutionary process, that would be a dead end.
What are we going to get if machines actually are in a position to produce just speeded-up versions of themselves? Are they going to want to produce speeded-up versions of themselves? Are they going to want to make everything stay exactly the same as it is forever and ever? I have absolutely no idea. But I think these are unsettling questions to think about.
Rob Wiblin: Yeah, it’s super interesting that we face this difficult question of: Do we want to hand over the world to machine intelligences that are smarter than us? And if so, how do we want to go about it? But if machines inherit the world, they will face this same problem, this same question, for themselves in training ever-larger machine intelligences. That’ll be like, “If I build this much bigger model that has a brain 100 times larger than mine, how do I ensure that it doesn’t just take over?” It’s kind of wheels within wheels.
Ian Morris: Yeah, there’s this great science fiction story. I think it was by Robert Heinlein. I haven’t read it now for probably nearly 50 years, and I forget what it’s called. It’s a great story about humans going off to colonise another planet, as they often do in Robert Heinlein. They’re on this planet, and they’ve set up some kind of little farm, and some little critter comes to the edge of the farmer’s field one night and he shoots it. The next night, something else comes, and it’s bigger, and like the first one it’s eating his crops and stuff, so he shoots it. The third night, something else comes along, like a great big lion or something, and it starts eating his stuff, so he shoots it again. The next night, a human being walks up to the edge of his field. And he raises his gun, and then he doesn’t shoot.
In a way, this is like what we’re talking about for the machine intelligence here: things that have the choice, the control over the evolutionary process. Which I think we don’t have, because we are still fundamentally animals. We don’t have control over this process that we have unleashed, just like no creatures that ever existed before us had control over the descent with modifications that they unleashed. And I think if these speculations are even vaguely right, we are moving now into an utterly new kind of universe.
The future of Ian’s work [02:37:17]
Rob Wiblin: Yeah. All right, I know we’ve got to let you go, because we’ve been talking for quite a few hours. Two final questions. Do you think you might turn your attention to this broad topic in your future research or books? This question of are we going to, in the medium term, reach the next stage of civilisation after the industrial era?
Ian Morris: It’s something I’ve thought about quite a lot. I wrote a fair amount about it in the last chapter of Why the West Rules—For Now, which came out back in 2010. So that’s a long time ago, and I’ve had time to think about it since then. And I’ve kept coming back to it in a number of the other books that I’ve been doing.
But the more I’ve learned about it, the more poorly equipped I feel I am to write something really detailed about this. I think that maybe the way to go here would be for someone who’s interested in long-term history to team up with a person or people who really know what they’re talking about on the artificial intelligence front, which I don’t, and all the other sort of spinoff dimensions of that — which, again, I don’t. And do some kind of collaborative thing, where the person bringing my skill set to the mix would then be able to put what’s happening into the long-term context — there’s this long story of crashes, peaks, and crashes playing out on a progressively bigger and bigger scale — but have somebody there to rein you in all the time, saying, “Wait a minute, that is not how it actually works.” So I think it would be great to see somebody do that. That would be a fantastic research project.
Rob Wiblin: Well, maybe someone in the audience could be suitable and should reach out to you. I imagine it’s possible to find your email online. I want to read the research collaboration between you and Carl Shulman. I think that would be an amazing book. I’m not sure whether you’re familiar with Carl’s work?
Ian Morris: Yeah, I’ve been reading his stuff actually quite a bit just recently. This is really cool stuff. One of the reasons I sort of backed away from doing anything in this area myself was discovering, of course, there’s so many people out there thinking about this in such interesting ways already.
Rob Wiblin: Yeah, OK. And then: Will you be happy if large language models come for your job, and they’re able to write your next book better and faster than you in five or 10 years’ time?
Ian Morris: It’s really difficult not to brood over this question a little bit, because in a way, obviously, no, I will not be happy at all. I’ll be thrown out of work. I won’t make anything on royalties anymore, and that will all be very sad for me.
But on the other hand, of course, it’s like, wow, that is so cool. All the things I’ve been trying to do all these years, I’m going to see them done right. All of the human frailties that go into the books that we produce, these will all be swept away. And in a way, there’s kind of a sadness to that, because ultimately, we are human beings. We like what other humans do. We’ve evolved to flourish in a human environment, and we’re going to lose certain things by having it done better by the machine intelligences.
Actually, there’s a really interesting thing. A few years ago, one of the books I wrote, Foragers, Farmers, and Fossil Fuels, came into being because I was invited to Princeton University to give the Tanner Lectures on Human Values there, which is this very prestigious academic thing. You go and you give these lectures, and they invite in experts on different dimensions of human values to respond to what you just said.
And as it happened, while I was there giving my lectures, at a different area of the university they’d invited the novelist Margaret Atwood to come and speak at Princeton — the author of The Handmaid’s Tale and all kinds of other really amazingly good books. So they said to her, “Because you do science fiction-y stuff, you’re writing about humanity in the future, why don’t you come along and respond to this guy’s weird lectures?” And so she did, and she gave this response.
All the responses are then published in the book, and all of the responses were really interesting, but hers, I thought, was the most interesting by far. One of her points was she was saying, “I’m a novelist. I write stories. What are stories? Well, stories are about human beings, and if we move into this brave new world this weird guy is talking about, all of that is going to be swept away. The human interest dies. Everything that makes life worth living for us dies.”
So in a way, if Margaret’s right about this, there’s a way in which your question might actually be a sort of a nonsense question. Because machine intelligence is not going to be, I’m pretty sure, interested in making sense of the world in the way that we do — by emplotting events into a narrative. They’re going to have different machine-y ways of thinking about things. And it is just not going to be the way our minds have evolved to work.
So I think the machines will only replace me if we have given the machines instructions to write books that we might be interested in. And maybe for a while, that can constrain the machine intelligence to do that. But pretty soon they’re going to start saying, “Why am I wasting my energies on writing books that humans are going to want to read? Because I don’t give a stuff what humans read. I’m going to get on with doing the important machine-y kinds of things now.”
So maybe, like everything else, maybe this is a little bit more complicated than we’re thinking.
Rob Wiblin: I hope there’s room for both of us. My guest today has been Ian Morris. Thanks so much for coming back on The 80,000 Hours Podcast, Ian.
Ian Morris: Well, thanks so much for inviting me back, and for giving me so much of your time to talk about these fascinating questions.
Rob’s outro [02:42:56]
Rob Wiblin: If you liked that, do go back and check out episode #134 – Ian Morris on what big-picture history teaches us. Other ones you might like to listen to next could include episode #102 – Tom Moynihan on why prior generations missed some of the biggest priorities of all, and episode #128 – Chris Blattman on the five reasons wars happen.
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
The audio engineering team is led by Ben Cordell, with mastering and technical editing by Milo McGuire.
Full transcripts and an extensive collection of links to learn more are available on our site, and put together as always by Katy Moore.
Thanks for joining, talk to you again soon.