Transcript
Cold open [00:00:00]
Anders Sandberg: There is a kind of time constant for how long a civilisation is likely to be around: there is a kind of half-life for civilisations. But the risk of a civilisation collapsing doesn’t seem to increase with time. If there was some kind of decadence building up, then you should expect that over time it became more likely that it crashed.
So, civilisations probably collapse because of bad luck rather than because something bad is building up. Now, why do we have this bad luck? Is it just that it’s very unlikely events that conspire to bring things down, or is it that there is something intrinsic? And even worse: bad luck is rather hard to defend against. You can imagine a Dyson sphere covered with rabbits’ feet and horseshoes, hoping to ward off bad luck. But that’s unlikely to work. Probably the best way of warding off bad luck is having multiple copies, having backup civilisations — and if one crashes, the other ones shake their heads, pick up the pieces, and resettle that part of space.
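Anders’ constant-risk claim corresponds to an exponential survival curve: if the hazard rate doesn’t grow with age, survival is memoryless. A minimal Python sketch, where the 1%-per-century hazard rate is an illustrative assumption rather than a figure from the episode:

```python
import math

def survival(t, hazard):
    """Probability of still being around after t centuries, given a constant hazard rate."""
    return math.exp(-hazard * t)

hazard = 0.01                     # assumed 1% collapse risk per century (illustrative)
half_life = math.log(2) / hazard  # ~69 centuries under this assumption

# Memorylessness: conditional on surviving s centuries, the chance of
# lasting another t is the same as lasting t from the start --
# bad luck, not accumulating decadence.
s, t = 30.0, 10.0
fresh = survival(t, hazard)
conditional = survival(s + t, hazard) / survival(s, hazard)
```

Under a constant hazard, a civilisation that has already lasted 3,000 years faces exactly the same odds over the next millennium as a brand-new one, which is the sense in which collapse is bad luck rather than accumulating decadence.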
Rob’s intro [00:01:01]
Rob Wiblin: Hey listeners, Rob Wiblin here, head of research at 80,000 Hours.
Today I’m back with repeat guest and audience favourite Anders Sandberg, talking about many things but in particular the work he has done on a hopefully forthcoming book called Grand Futures, where Anders considers what living things might one day be able to accomplish in our universe given the laws of physics.
We talk about:
- Whether there’s a best possible world or we can just keep improving forever
- How you could improve what happens when two galaxies collide with one another
- The impediments to AI or humans making it to other stars
- How the universe will end a million trillion years in the future
- The grabby aliens theory
- Whether civilisations get more likely to fail the older they get
- The best way to generate energy that could ever exist
- Black hole bombs
- The likelihood that life from elsewhere has already visited Earth
- And a dismaying number of other things.
OK without further ado, I bring you, Anders Sandberg.
The interview begins [00:02:18]
Rob Wiblin: Today I’m speaking with Anders Sandberg. Anders is a Senior Research Fellow at the Future of Humanity Institute at Oxford University, where he looks at low-probability, high-impact risks; the capabilities of future imaginable technologies; and very long-range futures. He has a background in computer science, neuroscience, and medical engineering — but honestly seems to have some level of amateur interest in almost every area of science that I’m aware of.
To give you a sense of that, here are the titles of our last two interviews with Anders: episode #29 – Anders Sandberg on three new resolutions for the Fermi Paradox and how we could easily colonise the whole universe and episode #33 – Anders Sandberg on what if we ended ageing, solar flares, and the annual risk of nuclear war. Anders is maybe actually best known in the broader world for doing a deep dive into what would happen if the entire Earth were replaced with blueberries. It’s not what you think.
But for many years now, he has been working on a more serious academic book called Grand Futures, in which he plans to address questions like: How good could the future be? How much might be achieved? And what do we need to do in order to get there?
Thanks for coming back on the podcast, Anders.
Anders Sandberg: Thank you. It’s delightful to be here again.
Grand Futures [00:03:19]
Rob Wiblin: I hope to talk about how good the future could be and ask about possible impediments to complex life and machines spreading beyond our solar system. But first, tell us about the vision for this book, Grand Futures.
Anders Sandberg: So it all actually began in a hailstorm on a Dutch beach. I had been giving a talk at a motivational meeting for a leadership school, and the theme of the day was “What’s your vision?” And we were supposed to be having these silent walks along the beach to think about our vision, whether that was for a startup or nonprofit, and I realised I probably need to write a book. I realised that I have a lot of research all over the place, I have a lot of things to say — and I can spread it across a lot of papers, or I can try to put it together into one big vision. And there is a reason you want to have that big vision: to give hope, to give cohesion, to actually say that this is a direction we might want to be going in, or at least some directions that might be worth exploring.
And that’s how I started. I started that afternoon, sketching out some chapters that ought to be in the book, and then I kept on going. And it’s been a few years since then.
Rob Wiblin: When was that?
Anders Sandberg: This must have been back in 2016. It was November and really rough weather on that beach. But that got me started. And I probably wrote the first chapter the next year. I spent some time thinking: What should be in this kind of book? What are all the possible chapters? And then I tested it out on various friends and they told me, “Anders, cut out the first third. That is about how to solve all the world’s problems. That’s a book on its own. It’s an interesting book, but you’re not going to write that.” So this is actually two-thirds of a much bigger hypothetical book that I would probably never finish writing.
Rob Wiblin: Yeah. On life, the universe, and everything. What are some of the questions that you hope to address in Grand Futures? And maybe what reaction do you hope it might inspire among people once they one day get to read it?
Anders Sandberg: I want to try to figure out, if we get our act together and survive, then what are the different kinds of limits to what we would do as a maturing civilisation or some future super-civilisation? What does the universe really allow in terms of the good life or grand goals? And of course, there is an interesting value question here, of what we mean by “good” and “grand.” I’m leaving that until relatively late in the book.
I’m mostly looking at the physics, like what could you do if you really want? So typical questions are: How rich could they become in terms of material wealth? How sustainably could they live on Earth? What about settling the solar system? What about settling the galaxy? What about settling the universe? Where are the constraints there? How much can we make matter do? And what about energy? How much energy is there for us, and what could we use it for? How much entropy is there, and what can we use that for? How much computation can be achieved? And so on and so on. So I’m trying to see where the limits are.
Now, the problem is, of course, predicting the future is very hard, especially since we don’t know what future people will have as goals. So we can only say, inside this vast space of possibilities, some things that they will achieve. And it sometimes is interesting to know, are we close to the edges? Are we very constrained? Or is it that there are some directions where we could just do a lot more?
Rob Wiblin: I suppose the book, at least the parts that I’ve read, are quite heavy on physics and engineering and chemistry, because you’re trying to see — within the bounds of science, within the bounds of what we understand about the universe — what would be the very best possible outcome from various different points of view.
So if humanity just stayed on Earth for a long time, what might be accomplished in the billion years that we have left? If we could get to other places in the galaxy, then what might we be able to accomplish? And is that possible? Is it actually possible to spread out to the galaxy? And what might that look like? What kind of technologies would you use, and how would you get energy, and for how long? All of these kinds of things. So if you’re into speculative science fiction stuff, or hard science fiction, then I think that this is going to massively scratch the itch for you.
Anders Sandberg: I’m hoping to make it at least useful for a science fiction author so they can look up “What’s the performance of my wormhole or my space drive?” To check in a table in chapter 20: yes, it’s on line five.
Rob Wiblin: Yeah. I think this might be the academic book that launches 1,000 science fiction novels. A question that came in from a listener, though, is: “What’s the decision relevance of Grand Futures for the typical 80,000 Hours Podcast listener? Is this mostly just a fun exercise that might be relevant to people in a million years’ time, but not so relevant now?”
Anders Sandberg: I think that’s a very important question. I think one reason I started on this was I realised we need to write about hope. A lot of my research is about existential risk and global catastrophes and other dreadful things. Quite often journalists ask me, “But how do you sleep at night?” And I usually explain actually quite well, because I’m thinking that I’m doing my part to reduce some of this risk. But the deeper answer is that I’m really optimistic. And you have to be optimistic about the future to want to save it. If the future actually could be very grand, we have a very good reason to save it. But there is another decision-relevant part, and that is: What do we need to achieve different forms of grandness? What kinds of values are at stake?
So there are some forms of ticking clocks in this universe. We’re kind of running out of certain resources. Deuterium, for example: most of that was made in the Big Bang and it’s mostly being consumed now. Eventually, we might run out of it. The galaxies are moving apart because of the expansion of the universe, and in a few hundred billion years, you will not be able to reach other galaxy clusters. If you need to do that, or if you want to do it, if there is some value in going to these other galaxy clusters, the clock is ticking and you actually need to get your act together. Of course, it’s a rather slow tick, so it’s not super urgent. But it’s interesting to investigate this domain, because there might be other clocks that are ticking faster. So knowing how much time and space we have to move inside is actually quite valuable.
I also think there are many nontrivial questions about coordination. There are some problems that we probably need to decide on relatively early on, to set up standards and practices so when we expand outwards, whether that is in time or space, we actually have ways of keeping things together. After all, if you have an intergalactic spacefaring civilisation, one side will not be able to tell the other side what it has found, or the deals it made with some aliens it found, until a billion years later — at which point you might end up with very inconsistent negotiations.
Similarly, there might be very long-term projects where we want to transmit information to the very far future. How do we do that? And can we set up contracts so I can make a deal? Like, “Wake me up in a billion or a trillion years, when the universe is more to my liking,” and I can be fairly certain that I do get woken up at the right time?
So the book is really about these big things. What does it take to achieve them? And that then allows us to focus on the things we really care about, what we need to start researching or constructing now. Quite a lot of it is going to be making the tools to make the tools to make the tools, or making the instrument to figure out some of the big questions so then we know what tools to make.
Potential amazing futures [00:10:54]
Rob Wiblin: Let’s talk about some of the things that you discuss, some of the questions that you try to answer in Grand Futures. There’s so much stuff in this. The draft is 1,400 pages, so there’s an insane amount of stuff in there. And I must admit, I didn’t actually get to read the entire thing. I think we’re just going to kind of jump around all over the place.
Anders Sandberg: It’s more fun that way.
Rob Wiblin: It’s more fun and also would have been a lot of work trying to figure out how to structure this in a cleverer way.
The first one is: What are some futures that you think could plausibly happen that are amazing from various different points of view?
Anders Sandberg: One amazing future is humanity gets its act together. It solves existential risk, develops molecular nanotechnology and atomically precise manufacturing, masters biotechnology, and turns itself sustainable: turns half of the planet into a wilderness preserve that can evolve on its own, keeping to the other half where you have high material standards in a totally sustainable way that can keep on going essentially as long as the biosphere is going. And long before that, of course, people start taking steps to maintain the biosphere by putting up a solar shield, et cetera. And others, of course, go off — first settling the solar system, then other solar systems, then other galaxies — building this super-civilisation in the nearby part of the universe that can keep together against the expansion of the universe, while others go off to really far corners so you can be totally safe that intelligence and consciousness remains somewhere, and they might even try different social experiments.
That’s one future. That one keeps on going essentially as long as the stars are burning. And at that point, they need to turn to actually taking matter and putting it into the dark black hole accretion disks and extracting the energy and keep on going essentially up until the point where you get proton decay — which might be curtains, but this is something north of 10^36 years. That’s a lot of future, most of it long after the stars had burned out. And most of the beings there are going to be utterly dissimilar to us.
But you could imagine another future: In the near future, we develop ways of doing brain emulation and we turn ourselves into a software species. Maybe not everybody; there are going to be stragglers who are going to maintain the biosphere on the Earth and going to be frowning at those crazies that in some sense committed suicide by becoming software. The software people are, of course, just going to be smiling at them, but thinking, “We’ve got the good deal: we’ve got this infinite space we can define endlessly.”
And quite soon they realise they need more compute, so they turn a few other planets of the solar system into computing centres. But much of the cultural development happens in the virtual space, and if that doesn’t need to expand too much, you might actually end up with a very small and portable humanity. I did a calculation some years ago that if you actually covered a part of the Sahara Desert with solar panels and used quantum dot cellular automaton computing, you could keep mankind in an uploaded form running there indefinitely, with a rather minimal impact on the biosphere. So in that case, maybe the future of humanity is instead going to be a little black square on a continent, and not making much fuss in the outside universe.
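A back-of-envelope version of that Sahara calculation can be sketched in a few lines. Every number below is an illustrative assumption, and a deliberately conservative one: Anders’ original estimate assumed far more efficient quantum dot cellular automaton hardware, not biological-parity power draw.

```python
# Rough estimate of how many emulated minds a desert solar farm could power.
# All figures are illustrative assumptions, not Sandberg's original numbers.
AREA_M2 = 1.0e11         # assumed panel area: a square ~316 km on a side
                         # (the Sahara is roughly 9e12 m^2, so ~1% of it)
INSOLATION_W_M2 = 250.0  # assumed time-averaged desert solar flux
PANEL_EFFICIENCY = 0.2   # assumed photovoltaic efficiency
WATTS_PER_MIND = 20.0    # assumed power per emulated mind (biological-brain parity)

power_w = AREA_M2 * INSOLATION_W_M2 * PANEL_EFFICIENCY  # ~5e12 W
minds_supported = power_w / WATTS_PER_MIND              # ~2.5e11 minds
```

Even with brain-parity power draw, the assumed square supports a few hundred billion minds, which suggests the “little black square on a continent” picture is not obviously energy-limited.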
I hold that slightly unlikely, because sooner or later somebody’s going to say, “But what about space? What about just exploring that material world I heard so much about from Grandfather when he was talking? ‘In my youth, we were actually embodied.'” So I’m not certain this is a stable future.
The thing that interests me is that I like open-ended futures. I think it’s kind of worrisome if you come up with an idea of a future that is so perfected, but it requires that everybody do the same thing. That is pretty unlikely, given how we are organised as people right now, and systems that force us to do the same thing are terrifyingly dangerous. It might be a useful thing to have a singleton system that somehow keeps us from committing existential risk suicide, but if that impairs our autonomy, we might actually have lost quite a lot of value. It might still be worth it, but you need to think carefully about the tradeoff. And if its values are bad, even if it’s just subtly bad, that might mean that we lose most of the future.
I also think that there might be really weird futures that we can’t think well about. Right now we have certain things that we value and evaluate as important and good: we think about the good life, we think about pleasure, we think about justice. We have a whole set of things that are very dependent on our kind of brains. Those brains didn’t exist a few million years ago. You could make an argument that some higher apes actually have a bit of a primitive sense of justice. They get very annoyed when there is unfair treatment. But as you go back in time, you find simpler and simpler organisms and there is less and less of these moral values. There might still be pleasure and pain. So it might very well be that the fishes swimming around the oceans during the Silurian already had values and disvalues. But go back another few hundred million years and there might not even have been that. There was still life, which might have some intrinsic value, but much less of it.
What I’m getting at with this is that value might have emerged in a stepwise way: We started with plasma near the Big Bang, and then eventually got systems that might have intrinsic value because of complex life, and then maybe systems that get intrinsic value because they have consciousness and qualia, and maybe another step where we get justice and thinking about moral stuff. Why does this process stop with us? It might very well be that there are more kinds of value waiting in the wings, so to say, if we get brains and systems that can handle them.
That would suggest that maybe in 100 million years we find the next level of value, and that’s actually way more important than the previous ones all taken together. And it might not end with that mysterious whatever value it is: there might be other things that are even more important waiting to be discovered. So this raises this disturbing question that we actually have no clue how the universe ought to be organised to maximise value or doing the right thing, whatever it is, because we might be too early on. We might be like a primordial slime thinking that photosynthesis is the biggest value there is, and totally unaware that there could be things like awareness.
Rob Wiblin: OK, so the first one there was a very big future, where humanity and its descendants go and grab a lot of matter and energy across the universe and survive for a very long time. So there’s the potential at least, with all of that energy, for a lot of beings to exist for a very long time and do all kinds of interesting stuff.
Then there’s the very modest future, where maybe we just try to keep our present population and we try to shrink our footprint as much as possible so that we’re interfering with nature or the rest of the universe as little as possible.
And then there’s this wildcard, which is maybe we discover that there are values that are totally beyond human comprehension, where we go and do something very strange that we don’t even have a name for at the moment.
In the first one, the big future, what sort of stuff might people do in this very big future with all kinds of beings? Are there any maybe underrated or underappreciated options for what people could do with all of that time?
Anders Sandberg: I think one underappreciated thing is that if we can survive for a very long time individually, we need to reorganise our minds and memories in interesting ways. There is a kind of standard argument you sometimes hear if you’re a transhumanist — like I am — that talks about life extension, where somebody cleverly points out that you would change across your lifetime. If it’s long enough, you will change into a different person. So actually you don’t get an indefinitely extended life; you just get a very long life thread. I think this is actually an interesting objection, but I’m fine with turning into a different future person. Anders Prime might have developed from Anders in an appropriate way — we all endorse every step along the way — and the fact that Anders Prime now is a very different person is fine. And then Anders Prime turns into Anders Biss and so on — a long sequence along a long thread.
But a more plausible thing that might happen if you have these resources is that you actually expand your memory. You can remember your childhood, you sometimes reorganise yourself, you become a sequence of different beings that have the right kind of memories and relationship across time. And this probably has to grow, otherwise if you’ve got a finite state space, eventually you’re going to just keep on repeating. So that is one thing: you actually would have self-design happening over very vast periods of time.
But another activity you might want to do is actually spreading life and complexity across the universe. And there is this interesting thing about both seeing what’s out there and then actually spreading the right kind of thing there. So we can totally imagine putting Earth-like life on terrestrial planets that don’t have it, and that’s presumably going to turn into biospheres a bit like our own.
But what about the other worlds? There might be other forms of complexity that could exist, that don’t yet exist. Could we design life that works in liquid nitrogen? It might be that emergence of life doesn’t naturally happen on those kinds of liquid nitrogen worlds, but we might be able to make it happen. If you think about Mercury, it’s not an environment that’s hospitable to any kind of life we know. But you could imagine making robots that are solar powered, that mine the surface, and you make a kind of robotic ecosystem.
I can totally imagine a future civilisation doing that as an art project or as a spiritual project, thinking that we need that complexity. We also design these self-replicating robots to actually be able to evolve. Normally, when you have your replicators around you, you definitely don’t want them to evolve and do stuff on their own. But in this case, you might actually want to have it as freely evolving as life.
There are interesting ethical questions here of course. Some environmental ethicists argue that even abiotic environments deserve respect, that actually the kind of shaped complexity that exists in a sand dune on Mars matters quite a lot, and it’s not improved when humans put their footprint on it. The problem is, of course, that the footprint from the boot of one of the settlers of Muskville, Mars is also shaped complexity: it’s the kind of really weird outcome of the evolution of brains on the African savannah that eventually ended up producing spacecraft, rockets, and space settlement.
So it’s not entirely obvious how to play these different aesthetics and ethical values against each other, which might actually be a very big activity. I can imagine environmental impact debates about this megascale engineering and other projects being run super fast by superintelligent future entities, and they’re still going to be spending quite a lot of time trying to do the right thing, which might still elude them. It might still be way too much politics.
It’s a little bit like one little problem we’re going to run into in 10^20 years: if the stars in the Milky Way keep on rotating like we do right now, they are occasionally running close to each other. So you get two-body encounters, and typically that means a random exchange of velocity. And that means that sometimes a star gets escape velocity and actually leaves the galaxy — it flies out into outer darkness — and the other star loses energy and moves closer to the central black hole. So over these very vast time spans, the galaxy dissolves: it flings off some stars and loses angular momentum and energy, and other stars end up dropping into the black hole.
So if nobody does anything beyond this point, the galaxy turns very boring. It’s basically a black hole surrounded by a dark matter halo. This is something we probably want to avoid. We might want to nudge the stars into the right kind of orbit. Again, imagine the arguments about what the right kind of orbits are. And there is this more imminent problem of the Andromeda Galaxy coming right at us in about 5 billion years. We need to think about how to handle this merger.
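The ~10^20-year figure can be sanity-checked against the standard two-body relaxation estimate, t_relax ≈ N / (8 ln N) × t_cross, with evaporation taking on the order of a hundred relaxation times. The star count and crossing time below are assumed round numbers, so this is an order-of-magnitude check only:

```python
import math

# Order-of-magnitude check on galactic "evaporation" by two-body encounters.
# N and the crossing time are assumed round numbers for the Milky Way.
N = 1.0e11          # assumed number of stars
T_CROSS_YR = 2.0e8  # assumed crossing time (roughly one galactic orbit)

# Standard relaxation-time estimate: t_relax ~ N / (8 ln N) * t_cross
t_relax_yr = N / (8.0 * math.log(N)) * T_CROSS_YR

# Stars random-walk up to escape velocity over roughly 100 relaxation times
t_evap_yr = 100.0 * t_relax_yr
```

With these inputs the estimate lands around 10^19 years, within an order of magnitude of the figure quoted above, which is about as close as such an estimate can be expected to get.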
Now, these problems are interesting because they’re coordination problems, and they might require great coordination over very long distances, where civilisations might actually be utterly different from each other. So that’s another thing you might do. But I have this feeling that, just like life on Earth is mostly single-celled organisms — most of them are living down in the Earth’s crust or on the seafloor, and then there are a few more advanced organisms, and a few that are making lectures and thinking about “What does it all mean?” — I think this vast future is probably going to be full of a lot of simple creatures and simple minds, and then a few bigger ones, and a few super smart ones.
The super smart ones might even think that they are in charge because they see things most clearly, they know what’s going on. But there are way more minds on the intermediate levels that actually are hopefully having great lives, and they’re actually saying, “This is for us. Yeah, there are some super Jupiter brains running the galactic merger process, but we’re having so much fun in this solar system.” And then there are of course the smaller ones, and hopefully a lot of the counterparts of the microorganisms that actually constitute the vast majority of consciousness in the universe, that might not even have a clue what’s going on, but we’re still having a good life.
I think this is what happens if we extrapolate the current distribution in the Earth’s ecosystem. It’s not obvious that this has to be the case. Maybe it’s easier to merge everything together into one gigantic galactic supermind. Maybe the Borg Collective, where everything is part of everything, is the natural outcome. I think that sounds very unlikely, because it’s a very inefficient way of organising any form of information processing. You generally want to compartmentalise just because of computational complexity reasons.
I think there is a natural tendency to get this unequal distribution of how big the minds are. But the strength of communication, how they flicker between them, and even the kind of social aspects of this might be utterly different. Just think about the way corporations today relate to each other. Some are doing mergers, some are forming consortia, some of them are subcontractors, some of them are clients and so on. They form complicated supply chains, some of them form networks and lobbying groups, some of them even form cartels and conspiracies. I think we could perhaps see some similar complexity going on among future minds.
And that suggests to me that the future is going to be rather full of events. It’s very easy when you sketch a grand future to lose sight of that. There are going to be beings there, there are going to be individuals, there are probably going to be people — even though these people might be rather weird people. And there is going to be stuff happening on the local scale, corresponding to daily life events. It’s just that we can’t see very clearly what these daily life events are, but they’re probably going to be the counterpart of Mrs Brown going to fetch a cup of tea, even though this might be in a vast posthuman future.
Many people who are kind of put off from ideas of Grand Futures say, “There’s nobody there.” And it’s true: My book doesn’t deal very much with the posthuman Mrs Brown in a galaxy far away, because I can’t say anything about what she likes and what she does, or whether she’s a she, et cetera. But we can see that there is potential for futures that have room for that kind of complexity, that local detail. I personally don’t think these kind of slim futures, where everything is unified and identical everywhere, are particularly likely. I don’t think they’re stable, but that’s an interesting conjecture.
The (far) future of war [00:26:58]
Rob Wiblin: Yeah. Speaking of which, in the book you talk about violence and war a bit. If the galaxy is mostly settled, do you think there is likely to be or could be wars? And if so, what do you think that might look like?
Anders Sandberg: Yeah, I think it’s an interesting problem: Why does anybody go to war? There is actually a serious debate about the rationality of war and there is serious disagreement about the motives. It’s a bit unclear to me whether you would see advanced civilisations going to war. I think you sometimes can sketch out possibilities. You could imagine the radical negative utilitarian civilisation not wanting that other civilisation to have a lot of resources because they are actually causing pain and suffering, even though they are saying that on average they’re making things better. They would have a reason to try to remove resources from that pain-inducing civilisation, and they would make very bad neighbours.
Now, the really interesting question is: How much is there an attacker-versus-defender advantage in this kind of advanced future? Right now, if somebody’s sitting on Mars and you’re going to war against them, it’s very hard to hit them. You don’t have a weapon that can hit them very well. But in theory, if you fire a missile, after a few months, it’s going to arrive and maybe hit them, but they have a few months to move away. Distance actually makes you safer: if you spread out in space, it’s actually very hard to hit you. So it seems like you get a defence-dominant situation if you spread out sufficiently far. But if you’re in Earth orbit, everything is close, and the lasers and missiles and the debris are a terrible danger, and everything is moving very fast.
So my general conclusion has been that war looks unlikely on some size scales but not on others. It might be that as you move out into space, it becomes at first much harder. But then you learn how to move better over interstellar distances, which means that each solar system is actually easily accessible. And it’s hard for a solar system to contain several parties that fight each other. Once you reach the galactic scale, it might again take so much time to set up a conflict. But again, it might vary. It’s very unclear, and it actually depends partially on physics.
On the largest scales, the universe looks very defence-dominant simply because everything is moving slowly apart from each other, so you can’t even send light signals telling other parts of your civilisation, “We declared war on the Zorgons.” So it might be that the universe at the very largest scale is very peaceful. Even a doomsday weapon like false vacuum decay is only a local problem; it cannot actually destroy everything simply because everything is expanding apart.
But there is a good question: Why do entities go to war?
Rob Wiblin: If complex life or complex organisms spread through the universe, do you think they’ll mostly be biological like us, or mostly machines of some type? To me it seems like it has to be machines, because machines are just so much more flexible in the environments that they can occupy and the kind of energy sources that they can use relative to anything biological and anything that we could plausibly make using our current biological style.
Anders Sandberg: I think it’s true that with technology, by definition, you can change the basis for it to fit whatever environment you want. So if you need something that works in a high-temperature environment, you start using ceramics and diamond. But you can also probably build robots out of ice on the outer planets in the solar system. Indeed, you can put dopants into ice to make it electrically conductive. People have demonstrated a fairly primitive electrical motor that could probably be built out of ice and a few pieces of metal. So you could imagine adapting technology to very different environments.
And my feeling is that while humans and life might very well go out and settle the solar system, travelling over interstellar distances generally doesn’t seem to be very good for our kind of life. If you want to get there fast, you need a rather small spacecraft that can be accelerated to enormous velocity and handle a lot of radiation: not good for life. This is where you want to send your nanomachines. You can imagine slightly bigger things using solar cells — but again, not very good for getting people over. You could imagine a generation ship, but they’re super unwieldy and still require horrifyingly big engines.
You might have island-hopping between comet nuclei in the Oort cloud. You could actually use local resources there to build space habitats that are self-sustaining, use fusion from deuterium in the ice. And you could imagine a very slow diffusion of biological humans that way across the Milky Way. But that’s a very slow thing. If anybody invents robots and sends them up at relativistic speeds, these island-hopping humans in the comet cloud will always find the robots taking over all the solar systems they arrive in.
Rob Wiblin: Yeah, I’ve said this before, and maybe the best response I’ve gotten is: Couldn’t you send very complex machines very quickly to other star systems, and they would then unpackage themselves and start replicating, and they could redesign a very complex civilisation over there. And then at some point, they could use instructions that they’ve brought with them on how to reconstitute human beings, if they had a sufficiently advanced level of technology. Maybe you just sent over a few embryos or something for them, and then you kept it in perfectly cold storage. And then at some point, they can actually rebirth humans over there, and the humans never really had to actually travel there except in the form of a few cells. Does that make sense?
Anders Sandberg: It makes total sense. I think that’s the most likely way we get biological humans in other solar systems. And it also raises this interesting question: What is the crucial technology here? And it turns out that it’s not just that you need this artificial womb to allow the humans to develop, but also the nannybot — a robot that can actually act as a good parent so you get humans growing up. This is nontrivial, but if you have good enough general intelligence, it seems doable. Quite often, people thinking about settling space think that we need to use these artificial wombs, but they are not doing most of the work. It’s the nannybots that really matter.
Rob Wiblin: Why is that? What are the nanobots doing? I don’t get it.
Anders Sandberg: Well, once you have a newborn infant, they still need somebody to care for it. They need somebody to talk to it, to actually show it how to grow up as a person.
Rob Wiblin: But why do they have to be nanobots?
Anders Sandberg: Sorry, not nanobots. Nannybots.
Rob Wiblin: Oh, nannybots.
Anders Sandberg: The nannies. The nanobots might be doing the low-level stuff, but it’s the nannies that you really care about.
Rob Wiblin: On the offence/defence balance, I think I read a thing from Gwern some years back, where he was supposing that possibly offence could be stronger than defence in wars between different systems or wars across space, because from one location you could plan your war and then fling some object incredibly quickly at a planet, or at beings that are somewhere around a different star, or just at beings anywhere. And then if you fling it fast enough, then they don’t have very much time to react, and wouldn’t be likely to see this thing coming. Of course, a relatively small asteroid that hits Earth would just be potentially absolutely devastating and destroy almost all complex life on it. What do you make of this possibility that there could be a pro-offence bias in wars in space?
Anders Sandberg: That worry has shown up in a number of papers, and indeed even been used by some authors to say that’s why we must not go to space: because once we spread out, we get this security dilemma and since there is a strong offensive balance, we are all going to be shooting at each other, and this is horrible. So that’s why we must keep on the same planet forever.
I don’t think that is valid. Those relativistic asteroids, they could be very devastating. They’re very hard to see coming, of course, because they’re moving so fast, but you need to aim them at something where they do damage. So if you’re sitting on a planet, you’re very vulnerable, because it’s very predictable where that planet is going to be when the asteroid comes into the solar system. If you’re in a space habitat orbiting the planet, it’s already slightly tricky, because you sometimes do stationkeeping and you might be moving around — so if you’re 100 metres away from where you were supposed to be, the asteroid might actually totally miss you. And if you’re in a spacecraft that can manoeuvre freely, you’re even more impossible to hit, because space is very big.
So I did some calculations in the book for using Dyson spheres as weapons. After all, you get the total energy output of a star, and you can imagine putting phased-array lasers on the surface of a Dyson sphere to get essentially the ultimate magnifying glass, like attacking a poor ant. And you can literally boil off planets many light years away. This is devastating, but the people on the spacecraft kind of navigating around randomly: you can’t see them because you’re light years away. Even if you have a good telescope on your Dyson sphere, you’re only going to see where they were literally several years ago. So you have no clue where to aim your giant laser. And for it to be really devastating, it needs to be fairly narrow. If you just shine your general starlight at the entire solar system, they’re just going to say, “That’s a beautiful bright star in the sky” — that’s not doing any damage.
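To put rough numbers on that “ultimate magnifying glass,” here is a back-of-the-envelope sketch of the diffraction-limited spot size of a phased array spanning a Dyson sphere. The wavelength, aperture, and target distance below are illustrative assumptions of mine, not figures from the book:

```python
# Diffraction-limited spot of a phased-array emitter (Airy disc approximation).
# All parameters are illustrative assumptions.
LY = 9.4607e15   # metres per light year
AU = 1.496e11    # metres per astronomical unit

wavelength = 500e-9   # visible light, m
aperture = 2 * AU     # array spanning the diameter of a 1 AU radius Dyson sphere
distance = 4 * LY     # roughly the distance to the nearest stars

# First-minimum Airy disc diameter: ~2.44 * wavelength * distance / aperture
spot = 2.44 * wavelength * distance / aperture
print(f"spot diameter at target: {spot:.2f} m")
```

With these numbers the spot is well under a metre across even at interstellar range, which is why the beam can be devastating, but also why you would need to predict the target’s position years in advance to metre precision: trivial for a planet, hopeless for a craft manoeuvring at random.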
So one of the interesting things I found while writing the book is that the vast distances in space actually seem to do work. They actually make the universe much safer, but you need to be mobile. Just sitting in the same spot means that you’re fairly vulnerable. And if somebody has inside information and maybe can direct something like a missile that is actually targeting you, you might still be in trouble. So there is this deeper question: What’s the ultimate missile? I haven’t figured that one out yet.
But there’s still certainly a lot more things one can estimate. I do think we can actually get the physics of the limits of war. There seem to be tradeoffs, for example, between information and the amount of destructive energy you need to use. If I know exactly where you are and what you think and what you believe in, I can send you a nice letter and convince you to be on my side. If I know a little bit less, I can at least fire a laser exactly at your heart. If I know a bit less, let’s fire off a machine gun in the general direction. If I know even less, I need to blow up your general vicinity. I need to use more and more energy in a very undirected manner to harm you, while if I have a lot of information, I can do much more. So this is also why I’m less worried about big bangs and explosions and more about information weapons, computer viruses, very clever memes: they might actually be the most dangerous weapons of future warfare.
Rob Wiblin: So you’ve got this issue that if you’re on a planet, or something that’s extremely big and hard to manoeuvre and very predictable, then potentially you could get an offence bias. Although I suppose we could imagine other defences. The thing’s not actually going to be travelling at the speed of light, presumably, so maybe you could have sensors that would detect it. If you can see something early enough, then very minor changes to its trajectory, like firing a laser at it in order to divert it, could be a reasonable defence.
But I guess a general theme running through a lot of your work is that we think about the surface of planets as the habitable place. But in actual fact, most life is probably going to be machines in future, and machines can exist just as well, or maybe even better, out around asteroids, or not inside this very intense gravity well around a planet. The surface of a planet is actually just a very small amount of the surface area that is available across space.
I used to play this computer game called Stellaris, and there was a civilisation called Void Dwellers that lived in space. They were real weirdos with very unusual technology, because I think they couldn’t go onto planets, as their bones were too weak. But yes, Void Dwellers is probably the most likely future for where most life is going to end up, because just in terms of accessing resources and matter, it’s easier to deconstruct a planet than merely live on the surface of it. Plus there’s just way more surface area on asteroids than there is on planets. And if you’re on a small thing, then you can just change your trajectory at any point in time, and you’re extremely hard to target — indeed, probably impossible to target from sufficiently far away.
Anders Sandberg: Yeah, it’s the kind of classic idea in space settlement. Gerard O’Neill famously said, “Is the surface of a planet the right place to have an industrial civilisation?” It’s pretty obvious what he thought the answer was. His vision was these big, nice, rotating space habitats. And they are interesting because you could argue that’s a form of ecosystem, a form of life — it’s a weird technological ecosystem.
And indeed, in some of the writings of space enthusiasts, it has been suggested that this is the next stage of life, literally macro life, which is a kind of symbiosis between a technological system cradling this internal ecosystem that contains humans and plants and other things, but it also acts as an organism. It takes in energy in the form of sunlight or nuclear power, grabs matter in the form of asteroids and then in turn repairs itself using that, and creates copies of itself. Now, you can imagine a more technological version of that without the squishy biological parts, but I just generally would put it all under the heading of “life.” It’s something that can replicate itself. It’s something that can evolve either through random evolution or just deliberate design or any other form.
And I think it’s also likely to try various niches. We are kind of used to the inner solar system, where a lot of volatiles are around the big planets, but on lighter bodies like the Moon and, partially, Mars, they have disappeared. So it’s kind of a dry place — hard to believe, given that we’re on Earth, but generally it’s fairly dry — and there is a lot of energy, and you also get easy access to metals. Go outside the asteroid belt and things get wetter. There is more and more ice, but it’s hard to get at metals because they’re mostly buried in the cores of the big giant planets. On the other hand, you can get lots of volatiles, so you can build things out of ice and other materials. When you go to the really far reaches, like Pluto and the Kuiper Belt, again, there’s lots of ice and only a few percent of dust that you can then extract carbon and metals from.
So you get different styles. You could imagine different ecosystems, and also using the different energy densities. If you’re close to the sun, you get a lot of energy very easily, but it’s also hard to keep yourself cool. You might also want to use high-temperature materials. If you’re out in the Kuiper Belt, the energy flow might be fairly weak. But on the other hand, there’s very little gravity and very little stuff. You have a lot of ice. Maybe you can be a filigree structure of ice moving very slowly. And these are the things we can envision right now. They’re going to look pretty silly eventually, once we get out there.
Rob Wiblin: If far greater than human machine intelligence just turned out to be physically impossible for some reason, what fraction of all value that we might get in one of these big futures would be lost, relative to a universe in which that actually was possible?
Anders Sandberg: Somewhat surprisingly, I don’t think it’s necessarily losing out on that much value, unless the real value resides in doing super smart things. Maybe it is that you need to search through these vast spaces of philosophy and science to find the true values and what you truly should do — and if there is no superintelligence, we might be just fumbling around and never finding what we actually ought to do with the universe.
But the kind of automation you need to actually have a big impact on the universe is not that dramatic. I can imagine that you want to have automation that allows you to do engineering and mining and production in space. You want to have self-replicating robots that can pick apart asteroids, and then you can use that to set up Dyson spheres. You can already do a lot of megascale engineering just using a little bit more technology. These robots don’t even need human-level intelligence, I think. You just need them to be like many animals. After all, we have a lot of animals in the world that are building various complex structures and doing a decent job without being particularly interesting conversationalists.
So I don’t think superintelligence is necessary to get a lot of value. Of course, that assumes that value resides in something we do recognise as value. There might also be these weird things that we might then want to explore. And whatever it is that’s limiting superintelligence, we might want to find ways around it, perhaps through biological modification. Maybe it’s not possible to have artificial superintelligence, but maybe biological superintelligence is possible. I’ve certainly heard that view. I think it’s crazy. But in this thought experiment, maybe it’s not crazy and might be worth pursuing.
The biggest impediments to complex life spreading across the universe [00:44:01]
Rob Wiblin: What do you think is the biggest practical or engineering impediment that complex life, including machine intelligence, might face in trying to spread across the galaxy and potentially even between galaxies?
Anders Sandberg: So if you want to go fast, I’m always worried about dust grains. Any relativistic probe moving close to the speed of light running into a dust grain: that dust grain will explode like a small nuclear explosive, and that’s not going to be particularly fun for the spacecraft unless you have a lot of shielding. So the faster you go, the more problems you have from these dust grains. Not to mention that the interstellar gas is actually acting as a proton beam: if you’re moving through it at relativistic speed, it’s almost as if you’re being bombarded by one. So that might put a kind of speed limit.
If there is a lot of gravel out there, you might not be able to go that far before running into something. So you might need to go slower or have more shielding, or you need to use a lot of redundancy. It actually sets a kind of distance limit. It’s amazing when you look at the Milky Way at night, if you’ve got a really clear sky, you see these dark bands, and those are dust clouds. Between you and the galactic centre there are always dust particles: enough to actually hide all the light coming through. Most of that dust is so fine that it’s not much of a problem for a well-built space probe. But there might be more gravel hiding there, and we actually don’t know how much there is because it’s hard to observe. We can observe the fine dust because it’s got such a big total surface area, but the bigger grains are very tricky. There are a lot of assumptions among astronomers about them, but we simply don’t know.
Rob Wiblin: OK, so the tradeoff here is if you go really fast, then whenever you hit a dust particle, it’s more explosive. So you can have a very small thing that goes very fast and you hope it doesn’t hit dust because it’s got such a small surface area; or you can make many different ships and hope that one of them will get through because by chance it won’t hit a dust particle; or you could go slower — so you can have a bigger thing that goes slower and then it’s more likely to hit dust particles, but it’s not so explosive when it hits them.
I suppose the ultimate constraint across these things is determined by the hardness of the shield in front of you. If you could come up with an extraordinary material that was barely damaged or barely deformed by these collisions with dust, even at near the speed of light, then you could go super fast, and you could even have a very big thing going very fast. But that would require some material science that’s better than what we have now.
Anders Sandberg: Yeah. And I think we can even put some constraints on whether it’s ever possible, because this is a typical move I do in the book. We know the strength of molecular bonds: we know roughly how strongly atoms can be linked to each other. So if you want a very powerful shield, and here comes something with a lot of energy, you can calculate how much the bonds can withstand. And it turns out, beyond a certain velocity, you’re not going to win because you get so much more energy in that impinging dust particle, which is not going to exactly behave even like an object hitting armour — it’s more almost like somebody sent a particle beam at it. It’s a small nuclear explosion.
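A minimal sketch of that kind of calculation, with illustrative numbers of my own (the bond energy, grain mass, and speeds are assumptions, not figures from the book): compare the kinetic energy each nucleon of an incoming grain carries against the energy of a chemical bond holding the armour together.

```python
import math

C = 299_792_458.0        # speed of light, m/s
MP_C2_EV = 938.272e6     # proton rest energy, eV
BOND_EV = 3.6            # rough energy of a strong covalent bond, eV (assumption)
TNT_J_PER_KG = 4.184e6   # energy density of TNT, J/kg

def gamma(beta):
    """Lorentz factor for speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

for beta in (0.1, 0.5, 0.9):
    ke_ev = (gamma(beta) - 1.0) * MP_C2_EV  # kinetic energy per nucleon
    print(f"v = {beta}c: {ke_ev:.3g} eV per nucleon, ~{ke_ev / BOND_EV:.1g}x a covalent bond")

# Total kinetic energy of a 1-microgram grain at 0.5c, in kg of TNT equivalent
m = 1e-9  # kg (illustrative grain mass)
ke = (gamma(0.5) - 1.0) * m * C**2
print(f"1 microgram grain at 0.5c: ~{ke / TNT_J_PER_KG:.1f} kg of TNT")
```

Even at a tenth of light speed, each incoming nucleon carries millions of times the energy of any chemical bond, which is why no molecular armour can simply absorb the impact: the surface is effectively being hit by a particle beam, not by an ordinary object.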
So going slow, I think, is another promising thing — but that requires, again, a nonhuman approach. Maybe you have cryogenically frozen people, but if you go really slow, there is a lot of radiation, et cetera. You probably want those robots to do it. But now you need a system that can still work well after a very long time in space.
And I think that is an interesting challenge in engineering: How do we make ultra-reliable systems? Right now reliability is not something we optimise in most situations, because it’s cheaper to buy new stuff. And I think this is going to be true for a long time. We can probably recycle our objects much more readily on Earth than we actually want to have something ultra reliable. But if you can’t recycle it, or rather you need to recycle yourself, because you’re a spacecraft out in interstellar space, now you need things that don’t go wrong very often, or can repair themselves really well and very reliably. And this is a fascinating engineering challenge.
Rob Wiblin: What’s the strongest possible bond that you can get? Is this like a covalent bond, or is it some kind of metal? What’s the limiting factor?
Anders Sandberg: The strongest chemical bond you can get is between carbon and oxygen in carbon monoxide, so that’s kind of pointless because it doesn’t form anything else. But then we’ve got the diamond structure, which is quite close to the limit.
There are ways of cheating. So there is something called Landau atoms, and that happens probably on the surface of neutron stars: when you have very extreme magnetic fields, they make the electron orbitals around the atom turn into cylinders, and they can overlap in really weird ways. There have been some papers arguing that neutron stars actually have little whiskers of iron atoms lining up and forming these super strong chains. There are other papers arguing that this doesn’t happen because of complicated reasons. There might be a few tricks like this. I don’t think it’s great for armour, but it might be great for other weird engineering.
Rob Wiblin: I see. Yeah. It seems hard to maintain that outside a neutron star’s surface. So, excluding exotic materials, do we actually have a decent idea of what the toughest physically possible materials look like?
Anders Sandberg: That’s also a good question. What counts as an exotic material? I would say that molecular matter we understand fairly well. We’re still getting surprises, like superconductors. There is a lot of magic going on once you start organising matter in the right way, as microchips and living beings demonstrate. But when you get to things like quark matter and neutron star matter, the possibilities look bigger, but they’re also very hard to maintain. None of them look like they’re stable at zero pressure, which I think is quite a big requirement for building anything useful in space.
Simulations and grabby aliens [00:49:49]
Rob Wiblin: What do you think are the chances that we’re living in a simulation and why? It’s relevant here because if we’re living in a simulation, then probably these things aren’t going to happen, because things might be reset, or people might come in and tell us that we’re living in a simulation before any of this gets around to happening.
Anders Sandberg: So I don’t think that thinking “we live in a simulation” is useful from a decision-relevant perspective. Because if we are in a simulation, what should we do differently? If we knew that we were in a simulation but had no information about the world that was outside or what the purpose of a simulation would be, you still don’t get any decision-relevant information. If you know that the purpose of a simulation is to simulate this or that, then you might say, “I’m going to sweeten up my simulators and do things that they like” — you actually have something you might do differently. But as long as we don’t have any such information, it doesn’t tell us anything.
The main thing might be that if we think we’re simulated, we should expect a smaller future. But again, that assumes, for example, that computational resources in the outside universe are really limited. Maybe the outside universe is so big that you can simulate a super-civilisation covering our universe, and it’s just a screen saver in that universe. So you have to make assumptions to get any decision out of a simulation argument. And this, in my opinion, means that it’s not actually telling you very much.
The interesting part is if we start making simulations — or even more interesting, encounter aliens that made simulations — we might then have an increased likelihood of thinking that we are in a simulation, either our own or maybe the aliens’ bootleg simulation of humanity to figure us out diplomatically. It can get very weird, but generally I don’t think you get much out of a simulation argument.
It’s also worth noting that Nick Bostrom’s simulation argument has three branches. It says that one out of these three awkward things must be true: Either we’re in a simulation, or we have much bigger existential risk, or posthuman super-civilisations never really simulated the past. These other two legs are not as popular as “we’re living in a simulation,” but I think they’re important — because if you buy the simulation argument, you should be more worried about existential risk, and you should also perhaps be a bit more concerned that maybe future civilisations are very ethical about how they simulate the past or simulate conscious minds. And that might be interesting to inquire into too.
Rob Wiblin: I guess one reason that people raise the simulation idea is that in the context of talking about complex life continuing for trillions of years, and there being these enormous numbers of minds over all of this time, we seem to be living at this really surprising point in the history of the universe. Because if life spreads through the universe, the overwhelming majority of beings will live in this totally different world very far in the future — well, not necessarily that far in the future, but in a world where complex life is spread across most of the accessible universe. So our position will seem shockingly strange and really early. What’s your favourite explanation for how it is that we find ourselves in this unusual and, in some sense, arguably kind of privileged position? Have we just gotten super lucky, or what’s going on?
Anders Sandberg: I think this is a very important and tricky question. It’s also worth noticing that the Stelliferous Era, where there are stars, is going to last maybe 10 to 100 trillion years — and we are in the first 13 billion years. Again, what’s going on here? Why are we really early? There I think you can make an argument that most of the biosphere years you could imagine in the future are going to be around little red dwarf stars that might not be as habitable as we currently think they could be. So maybe actually we are close to peak habitability for organic life in the universe, and we shouldn’t be too surprised about that.
But still, if technological civilisation spreads, then of course those red dwarf stars are going to be totally good real estate. And you could argue that maybe this is evidence that actually nobody’s going to spread across the universe. Actually, this is it. This early part of the Stelliferous Era is where intelligence shows up, and maybe you can’t spread for some weird reason across the universe.
But another interesting answer, which I’m rather fond of, is Robin Hanson’s grabby aliens idea. I’m particularly fond of it because I almost had the idea but didn’t. I had all the pieces — I have a chapter in the book where I’m talking about alien intelligence, various explanations, expansion patterns, and all of that — I had all the pieces laid out in front of me. But Robin actually was the one putting it together, and said if civilisations start spreading out, presumably in the areas where they have spread, new intelligent species don’t arise. It’s just going to be whoever had gone there, and whatever they do. We are not in one of those zones.
Now, if you look at the history of the universe, you have this kind of phase transition of a universe with no intelligent life spreading; a relatively short period where there is a fair bit of intelligent life in transit, expanding out; and then eventually they meet each other and all parts of space are now settled. That means that we are in this kind of weird position that we’re quite close to that limit. And if there are many hard evolutionary transitions to get to intelligence, you should expect intelligence to show up as late as possible in the history of a biosphere. I have some papers to that effect, so I’m totally in agreement with this.
In that case, we should expect to be relatively close to this transition. This transition is still probably billions of years long, so we’re talking astronomical timescales. But I like the grabby aliens argument because it both explains why we haven’t seen any aliens — the aliens that are quiet are hard to see, they’re not expanding, they’re just sitting there enjoying life; and the expansive ones, we haven’t met with them yet because we just started expanding about this time, and we might start noticing them in a billion years or so when we might also be expanding — and this also explains why we are around now.
It still has this big problem: Why aren’t we part of some posthuman super-civilisation after we contacted the grabby aliens in a few billion years? And maybe the answer is that we all form one big group intellect, and out of the trillion human beings that ever existed, the group intellect that exists forever after that time counts as just one of us. The probability of being the group intellect is one in a trillion. So we find ourselves among the more normal, boring humans before contact. That might be an explanation, although I’m not convinced by it. A deeper answer is probably that I’m not certain anthropics is always a reliable guide here. Many of these anthropic arguments are great starting points, but as soon as you get a piece of evidence, that weighs much more strongly than this nice reasoning.
Rob Wiblin: Yeah. I’ll stick up a link to a YouTube explainer video about the grabby aliens idea. It’s basically this idea that at some point the universe does get taken over by aliens or some sort of life expanding and taking over matter very quickly for various different reasons. And we have to come before that because otherwise all of this space would have already been taken over. I think it’s a bit more complicated than that; it’s been a little while since I watched the video.
But in terms of explaining why we don’t live in the post-grabby era inside the grabby alien civilisation, you’re saying that one explanation might be that there’s only a handful of extremely big minds in that scenario. And so, in fact, there are more beings like us that are very small than there are members of this handful of extremely big minds. In fact, it’s not so shocking that most beings end up being more like us.
Anders Sandberg: But in order to avoid that shocking conclusion, now we end up with another shocking conclusion: that the future is full of very big minds.
Rob Wiblin: Right. You were talking about anthropics, which is this question of what you can learn from your existence…it’s a little bit hard to explain exactly what it is.
Anders Sandberg: So the simplest form of anthropics is just observer selection bias. For example, most people doing surveys are academics, because academics tend to do surveys much more often than normal people, which means that you get this weird bias that whoever is making the survey is going to be an academic. Quite often, it turns out that on average, your friends have more friends than you. It’s another funny thing, because we have a skewed distribution of the number of friends we have, and usually we know one or two people who are these extreme networkers and they have more friends than we do, because we’re usually not extreme networkers. So these biases are well known, and a problem when you do science, but they’re not too weird. The thing that gets weird is when your existence might be subject to an observer selection bias.
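The “your friends have more friends than you” effect is easy to demonstrate with a quick simulation. This is a sketch over an arbitrary hub-heavy random graph, purely to illustrate the sampling bias; the graph parameters are made up:

```python
import random

random.seed(0)
n, target_edges = 1000, 3000
edges = set()
# Hub-heavy random graph: low-numbered nodes act as "extreme networkers"
while len(edges) < target_edges:
    a = random.randrange(n)
    b = int(random.paretovariate(1.2)) % n  # heavy-tailed choice favours hubs
    if a != b:
        edges.add((min(a, b), max(a, b)))

deg = [0] * n
for a, b in edges:
    deg[a] += 1
    deg[b] += 1

mean_degree = sum(deg) / n
# Average degree of a randomly chosen *friend*: each person is counted
# once for every friendship they appear in, so hubs are oversampled.
friend_degrees = [deg[b] for a, b in edges] + [deg[a] for a, b in edges]
mean_friend_degree = sum(friend_degrees) / len(friend_degrees)
print(f"mean degree: {mean_degree:.2f}, mean friend degree: {mean_friend_degree:.2f}")
```

The friend average works out to the sum of squared degrees over the sum of degrees, which always exceeds the plain mean whenever degrees vary at all: the same observer selection bias Anders describes, in miniature.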
I have one paper about anthropic shadows, where we point out that if there is a big disaster, like a giant asteroid hitting Earth, it’s very unlikely that an intelligent species evolves just after that, because the ecosystems are in disarray, there is not much life in the biosphere, et cetera — especially if it’s a total disaster and all life gets wiped out. So if you find yourself to be an intelligent observer, you can’t have a giant meteor impact in your recent past. You actually get this weird, magical-seeming effect that you will find that your planet, even if it’s a dangerous universe, has been missed by asteroids recently. And that’s not because your existence has this magic asteroid-deflecting power — it’s just that those rare lucky planets that weren’t hit by asteroids are the ones that have observers; they were randomly lucky places. And those observers get a totally wrong perspective on the world.
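That selection effect can be simulated directly. Here is a sketch under made-up parameters (the impact probability and epoch counts are illustrative, not from the paper): worlds whose recent past happens to be quiet are the only ones that produce observers, and those observers systematically underestimate the true impact rate.

```python
import random

random.seed(1)
P_HIT = 0.3    # true per-epoch probability of a major impact (illustrative)
EPOCHS = 10    # length of each planet's history
QUIET = 3      # observers evolve only after QUIET impact-free epochs

estimates = []
for _ in range(50_000):
    hits = [random.random() < P_HIT for _ in range(EPOCHS)]
    if not any(hits[-QUIET:]):  # observer selection: recent past must be calm
        # the impact rate these observers would infer from their full record
        estimates.append(sum(hits) / EPOCHS)

avg = sum(estimates) / len(estimates)
print(f"true rate: {P_HIT}, rate inferred by surviving observers: {avg:.2f}")
```

The surviving observers infer a rate of about 0.21 rather than the true 0.3: the forced-quiet recent epochs cast a “shadow” over their estimate, exactly the bias the paper warns about.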
It takes a lot of subtlety to think well about these things, and I think most of the time we are slipping up, even academically, when doing it. So I don’t think it’s a good thing to build too much on anthropic thinking. It feels like it should give us a lot, but quite often it’s so rickety that you should prefer almost any other piece of evidence to it.
Chances that aliens have actually visited Earth [00:59:51]
Rob Wiblin: A listener wrote in with this question for you: “There are surprisingly credible sources claiming that the US government knows that there are aliens visiting Earth, and more generally, the idea of unidentified aerial phenomena (UAP), that’s entered the mainstream. Congress has actually been holding hearings about this, which would have seemed a bit crazy 10 years ago. Have you looked into any of this? And what do you think are the chances that life from other solar systems might have visited Earth?”
Anders Sandberg: I looked a little bit into it, and I’m not particularly convinced. So, UAPs: Why are we seeing these blurry, weird things? There could be a lot of different reasons for that, and people immediately latch on to one possible explanation: It’s aliens. Why aren’t they talking about angels, or superintelligent squid from the bottom of the ocean? There is a very long list of possible explanations, including the super boring: there are optical effects in the complex lens systems on modern warplanes.
In some cases, footage of UAPs has turned out to have very weird natural explanations. Like in one case, it was a Batman-logo-shaped balloon up among the clouds. What’s the probability of even seeing that from a plane? That’s kind of low. There is a lot of strange random stuff. So when you see something strange, you need to update your beliefs. And if you try to be a good Bayesian about it, you need to check which hypotheses this is compatible with. So if I see a blurry spot of light moving very fast, that fits with aliens having a super-advanced spacecraft, but it also fits quite well with some weird problem with my optics — as well as a long list of the other weird possibilities, ranging from the squid to my actually hallucinating.
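To make Anders’s Bayesian point concrete, here is a minimal sketch in Python. All the priors and likelihoods are made-up illustrative numbers, not estimates from the conversation; the point is just that evidence compatible with many hypotheses barely moves a tiny prior.

```python
# Toy Bayesian update for a "blurry fast-moving light" sighting.
# Every number here is an illustrative assumption.
priors = {
    "alien craft":      1e-6,
    "optical artefact": 0.20,
    "balloon/debris":   0.30,
    "hallucination":    0.01,
    "other mundane":    0.489999,
}
# P(blurry fast light | hypothesis): the observation is compatible
# with almost everything, which is why it is such weak evidence.
likelihoods = {
    "alien craft":      0.9,
    "optical artefact": 0.5,
    "balloon/debris":   0.3,
    "hallucination":    0.4,
    "other mundane":    0.1,
}
unnorm = {h: priors[h] * likelihoods[h] for h in priors}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}
# Even with a high likelihood under "aliens", the posterior stays
# tiny, because the prior is tiny and mundane hypotheses fit too.
print(posterior["alien craft"])
```

A little green man on the lawn would be different: that observation has a near-zero likelihood under the mundane hypotheses, which is exactly what makes it strong evidence.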
Now, if I see a little green man on my lawn telling me, “Take me to your leader,” suddenly a lot of those other explanations go away. Not all of them. The probability of me going crazy is still embarrassingly high. So I should probably ask my friends, “Do you see that little green guy too?” And if they all agree, then the probability of all of us going crazy simultaneously is low. There is still some possibility for a prank or something, but you need rather specific evidence. Seeing weird things moving around doesn’t tell us very much. And I think, unfortunately, we latch on to this explanation.
The fact that there are hearings and there are surprisingly credible sources saying this, I think these credible sources are an interesting thing to check. How likely is it that they know what they’re talking about? Because there has been a lot of very crazy stuff going on in the US intelligence and military establishment too, driven by people with various bees in their bonnets about particular threats.
So I’m not terribly convinced by this. The really interesting issue is, of course, it’s still not implausible that advanced civilisations exist. And if they wanted to hide, could they hide from us? And I think if you’re an advanced [enough] civilisation and have your act together, you could hide really well. So in that case, why would we be seeing blurry things moving around? On the other hand, you could also imagine that maybe you had an advanced civilisation, but there are teenagers taking the saucer out for a spin — and they are trying to keep a non-interference activity going, but there are these people messing around, which would of course also explain a lot of the stupidities with many of these UAP observations.
But I don’t think that sounds super plausible, actually. I think it’s a bit more binary than that. Still, I think it’s worth recognising that the world is strange, and full of a lot of unlikely and strange things. The bigger our world gets, the more things will happen out of sheer randomness that seem simply unbelievable. So it’s going to be hard to filter all of this.
Rob Wiblin: Yeah. I’d assumed that it had to be the instruments malfunctioning or people having optical illusions or hallucinations or whatever. That just seemed like the obvious answer. The thing that made me do a double take is people claiming that you’d have multiple pilots in different planes seeing the same thing, or multiple different independent instruments would all be showing an object moving at some insane speed. And then you’re like, wow. It’s strange. It calls for some investigation or some explanation, more than just one person seeing something, which on its own is not very credible.
Anders Sandberg: Yeah. And that multiple-witness thing is important. It gets back to Hume’s old discussion about should we believe in miracles? And he essentially made the same argument I did, but then noted that if you have a lot of credible witnesses that all see this — and especially if some of the witnesses don’t even want to believe this, but they still are forced to conclude that they saw this weird thing — now we have a reason to update.
Rob Wiblin: The funny thing is, it’s not as if the idea of aliens having come to the solar system or to the Earth is, a priori, such a strange thing. What is strange is that we haven’t seen massive signs of that, and that all we would see is this tiny sign of it. If there were tonnes of aliens around, we’d be like, obviously there’s just aliens; why would we think that there wouldn’t be, if there’s life all over the place? But why would it be this tiny amount, where we just occasionally see these craft? That’s the thing where it’s so hard to come up with a great explanation for why that would be the situation.
Anders Sandberg: Yeah. I think it’s also worth noticing that there are weird things out there that we’re sometimes missing. One of my favourite examples is sprites and elves above thunderstorms. What happens when you have big lightning strikes is that you actually get plasma clouds going up into the ionosphere, forming various weird red patterns. And they have been observed for a long time by airline pilots, and ignored, because it was bad for your career if you reported weird stuff showing up on top of lightning storms — that was a sign that you were hallucinating, and now your licence could disappear.
So everybody more or less agreed in the aerospace world not to see those things, until eventually astronomers, astronauts, and amateur photographers started getting credible photos of them. And now they’re kind of a mainstay, established as a real thing. We can do science on them. I found it interesting that here we had this code of silence about something people were obviously seeing, and they must even have told each other, “Yeah, you’re not seeing that thing” for a long time. I have a feeling that there is plenty of other stuff like that in the world that we’re just missing as a civilisation.
The lifespan of civilisations [01:06:24]
Rob Wiblin: A listener wrote in with another question that’s a bit related to the book. It was a question of do civilisations eventually decay and become more likely over time to break apart. “I saw that you’d published a book chapter titled ‘The lifespan of civilizations: Do societies “age,” or is collapse just bad luck?’ but I couldn’t get the book. What’s the answer? Do societies get more likely to collapse the longer they last for?”
Anders Sandberg: I don’t think so. And that is actually the point of that chapter, which is a spinoff from my big book, because when I was going through the calculations of how to move galaxies and do all of this stuff, I realised that maybe the big limitation here is not physics, but society. If you need to have a project team that keeps the move of a galaxy going for a billion years, how likely is that to last? I mean, most organisations don’t last very long in the present.
And indeed, if civilisations inexorably collapse after a while because they age and become decadent, then maybe that is the fundamental limitation on how grand a future we could possibly have. So I started reading macrohistory, and realised macrohistorians make very compelling stories about why civilisations rise and fall and why history has a certain shape, but they’re all different and they’re all kind of contradictory. So I became a bit nervous about trusting any of them.
So then I just took a lot of data and started doing curve fitting to try to see the survival curves. And the best fit I could find for civilisations was exponential decay. There is a kind of time constant for how long a civilisation is likely to be around: a kind of half-life for civilisations. But the risk of a civilisation collapsing doesn’t seem to increase with time, which is the important part. If there was some kind of decadence building up, or maybe some environmental debt or something else, then you should expect that over time it became more likely that it crashed.
Or there might be some childhood disease of civilisations: that when they first show up, they have a high likelihood of crashing. We don’t see that. That might partially be a selection bias: we don’t think of stuff that crashed immediately as a civilisation. But this seems to apply also to other forms of polities, like kingdoms in Europe and various political states. In the case of corporations, it’s kind of well known that they also have a fairly constant hazard rate, except for the startup phase, where they’re very vulnerable. It’s fairly constant, except for the very oldest corporations in the world, which tend to be very stable: typically a Japanese inn at a hot spring, or some brewery, that exploits a resource people will always want to have.
So using this data, my conclusion seems to be that civilisations probably collapse because of bad luck, rather than because something bad is building up. Now, that is still an interesting open question: Why do we have this bad luck? Is it just that it’s very unlikely events that conspire to bring things down, or is it that there is something intrinsic? And even worse: of course bad luck is rather hard to defend against. You can imagine a Dyson sphere covered with rabbits’ feet and horseshoes, hoping to ward off bad luck. But that’s unlikely to work. Probably the best way of warding off bad luck is having multiple copies, having backup civilisations — and if one crashes, the other ones shake their heads, pick up the pieces, and resettle that part of space.
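A quick way to see what a constant hazard rate implies is to simulate it. The half-life below is an arbitrary illustrative number, not Anders’s fitted value; the key property is memorylessness: civilisations that have already survived a long time are no more likely to collapse soon than young ones.

```python
import math
import random
import statistics

random.seed(0)
HALF_LIFE = 300.0                    # years; illustrative only
hazard = math.log(2) / HALF_LIFE     # constant per-year collapse risk

def lifespan():
    """Sample a civilisation lifespan under a constant hazard rate."""
    return random.expovariate(hazard)

lifetimes = [lifespan() for _ in range(100_000)]

# Memorylessness: among civilisations that already survived 500 years,
# the *remaining* lifespan has the same distribution as a fresh one.
survivors = [t - 500 for t in lifetimes if t > 500]
print(statistics.mean(lifetimes))   # ~ half-life / ln 2 ≈ 433 years
print(statistics.mean(survivors))   # ~ the same: no "ageing"
```

If decadence were building up (an increasing hazard, as with human ageing), the survivors’ remaining lifespans would come out systematically shorter than the fresh ones.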
Rob Wiblin: Yeah, it’s a super interesting question. I guess we’re used to the analogy with humans, where over time we get more and more likely to die because our bodies are not sufficiently good at repairing and regenerating themselves. Over time, the repair mechanisms break and then the damage starts to accumulate at an ever increasing rate, and so you become quite likely to die of old age between 70 and 100.
Anders Sandberg: Yeah, and we like making that analogy. A lot of people talk about the flourishing of civilisation or a young civilisation or an old civilisation — and we quite often anthropomorphise societies and civilisations way more than is good. Rousseau was talking about “diseases of civilisation,” and he was literally thinking that some bad things in society were like a literal disease in the body of a civilisation. Once you start thinking like that, of course ageing seems to be reasonable. But it’s worth noting that a lot of multicellular life doesn’t age.
Rob Wiblin: Yeah. I think a lot of people have this natural intuition that societies, over time, get gunked up with rubbish processes. And there’s a bit of a morality aspect to this that’s like, they become too conservative and they can’t change with the times. And you definitely hear this about companies as well: that old companies, the management can’t keep up, and so things are more likely to crash. But it sounds like, in actual fact, the regenerative processes are roughly balanced with the degenerative ones. In fact, they don’t age like humans, and they’re not particularly likely to die after some specified period of time.
Anders Sandberg: Yeah. And there are still interesting questions here to answer. It seems like rules would grow over time. Most of us complain about our organisation being rather sclerotic. I’m at Oxford, 800 years of history and we have a rulebook that is literally 10 centimetres thick. But then again, after 800 years, why is it just 10 centimetres? Why shouldn’t it be one metre thick? And the answer is probably that you reach a steady state.
There was actually a study looking at a century of growth in Stanford University’s regulations, and eventually what happens is that you reach this steady state where you’re adding some new rules, but you’re also compressing old rules, as some have become irrelevant. But you still get this big mess, and most people complain about it. We have complaints from the Middle Ages, from kings, that the laws are incomprehensible and not written in plain language: “Can’t you do it in a sensible way?” Well, it didn’t help that the king didn’t like it. The laws just kept on being messy. But there are these weird balancing factors.
And other systems might be really good at regenerating, like cities: they seem to be almost better at regenerating themselves than companies. Indeed, there is a surprising lack of cities that have died. There are ghost towns and a few abandoned cities, but a lot of cities are still going. Jericho, which is arguably the oldest existing one, started not very far after the Ice Age, and it’s still full of people.
Rob Wiblin: Yeah, that’s super interesting. I haven’t really thought about that. I guess we all know of cities that have gone into decline, but the benefits of agglomerating people in a given location, and sometimes the benefits of some particular locations where people have chosen to put cities, apparently are sufficiently great to offset some of these degenerative processes that might eventually cause people to leave because things have broken. Because London’s been here for a very long time, where I am.
Anders Sandberg: And London is interesting because, yes, it started out as a small village that the Romans conquered and turned into Londinium, and then it has had its ups and downs and disasters. But of course, the more trade you have there, the more reason there is to keep on going there. You get more governance, another reason to show up there. And it becomes self-maintaining. If it burns down, well, you still want to rebuild it because a lot of people like that place. And then it becomes more and more ingrained.
And sometimes you get these absurd situations where people really struggle to keep something that we perhaps should pull the plug on, but that you still, for cultural reasons, find too valuable to abandon. Think about Venice. It’s absurd. There was a good reason to build a city in a lagoon at the end of the Roman Empire, because you didn’t get invaded. But that rationale had lost its power by the time of Napoleon. Now it’s threatened by climate change and sinking and all the other things — yet we agree that it’s so awesome that we want to keep it going, even at a fairly big cost, because losing Venice would be a tragedy. So even the idea of a city might keep it afloat.
Rob Wiblin: Yeah. I was reaching for the term “network effects.” The network effects of a city and the positive feedback loop that keeps people going where other people are is quite powerful. And it can be enough to offset quite strong reasons why people might want to relocate a city, if only they could coordinate to do it.
Cosmological engineering that’s worth trying [01:14:45]
Rob Wiblin: What’s a crazy piece of cosmological engineering — that is, kind of altering stars or planets or galaxies — that you think has a real shot of happening? Which I guess in this context means more than a one-in-a-billion chance.
Anders Sandberg: I don’t know. I think there is plenty of cool cosmological engineering. One thing I’m interested in is setting up really complex gravitational collapses — getting actually a lot of stars to collapse to make a black hole to spec that has a lot of angular momentum. So you might want to, for example, briefly have a torus-shaped black hole. The reason for that is that if you do it just right, maybe, just maybe, you can get a wormhole. Although this is very problematic because there is some theorem, the topological censorship theorem, that actually prevents black holes from being ring-shaped long enough for anybody to fly through them.
But there are probably other forms of cosmological engineering that are worth doing. I do think managing galactic mergers is a good idea. Basically, you want to nudge all the stars in both galaxies so they collide just right: all the stars find a partner and get to be a binary system, so you keep all the kinetic energy. So the Milky Way and Andromeda are moving towards each other, and instead of just sloshing through each other and turning to chaos, it turns into this perfect little dance where all the stars are gathered together. And all the enormous energy — because we’re talking about the kinetic energy of two big galaxies — gets stored for future use as a kind of gravitational engine. It would be the ultimate choreography.
Rob Wiblin: What sort of mechanism would you use to change the trajectory of each of the different stars to shift where they’re going and how fast?
Anders Sandberg: So in my book, I work a lot with aluminium foil in this case, not because it’s the best and most plausible method, but because it’s the one we can actually calculate very easily without assuming any impressive stellar engines. The basic idea is that if I have a very thin piece of aluminium foil reflecting starlight, I can actually use that to move the star. Imagine that you put a hemisphere of aluminium foil around the star, so all its starlight goes out in one direction. That’s an exceedingly wussy rocket. But it turns out that it’s enough to nudge stars. So over a span of a few million years, you can set up two-body encounters with another star, and they give each other a gravitational slingshot; you have, of course, set things up so they go exactly where you want them.
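As a rough check on the “exceedingly wussy rocket” claim, here is a back-of-envelope calculation for the Sun, using the standard idealised thrust L/(2c) for a half-shell mirror (a simplifying assumption; a real free-flying swarm would do somewhat worse):

```python
# Back-of-envelope Shkadov-style photon thrust on the Sun.
L_SUN = 3.846e26   # W, solar luminosity
M_SUN = 1.989e30   # kg, solar mass
C     = 2.998e8    # m/s, speed of light
YEAR  = 3.156e7    # s

thrust = L_SUN / (2 * C)     # idealised half-shell mirror thrust, ~6.4e17 N
accel  = thrust / M_SUN      # ~3.2e-13 m/s^2

dv_Myr = accel * 1e6 * YEAR  # delta-v after a million years
dv_Gyr = accel * 1e9 * YEAR  # delta-v after a billion years
print(dv_Myr)   # ~10 m/s: a wussy rocket indeed
print(dv_Gyr)   # ~10 km/s over cosmological timescales
```

A few m/s per million years sounds useless, but it is plenty to set up the close encounters whose gravitational slingshots then deliver the tens of km/s Anders mentions.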
Rob Wiblin: Hold on. So you stick a piece of aluminium foil on one side of a star, and so it’s reflecting the light that’s hitting it. Why doesn’t that just cause the aluminium foil to fly off really quickly into space, far away?
Anders Sandberg: Yeah, exactly. You need to do this carefully. Basically, the star is pulling on the aluminium foil, trying to make it fall down onto the star, while the foil is pushed away by the starlight. Now, you probably can’t use one big hemispherical sheet of aluminium foil. Instead you have a lot of free-flying pieces, so you get something like half of a Dyson sphere.
And it’s even more funny than that, because imagine that along the equator you have aluminium foil tilted at 45 degrees, so starlight hits it, pushes against it, and goes off in one direction. Now gravity essentially gets altered for that aluminium foil: the star is pulling on it, but there is also this little light force, so you need to set it up right. So it’s orbiting in an orbit that doesn’t correspond to the normal orbit we see in space, where stuff is just held together by gravity — this is called a “non-Keplerian orbit.” And closer to the poles of this hemisphere, you have, essentially, aluminium foil kept aloft by the starlight and pulled back by gravity.
So it’s a very funny system. It’s slightly finicky, but it’s kind of standard classical mechanics. It’s an interesting control engineering problem to add little fins to these pieces so they can do stationkeeping, and microchips so they know where they’re supposed to go. But it doesn’t require any fundamentally weird physics. You probably want a much more oomphy stellar engine, though. You could imagine having a big asteroid with big ion engines on it. Again, the asteroid is not orbiting the star — it wants to fall into it — but you turn on the ion engines and they send out beams of ions so it hovers in place. The asteroid’s gravity then slightly pulls the star in its direction. Again, it’s a very tiny effect.
And so you can make an engine that actually drags the star in a direction you want. This is a very wussy effect, unless you have a very heavy engine and a lot of oomph. So you might want a Dyson sphere just to power this whole weird contraption. It’s probably even more effective if you could use stellar matter as rocket propellant, but now we’re really getting into the realm of very wishful engineering. I’ve seen papers talking about, “We build an orbital ring, and from that beanstalks to pump up stellar matter.” And when I check the calculation, I realise I don’t believe those numbers. And I’m the guy talking about rearranging galaxies, and I’m still not believing those numbers. That looks kind of far outside the envelope.
But the aluminium foil approach is fun. That’s called a Shkadov engine. It’s a minimal intervention, just a nudge, but combined with these gravitational slingshots it’s enough to really get stars up to speed: we’re talking about changes of speed of tens of kilometres per second.
And then you can do a lot of interesting choreography with other stars. Binary stars are like gravitational batteries: they contain a lot of kinetic energy. If a star flies past in one direction, it gets extra energy, and the binary system gets closer together, more tightly bound. If it flies past in the other direction, the star slows down and the binary picks up the energy. You can also get them to do more complicated three-body interactions if you really want to send them off, or break up the binary, or keep the interloping star in a parking orbit until some other star arrives.
So it looks like, if you’re clever, you can actually rearrange the galaxy on a timescale of a few hundred million years this way. Yes, you need an enormous, ridiculous amount of aluminium foil and a lot of careful planning, not to mention error correction: you need to re-nudge a lot of things as they get a little bit out of whack. But it looks like physics allows this weird activity, and that might be quite useful for handling galactic mergers and making the galaxy a bit more neat, so it doesn’t dissolve in the far future.
Rob Wiblin: I’m glad I’m not the project manager on that one.
Anders Sandberg: Yeah, you can imagine the team meetings, especially with the delays in the video calls from the outer parts of the galaxy.
Options for extracting energy once the universe is very cold [01:21:35]
Rob Wiblin: After the era of stars — the Stelliferous Era, as it’s called; I love the word “stelliferous” for some reason — so after most of the stars are burned out, and the universe is kind of getting very cold, what options remain for extracting lots of energy to do things?
Anders Sandberg: At that point, there is still a fair bit of fusion energy you could get, because there are a lot of brown dwarfs that are still hanging around. They just were too light to ever turn into a star. So in theory, you could mine them for hydrogen and burn that if you have a fusion reactor.
The funny thing is that, in the really long run, they are also randomly, occasionally bumping into each other and forming little red dwarf stars. That’s a very inefficient process, but over very long time periods it actually does happen. But I think intelligent life would not be patient enough for that.
So what you probably want to do is burn the fusible elements, either in your fusion reactor or by dropping them on top of, for example, a white dwarf star or a neutron star. This has a bit of a limit, because once you add enough mass, the white dwarf collapses gravitationally and turns into a supernova. So there is that slight environmental problem.
The best method, in my opinion, is to use black holes. I’m very fond of black hole power. And I am assuming that maybe in a few trillion years I’m going to be dealing with protesters saying, “No black holes in our neighbourhood,” and “Don’t build that power plant, Anders.” But they’re actually lovely. Black holes have accretion disks when they suck in matter. Or rather, it’s not that they suck in matter — that’s kind of a picture we get from science fiction — they’re just an object with gravity like anything else. But what happens when you put a lot of junk around a black hole? They form a disk, and the friction between parts of the disk heats up the matter. That means it radiates away energy and gets more tightly bound and slowly spirals in. There is also some angular momentum leaking out at the sides where some dust gets thrown off.
The effect of this is that the potential energy of that junk — and it can be anything: burnt-out stars, old cars, old space probes, planets you don’t care for, et cetera — gets ground down, and the potential energy gets released as radiation. So now you can build a Dyson sphere, a very big one, around this whole system, and get all of that energy.
How much of the total mass energy can you get? It turns out it’s almost up to 40% for a rapidly spinning black hole. The exact limit depends on where the inner edge of the accretion disk is, because eventually you get close enough that you essentially fall straight in without releasing any more energy, and the rest gets trapped inside the black hole. Now, converting 40% of the mass energy of old cars and space probes into energy is kind of astonishing: that is way more effective than fusion. So actually, the stars might not be the biggest energy source around. We might actually be able to make the galaxies shine much more if we dump things into black holes and gather that energy.
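The roughly 40% figure comes from standard accretion-disk theory: the efficiency is the fraction of rest-mass energy radiated away before matter reaches the innermost stable circular orbit (ISCO). A small calculation with the textbook values:

```python
import math

# Radiative efficiency of thin-disk accretion: 1 - E_ISCO / (m c^2),
# the binding energy given up before crossing the ISCO.
eta_schwarzschild = 1 - math.sqrt(8 / 9)   # non-spinning hole: ~5.7%
eta_kerr_max      = 1 - 1 / math.sqrt(3)   # maximal prograde spin: ~42%
eta_fusion        = 0.007                  # H -> He fusion: ~0.7%

print(eta_schwarzschild)            # ~0.057
print(eta_kerr_max)                 # ~0.423 -- Anders's "almost 40%"
print(eta_kerr_max / eta_fusion)    # a spinning hole beats fusion ~60x
```

Even a non-spinning black hole releases several times more energy per kilogramme than fusion; a near-maximally spinning one releases about sixty times more.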
There are also other funny ways you can get energy out of black holes, something that is called “superradiant scattering.” And this is one of those really weird effects. I remember Toby Ord showing me Kurzgesagt’s video about black hole bombs — it’s totally worth watching because it’s going to explain this much better than I can do — and he said, “Anders, this can’t be right, can it?” And I just smiled and showed him the relevant pages in my book — which are, of course, much less entertaining, but had way more equations.
Basically, the trick is, if you have a rapidly spinning black hole and throw matter by it in just the right way, you need to dump some of it into the black hole, but some of it will escape and it will have more energy. And that’s great. You can extract a little bit of energy from the black hole. If you do this with light, you can have a lightwave getting partially sucked into the black hole, but coming out intensified. And at this point, you put up a mirror. Again, aluminium foil to the rescue! Although you might want something heavier in this case, because then you bounce that light past the black hole again and you get even more. So you set up essentially a disco ball around the black hole, and now you can get a lot of energy out of it very rapidly — in fact, so much energy so rapidly that I think you can’t use a normal material to hold this in. But it looks like it’s a very powerful energy source, but as the title “Black Hole Bomb” hints, you might get too much too fast. You need to be rather careful about this.
Rob Wiblin: Yeah. Got to sometimes take the lid off the pot, I suppose.
Anders Sandberg: Yeah. When stuff boils over near a black hole, it’s not fun for anybody. Incidentally, this kind of stuff is also a reason to look for weird spectra close to big black holes in the universe: to see if there are any super-civilisations around. If you find reflection or emission spectra from hot tungsten or tantalum hafnium carbide, then you have a kind of hint that that’s not natural — that’s somebody extracting energy from the black hole.
What is the upper limit of value? [01:26:31]
Rob Wiblin: Let’s talk about a paper that you published in 2021 which is related to this question of how good could the future be and what could conceivably be achieved. The paper was titled “What is the upper limit of value?”. What did you try to answer in that paper?
Anders Sandberg: So it kind of began when Will MacAskill was opening a conference with a talk, and he offhandedly mentioned that of course economic growth has to eventually end, because if you have 1% growth and it goes on for a few hundred thousand years, eventually you just get ridiculous numbers. And I started wondering: are those numbers actually ridiculous? And I gave him a look, and he gave me a look, and there was this moment where we both totally understood what the other was thinking. I started kind of calculating away while loosely listening to the rest of his talk, and I ended up tweeting a bit about it. And David Manheim, my coauthor, was at the back of the room, and started tweeting back. So in the coffee break, we already had this loose idea for the paper.
At first we started thinking about economic value. The standard thing people say is that of course economic growth can’t go on forever, because there are material limits: there is only so much stuff in the world. And given that I tend to think about big futures, first of all, we can recycle and reorganise stuff using maybe nanotechnology and biotechnology — so there is actually way more stuff available even on Earth than most people think. And there is a big universe outside: it’s a big finite universe we can reach, but still, the amount of stuff is astronomical. But a sensible opponent would say that exponential growth will always outrun any finite number relatively soon, so that’s not going to actually work.
But there is a more serious problem. The Mona Lisa is worth a lot of money. Why is it worth that? Well, we’re willing to pay that to have access to the Mona Lisa — and that has very little to do with the atoms in the Mona Lisa. In fact, if I switched around the atoms in the Mona Lisa from some other atoms, I would probably decrease the value of the painting because it’s no longer the original. The fact that it’s the original Mona Lisa makes it very valuable — and that value resides in our minds. We are willing to pay a lot for the Mona Lisa, at least if we like it and have a lot of money.
So the thing about economic growth is that you could imagine a world that doesn’t change very much materially, but we’re appreciating it ever more. We would be willing to pay more and more for the world, and you would still have economic growth that seems to go on forever. So that was the start of the paper.
Rob Wiblin: OK, so the background is you could ask the question, what is the upper limit of mass? Or what is the upper limit of the energy that we could get out of the universe, given an unlimited amount of time and the best technology that could ever be created? And I suppose that probably it would have a clearer answer, unless physics turns out to have some big surprises for us. But here you want to say not how much matter is there that we could grab, but rather how much value could we get from that? And I suppose value is this subjective concept. So it’s kind of how much wellbeing could we get out of it? Or how much preference satisfaction could we get out of it? What idea did you have of what value is?
Anders Sandberg: Exactly. We started with economic value, but that is relatively uninteresting compared to the other forms of value — preference satisfaction, pleasure, or whatever truly is valuable. And it used to be that asking how much energy there is in one kilogramme of matter was a nonsense question: matter is matter and energy is energy. Then Einstein showed up and said that actually, they’re the same thing — and it turns out that it matters quite a lot that you can turn matter into energy in nuclear reactors.
A bit later the idea was how much information could there be in one kilogramme of matter? And again, at first it seems like a non sequitur: information and matter have nothing to do with each other. But actually, you need matter to encode information. And as we developed theories about quantum field theory applied to information, et cetera, it turns out that there are interesting limits here on how much information you can get from one kilogramme of matter.
So that, of course, leads to the question of how much value you could have with one kilogramme of matter. One way of reasoning about this is to think about brains. I have 1.4 kilogrammes of brain, hopefully, and that can represent some form of value. There is a biggest value I can think of, and maybe a biggest value I can feel or experience in some sense. It’s some complicated organisation of my neural activity, and that is linked to how much information I can store in my brain. And there is an upper limit to that, simply because it’s a finite amount of information storage. So that actually puts an interesting upper limit on value.
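One standard way to put a number on that kind of limit is the Bekenstein bound on the information that can fit in a region of given size and energy. Treating the brain as a 1.4 kg sphere of 10 cm radius is of course a crude assumption, just to get an order of magnitude:

```python
import math

# Bekenstein bound on information in a region of radius R with
# energy E:  I <= 2*pi*R*E / (hbar * c * ln 2)  bits.
HBAR = 1.0546e-34   # J s, reduced Planck constant
C    = 2.998e8      # m/s, speed of light

m = 1.4             # kg, brain mass (assumption)
R = 0.10            # m, effective radius (crude assumption)
E = m * C**2        # rest-mass energy

bits = 2 * math.pi * R * E / (HBAR * C * math.log(2))
print(bits)   # ~3.6e42 bits: vast, but finite
```

The bound is absurdly far above what neurons actually store, but it shows that any physical value-representing system has a finite information ceiling, which is the premise the paper builds on.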
Now, you might say that something might be more valuable than you can name. And there is an interesting issue in the paper on how you encode value. Because obviously you could imagine a computer memory that just has a number representing value, and for every bit you add, you get twice as many numbers you can represent. So you can represent very big numbers and you also get a lot of numbers, but there is always going to be that biggest one.
What really matters, of course, is that if I get offered the Mona Lisa and Raphael’s fresco The School of Athens, and I have a choice between them: Which one do you want to buy? I’m going to have to evaluate them. I like The School of Athens a little bit more than the Mona Lisa. So I’ve done this comparison of these enormous, vast values — whether that is in terms of money or in terms of how much I actually appreciate them as wonderful pieces of art. Now, imagine that we have some artwork that is just saturating my sense of value. I can’t really compare that to another artwork that also saturates my sense of value. Both of them are just incomparable. They’re just at the top.
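The point about a biggest representable number, and about saturation making great things incomparable, can be sketched with a toy value register. Everything here (the class, the bit width, the example values) is hypothetical, purely to illustrate the argument:

```python
class ValueRegister:
    """Toy n-bit register for encoding how much an agent values something.

    With n bits there are 2**n distinct levels; adding one bit doubles
    the number of representable values, but there is always a top level.
    """
    def __init__(self, bits: int):
        self.max_level = 2**bits - 1  # the biggest representable value

    def encode(self, value: int) -> int:
        # Values beyond the top level saturate: they all look the same.
        return min(value, self.max_level)

reg = ValueRegister(bits=8)            # 256 distinct levels; the top is 255
mona_lisa = reg.encode(10_000)         # saturates to 255
raphael_fresco = reg.encode(50_000)    # also saturates to 255

# Two artworks that each saturate the register become indistinguishable:
print(mona_lisa == raphael_fresco)  # True
```

A finite brain is, on this argument, in the same position as the register: past some point, everything just reads as “maximal”.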
You might also say, yeah, but this is kind of normal value, isn’t it? You’re thinking about some big number here, but a human life is worth more than any amount of money in most ethical systems. In practice, we have to do tradeoffs, and that makes us feel very bad. But if everything is fine, we can say that you can always choose one human life above any amount of money. But you can, of course, encode that one too: it still requires information. I need to keep a tally of how much money or how much pleasure there is and how many human lives are at stake. And human lives always have priority.
And then maybe there is something that has priority above human lives. Maybe posthumans are even more valuable: a single posthuman is actually more valuable than any amount of humans. I don’t think that is the case, but maybe as a weird ethical system it is. So you could actually even have this ladder of values going up.
But a finite brain with a finite representation capacity still can’t make these comparisons. It still ends up having these biggest numbers or biggest representations that would essentially correspond to, “This is infinite value for me. I can’t find anything that is higher.” And of course at this point you say, yeah, but we have more brains than your brain, Anders. We can actually have several brains representing things. But again, there is only a finite amount of matter and energy in all these brains that can represent value. So it could very well be that from some kind of outside moral standpoint, there are values that are so big that no brain can represent them, but they just exist out there in the universe. The brains themselves can’t compare them; we can’t choose between these super big values. We are just going to be always confused, and say, “Well, they feel about equal to me.”
Rob Wiblin: OK, so the argument is something like valuing is a process that requires information to be encoded, and information to be processed — and there are just maximum limits on how much information can be encoded and processed given a particular amount of mass and given a finite amount of mass and energy. So that ultimately is going to set the limit on how much valuing can be done physically in our universe. No matter what things we create, no matter what minds we generate, there’s going to be some finite limit there. That’s basically it?
Anders Sandberg: That’s it. In some sense, this is kind of trivial. I think some readers would no doubt feel almost cheated, because they wanted to know that metaphysical limit for value, and we can’t say anything about that. But it seems very likely that if value has to do with some entity that is doing the valuing, then there is always going to be this limit — especially since the universe is inconveniently organised in such a way that we can’t get hold of infinite computational power, as far as we know.
Rob Wiblin: Yeah. So the disagreement I got into with people on Twitter was more along the lines of thinking about it from an economic growth or technological advance point of view. You know, for a given amount of matter — no matter what goal you’re trying to accomplish, no matter what thing it is that you value, that you’re trying to maximise for — there’s going to be some optimal way of structuring and organising all of that matter and energy in order to accomplish that goal and produce that value that you want. And so there’s just this fixed upper bound that we’re trying to get towards, and we can get incrementally closer and closer, asymptote up to that level, but that sort of sets the maximum. So at some point, growth either has to slow down very rapidly as you approach that limit, or it just has to stop entirely once you actually hit the optimal configuration that there is.
And people responded, saying, “Why does there have to be a best way? Maybe there isn’t a single best way, and in fact, we could just keep improving the way that we’re organising things and eking out a 1% improvement on it forever.” To me, that sounds a bit crazy, because in that case, you would end up being able to produce an infinite amount of value, an infinite amount of the goal would be accomplished with a finite amount of matter. So that’s kind of counterintuitive to me. Do you want to talk about it from this more engineering standpoint, maybe?
Anders Sandberg: Yeah. I think the tricky part is finding that optimum. Again, every bit of extra information doubles the search space. So if I have two kilogrammes of matter to organise into the best thing ever, that’s not just going to be twice as many things to search through, but, depending on the number of atoms, something like two to the power of 10 to the power of 32 times as large a search space. That’s astonishingly, horrifyingly big.
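The scale involved is easy to check: with N two-state degrees of freedom there are 2**N configurations, so even the *number of digits* in the search-space size is astronomical. A quick sketch, taking Anders’s figure of 10^32 at face value:

```python
import math

# Two-state degrees of freedom in a kilogramme-scale lump of matter
# (Anders's order-of-magnitude figure, taken at face value).
n_dof = 10**32

# The search space has 2**n_dof configurations. That number is far too
# large to construct, but we can count its decimal digits:
digits = n_dof * math.log10(2)
print(f"2**(10**32) has about {digits:.2e} decimal digits")
```

So merely writing down the size of the search space would take about 3 × 10^31 digits, which is why blind search is off the table and intelligent, theory-guided search matters.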
Now, normally engineers don’t feel totally dumbfounded just because they have a big rock in front of them and are supposed to do something, because mostly we use a very simple search; we have simple approaches to making high-value products. And quite often it consists of taking previous products in a catalogue of electronic components and mechanical pieces and putting them together in some clever way. So we combine things. And we’re searching through this enormous space of combinatorial explosions that emerge when you can put a lot of stuff together, but most stuff doesn’t make any sense. Most electronic circuits you can write down, of course, are total nonsense. And most electronics engineers don’t write down nonsense circuits because that’s boring. They’re trying to make something that acts as an amplifier or a radio.
So that kind of search is interesting, because it’s an intelligent search, and quite often it can get quite optimal by using good theory. If you go into antenna engineering, you find people doing awfully clever things to make great antennas with various properties, and they use various sophisticated theories to figure out what we need to design and then put things together like that.
There are other domains where we don’t have great knowledge of what’s going on. So it’s more trial and error. The mediaeval cathedrals were to a large degree built using trial and error, but also model building. And then, of course, you copy the successful cathedral in the neighbouring town and try to make a slightly larger one. So there are ways of searching through these spaces, but they’re not necessarily that effective, because the space of possibilities is so vast.
But here is the funny thing: in some domains, I think we can get very close to the optimum. We can kind of prove that we don’t have many more percent to go. For example, information transmission. We know the speed of light is a limit. Can we make faster transmission through optical fibres? Well, it depends on the refractive index. So we have a tradeoff: you usually want a high refractive index to keep the light confined very well, but at the same time, that slows things down. So what happens is that you decide on the tradeoff, and most of the signals in these fibres run at about two-thirds of light speed. But you know how far you could go, and if you really wanted to get it faster, you’d use a laser through vacuum instead. So the problem in getting these best products might be that in some domains, the search might actually go on for a very long time, but in other domains, we will be done relatively soon.
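The tradeoff here follows from v = c/n: a signal in glass travels at the vacuum speed of light divided by the (group) refractive index. A sketch, assuming a typical silica-fibre group index of about 1.47 — a textbook figure, not one from the conversation — which puts the signal at roughly two-thirds of vacuum light speed, i.e. about a third slower:

```python
C = 2.99792458e8  # speed of light in vacuum, m/s

def signal_speed(group_index: float) -> float:
    """Propagation speed of a signal in a medium with the given group index."""
    return C / group_index

v = signal_speed(1.47)   # typical silica fibre near 1550 nm (assumed value)
print(f"{v / C:.2f} c")  # about 0.68 c
```

Because n for usable glasses can’t get much closer to 1, this is a domain where we can say how near the physical optimum we already are.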
Now, getting back to that Twitter argument, because I think it’s an interesting one: Where are the domains that seem to be complex? Well, there are obvious things like art, and complicated things like pieces of software, and systems where the goals are not even well defined — like peace in the Middle East, or how to live a joyful life. In these cases, it might be that you can keep on innovating quite a bit and even do significant innovations for a long time. I have a feeling that many of the ones I named actually turn out, from a posthuman perspective, in a few million years, to be pretty easy. And then the posthumans go on about the real problems they’re having because they are seeing other complicated problems.
We know in mathematics that there are certain theorems whose shortest proofs are astronomically long. We can kind of prove that the proof is hopelessly long, but we can’t actually get the darn proof. And in most cases we would not even care, because that very long proof is utterly uninteresting and boring. It’s more cool to know that it’s a very long proof. So computational complexity is something that matters much more than we normally think.
Right now, I’m engaged in a little bit of thinking about its economic implications for optimising economies. There are other limits there that are surprising and confusing. I think when it comes to the limits of technological growth, I have a feeling that eventually we’re going to be close to that. We have most of the primitive pieces we can put together to do anything we need. So if you suddenly realise I need to put together something that can do this, you can easily manufacture it.
But what kind of spacecraft is the best one? What is the friendliest building you can make? People are still going to keep on innovating that, and actually having it very open ended, because the goals keep on changing as you advance the technology.
Rob Wiblin: Is there any deeper question here about the nature of the universe? I suppose one point of view or one expectation you might have is that if you’ve got some goal — like you’re trying to travel between two stars within a not unimaginable amount of time — we will just figure out the relevant principles, and we’ll design the best thing. Or at least we’ll figure out on a blackboard what is the best possible material that you could use for this: given our understanding of physics, what is the best possible engine design? And there are not going to be further surprises, because we basically have solved science, more or less.
Another view would be that material science is going to remain full of surprises, potentially, and in a finite amount of time, in the amount of time that we have in the universe, we’re never actually going to be able to complete things and figure out, in principle, what is the optimal way of designing a spaceship.
It feels like there might be some kind of quite interesting underlying differing intuition that people have here about the nature of the world.
Anders Sandberg: I do think these are profoundly different intuitions, and they matter. The late Peter Eckersley made a bet with Toby Ord many years ago, that by the time we meet aliens, technological progress would have ended: we had essentially invented everything that we needed to invent. And Toby thought this was the case, Peter didn’t believe this was the case, and both were fairly confident about it. And it’s a really good question, because we have evidence of both kinds.
So when thinking about making the best possible interstellar spacecraft, it seems like if I want to design a solar sail, and I use a laser [to propel it], the design space is not necessarily super large and there might actually be an optimal solution. Maybe you want to use lithium fluoride and certain wavelengths and certain setups and you can find the optimum. It might also be that you say, actually, I want a rocket. And because of that, I need a very different design. Suddenly the design space is much bigger, because now you need all the rocket engine stuff, and that’s got many more dimensions and it’s much harder to search through. Or you could say, I want a generation spaceship, which means that suddenly you have a space habitat. It needs to be self-sustaining well enough, it needs to maintain a society stably for a very long time, and it needs to have a lot of very big engines.
These three ships, in some sense, optimise for very different things. Which one is the best? Well, it depends a little bit on what you want to do. If you want to get to the destination fastest, that laser-propelled solar sail is probably going to be the best one. If you want to send a fairly large-ish but not enormous cargo, the rocket might be better. But a generation ship is good if you actually want to send people. The problem here is, of course, there is no best answer to what you want to use it for. That depends on who you are and what your aim in settling the universe is.
You can find an optimum when the problem is very well defined. There is probably a material that conducts electricity at a given temperature the best. And one day we will just have this in a future version of a physics textbook; you just check: this needs to work at 200 kelvin; it’s this obscure material and it has these properties. Of course, any real engineer will say maybe that’s too expensive, or maybe it has other awful properties. The perovskite solar panels people are working on right now seem to be great as solar panels, except that many of them use lead and cadmium, which is not very environmentally good. If we don’t want to put a lot of lead and cadmium into the environment, we need to replace those elements in the solar panels but still keep the perovskite properties that are lovely on their own, and so on. These tradeoffs are tricky because there might not be a right or wrong; it might just be what you value the most.
Similarly, think about collecting all the energy from a star using a Dyson sphere. How do you do that? Well, you put solar panels around it and convert it to electricity. Or you put mirrors heating up little elements and use thermal engines, which actually is more effective because you can use more sunlight. Then you start thinking about the cooling and you realise, I actually want cooling systems that are much bigger than the Dyson sphere, and I need to transport waste heat efficiently from the inner part to the outer part. And when you do that calculation — I did it in my book — you end up needing 27 Jupiter masses of hydrogen, at which point you go, “Wait a minute, I can’t get 27 Jupiter masses of hydrogen in the solar system. I can’t use this design; it’s useless unless I can very cheaply import Jupiters — which even on this scale sounds rather implausible. So I might have to settle for something that’s actually less effective.”
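The flavour of that cooling problem can be reproduced with the Stefan–Boltzmann law: a radiator shedding power L as a blackbody at temperature T needs area A = L/(σT⁴). A rough sketch with my own illustrative numbers (a shell at 1 AU, waste heat radiated at 300 K) — this is not the book’s 27-Jupiter-mass calculation, just a simpler cousin of it:

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26        # solar luminosity, W
AU = 1.496e11           # astronomical unit, m

def radiator_area(power_w: float, temp_k: float) -> float:
    """Area needed to radiate the given power as a blackbody at temp_k."""
    return power_w / (SIGMA * temp_k**4)

dyson_area = 4 * math.pi * AU**2          # a shell at 1 AU, ~2.8e23 m^2
cooling_area = radiator_area(L_SUN, 300)  # dump all waste heat at 300 K

# At 300 K the radiator is about 3x the area of the 1 AU shell itself:
print(cooling_area / dyson_area)
```

The cold side has to be big precisely because radiated power scales as T⁴: run the radiators cooler for better engine efficiency and their required area explodes, which is how you end up wanting structures far larger than the Dyson sphere itself.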
So I can imagine in the future, these future engineers having this design meeting about how to encase the sun in a Dyson sphere, and having big arguments about maximum energy output versus maximum efficiency versus minimum material use. Or some people saying this is going to be too expensive; we can use cheaper methods of building it. And in the end, nobody’s right or wrong here. It’s just what you set up as your goals as your society or a company or whatever it is.
Of course, we might then say maybe there is an optimal society that’s best. But again, the degrees of freedom in minds and societies seem to be so big that our chance of searching that one out or finding that general theory that tells us this is the best society looks rather slim.
Rob Wiblin: Yeah. As you were talking, I was just thinking of different models that one might have of how one converges on the best possible design for accomplishing some goal.
In my mind, I have this model where we’re asking the question: What is the number that is closest to one? And I’m going to say one. We can’t do any better than that; it’s just one. And at some point we get to one; we say one is the closest number to one and then we’re done.
Another way you could think of it is, what’s the number that’s closest to one that’s not one? And then you could just keep going on. So you’ve got like 0.9, 0.99, 0.999, 0.9999 — and every time you add a digit, you get just a little bit closer. So you’ve never quite gotten there, but you can keep progressing ever closer.
An alternative model may be the people who have this idea that there are so many different combinations of things, an almost unlimited number of ways you could combine stuff, and the search space is so vast that I guess in their view, you can’t predict ahead of time what designs necessarily are going to do better than others. It’s more like saying, I’ve thought of a real number between zero and one, and that’s the right one. Can you predict it? And the thing is, there’s an infinite number of numbers that you could state between zero and one. And so you can always potentially get closer or further away from that number, but you’re basically never going to hit it — because there’s just an infinite number of choices that are available, and finding out that it’s not 0.2 doesn’t really help you at all.
Anders Sandberg: Yeah, a lot of it depends on how do you go about engineering this, how do you go about designing it, and it depends a lot on the kinds of constraints you have. For example, in actual engineering, you care quite a bit about cost. Certainly you could make a very useful device, often by using gold as a building material, but the price of that makes it awkward. And quite often you have these complicated tradeoffs — like this is a great material, but it has a low melting point, and that material has a high melting point but also awkward magnetic properties — and that complicates the design process. Now, real engineers have methods of handling this in many domains we understand fairly well, and obviously in the future we’re going to keep on developing these understandings in new domains. But that quite often takes a lot of effort.
When people started building metal bridges, it took several decades before people got good at it. There were a lot of horrifying bridge disasters as people were finding out the hard way about material strength, how things can shear and buckle in the wrong way. And eventually we got the standard methods to calculate it and now we’ve got really amazing computational tools for it.
But computer science is still kind of in the early bridge-building phase: we have a lot of horrifyingly bad software that bends and buckles all over the place, and we’re trying to develop better tools. But again, the software engineering world has advanced a lot since the ’90s, when I got started with computers — well, I started in the ’80s, but that was just hobby stuff. Even so, after all these decades, we are still rather far away from figuring it out. And indeed, we seem to be going slower than the bridge builders, because in some sense software has far fewer constraints than bridges. Bridges are nice: they’re metal, they’re standing there across a river or a gorge or something, and they’re fairly simple designs, while software can be almost anything.
How to produce maximum value [01:50:06]
Rob Wiblin: When we imagine a universe where a civilisation is trying to produce value, it’s very natural, at least for me, to think about it in consequentialist terms — so to think about how you can produce the maximum amount of flourishing or happiness or preference satisfaction or whatever. But of course there’s other theories of ethics and value — like deontology, where it’s about following rules, or virtue ethics, where it’s about cultivating virtue in the beings that are there. And there’s other people who don’t really think that there is any real idea of ethics; it’s just about doing whatever fits your subjective tastes. What might the universe look like if it were designed to create the most value from those other points of view about what’s best?
Anders Sandberg: Virtue ethics is probably the simplest, because you basically have this universe inhabited by virtuous beings that are — if we imagine that Aristotle was right about everything — now trying to live their life in an “excellent” way. They’re honing whatever it means to be that kind of creature. So if they’re human-like, they might want to have honest social relations; they want to be courageous — not too foolhardy, and not too cowardly, but finding the “golden mean” and so on.
This gets really weird, of course, when you start thinking about artificial beings. What is a virtuous robot? Well, it’s fulfilling its robot nature to the maximum. Now, if I designed it to do certain things, that seems to make things simple. But if it’s a general being that is actually having open-ended goals, they might actually be quite different and weird. And this of course does show up a lot, because we have a particular evolutionary past. We have a lot of little quirks just because we evolved the way we did, and some of them probably affect a lot of the virtues. After all, our emotions are quite densely tied in with virtue ethics. It seems to me that a different species that has slightly different emotional ranges or emotional states would end up with very different virtues.
Also, there is this very cool question that maybe there are also virtues on a group level and on a civilisational level. So Toby touches on this very briefly in his book The Precipice, where he talks about how maybe we should say that civilisation can be more or less prudent, that it actually is a good idea to ascribe a virtue to an entire civilisation. And I think there is something to that. Victor Hugo, the author, said that war is the vice of a civilisation and peace is its virtue. In some sense, this is what virtuous civilisations should be doing: they should try to maintain their internal peace.
Now, a virtue ethicist would say it’s not enough to just do nice stuff: you need to do it for the right reasons. There need to be reasons inside the civilisation to keep it on the straight and narrow. So we might imagine an honest and truthful civilisation where they do check their epistemic standards, they make sure that they’re making rational decisions, and that the science is done in an unbiased way. We might have a prudent civilisation where there are internal values motivating them to research existential risks and take precautions against them.
A universe organised with this might actually look quite a lot like the utility-maximising universe, except that it’s less likely to really zoom off and expand very fast. The utility maximiser will tend to get a lot of utility, and most utility functions tend to assume more is better — whether that is more happiness or more computation or something — so the maximising mindsets tend to rush off. You could have a satisficing mindset and say it levels off after a while; you actually don’t need all the galaxy clusters to run at pure happiness, only a lot of them. The virtue universe, on the other hand, might not necessarily be quite as expansive.
But again, that depends on what the virtues are inside here, and we might discover new virtues. There is talk about environmental virtues, for example. The ancient Greeks wouldn’t exactly recognise them, because they didn’t have that much of an environmental impact. Well, strictly speaking, they did. They were cutting down all the forests in Greece, and even the ancient Greek philosophers kind of recognised that not everybody can have the wonderful standard of life we have here in Athens because there is not enough firewood — which was an accurate observation, but solved by finding other energy sources. You could imagine these environmental virtues applying on a larger scale. I’m not supposed to kill off species, but I don’t do that on an individual level — it’s rather that we, as a society, should not kill off species, and we need to do that in a joint form. Now, the interesting thing might be that an advanced civilisation might discover entirely new domains to be virtuous about — whether that is caring for solar systems and stars and maintaining the galaxy in a proper manner, or some weird posthuman virtues we can’t even imagine.
So that was the virtue ethics case. Now, the deontological case is interesting, because some deontology of course might actually just be some form of consequentialism in drag. Rule utilitarianism is very good at looking like deontology. And I can totally imagine that some deontologists actually have rules that are actually secretly maximising and doing things on a consequential level. But deep down the idea is that you’re not supposed to break important rules of conduct, and normally we formulate them on a societal level.
This is one reason why consequentialism and deontology tend to be more popular in public discourse than virtue ethics. Virtue ethics is lovely when you try to set up your own life and live a good life, but it’s very unclear how to run a hospital by virtue ethics. You want the best doctors and nurses; they should be excellent. But how do you manage things in an excellent way? It’s very individual. It’s not clear at all how much money you should be putting into the different departments. Meanwhile, the consequentialists will be coming up with a spreadsheet, and deontologists will say there are certain principles of medical ethics we need to apply here and they are going to constrain what you can do — but within those constraints, it’s kind of almost arbitrary. At that point you’re probably going to fall back on some kind of loose virtue ethics. You’re going to try to do something good or something that fits with other things.
So when you try to scale up deontology, it seems like it’s more about these big bounds on what you do with the universe. The most obvious one might be that maybe you shouldn’t be expanding too much. There is an interesting discussion about astronomical suffering risks. On one hand, if we want to avoid going extinct, we should be spreading out all over the place, because that minimises the probability of something going bad for all offshoots of a human species. But you also add a lot of potential for suffering here: if we terraform millions and millions of planets with lovely jungles and the ecosystems, there is probably going to be quite a lot of animal suffering there, and maybe inhabitants also having a bad day, so the total amount of suffering in the universe goes up a lot.
So already from a consequentialist standpoint, if suffering gets priority, this is really bad, and we shouldn’t be doing that expansion. The deontologist might have something similar to say, that maybe actually increasing the risk of astronomical suffering is an absolute no-no, and we must actually abstain from that. So we need to expand just the right amount to reduce risk, but not any more.
Now, the really interesting problem for the deontologist in this cosmological future is we better agree on these rules beforehand, because once you get separated by time and space sufficiently, it’s going to be very hard to agree on the rules. Of course, if you are a true-believer Kantian, you’re going to say that any rational being just sitting down and thinking carefully enough about the fundamental moral principles is going to converge to the same ethics. Yeah, right. I don’t believe that is the case. I might be wrong, of course: it might be that posthuman, Jupiter-brained moral philosophers all think alike because there is actually the one true way of doing it. But it might be that you actually end up with incompatible deontologies when you think things through. So there is a lot of space here for really interesting different takes on what we should be doing with the universe.
Rob Wiblin: It feels to me like once you start thinking about how we’re going to create the most virtuous being, and then we’re going to create an enormous population of trillions and trillions of these extremely virtuous beings acting virtuously, that maybe something has been lost about the spirit of virtue ethics. I didn’t get the vibe that maximising the amount of virtue across the universe is what the virtue ethicists were trying to do. That feels like a utilitarian take on virtue ethics that is a little bit strange.
Anders Sandberg: I think that’s true, and I’m probably naturally just drawn towards that. I’m just always going utilitarian, whether I want to or not.
Indeed, it’s very interesting to think about not grand futures, but humble futures — because a lot of people are totally cold to the idea of moving galaxies and having trillions of beings in some weird astronomical future. I usually express it like they want to have this nice little Cotswolds village — where their friends are playing cricket, they’re having tea with the vicar, and having sensible social relations with normal people. And yeah, it needs to be sustainable and peaceful and all of that, but you don’t need an entire galaxy to do that. I think there is a lot of truth to that. This is quite close to what most people think is a good life, and it’s certainly much easier to think about virtue ethics in that little British village, or whatever the Swedish or Chinese counterparts are.
The real question is, of course, would it be good to just have that? I tend to think that we are so uncertain about normativity that we should hedge our bets. I think it’s actually probably a better idea that some people are living in these nice little humble futures and others go off and terraform planets and build Dyson spheres and whatnot — because we might not know which one of these is the right one, but we might be able to get the right one by having a big palette of possibilities.
The real problem is when they impinge on each other: the nice little village might not want their night sky scarred by having megastructures flying around there, so there might have to be some deal about leaving the sky dark, et cetera. There are some people who are very upset that anybody in the world might be having fun in the way they morally disapprove of. So they have nosy preferences, and they’re of course going to be very annoying neighbours. And we need to resolve these kinds of problems.
That gets into this issue of how do you make a cosmopolitan ethics, especially if humanity becomes much more diverse? But I’m kind of cheered by the fact that the Amish seem to be doing pretty well. They are living in some sense in a humble world, deliberately making a humble society, but it’s also being protected by one of the least humble societies you can possibly imagine: the United States. And they have the right kind of relationship to the outside. Over the decades there have been interesting discussions both about how to prevent too many young people from going off into the sinful outer world and realising that it is actually quite wonderful, and about setting things up so the two communities can maintain each other. And it works, partially because the values of the United States and the rights catalogued in the laws and the rule of law can act to protect it. You can maintain humility and a humble future inside something much more grand.
I guess this might also be the solution for how to get virtue in these grand futures. It might actually start out as small nuclei. You actually don’t want to go maximise the universe: you want to ensure that these nuclei of virtue, if they’re really good and attractive, might expand — instead of saying first we optimise everything for it.
So this gets to one of my big things, and that is we need to have an open future. Existential risk is an ultimate closed future. It’s the end of history. But you can also imagine futures that are too limited, where there are too few possibilities and certain choices and options are not there. And I think we need to safeguard against those, even if they’re otherwise pretty nice futures.
Everything that is bound together will dissolve [02:02:22]
Rob Wiblin: What’s the most surprising or remarkable piece of science that you ran into while researching and writing this draft?
Anders Sandberg: The one thing that really hit me as super weird and profound is that if you have a universe that actually expands forever, eventually everything that is bound together will dissolve. So when dealing with the heat death of the universe, most thinking tends to be something like: eventually energy runs out; it gets very cold and very stable and very boring. But you could at least imagine a rock sitting around there essentially forever, unchanged. Already that was in doubt: Freeman Dyson pointed out that quantum tunnelling is actually going to kind of liquefy the rock over sufficiently long periods of time; everything will randomly move around.
But it turns out that there’s something even more profound going on, because the universe tries to minimise free energy, which is the energy minus temperature times entropy. So normal chemical reactions happen because you go from mixing something together, and they react and go to a lower energy state, and that happens spontaneously. Some weird chemical reactions happen instead because entropy increases so much, they might actually suck in heat from the environment in order to happen. A classic example is where you have cooling salts you add to water, they dissolve, the entropy goes up quite a lot, but the water gets cold. And this is how you make cold compresses when you have a sports injury.
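The free-energy accounting Anders describes here is the standard Helmholtz free energy; as an illustrative sketch (standard thermodynamics, not a formula quoted from the book):

```latex
% Helmholtz free energy: a process at fixed temperature T is spontaneous
% when it lowers F.
F = E - TS, \qquad \Delta F = \Delta E - T\,\Delta S < 0.
% Cooling salts: \Delta E > 0 (heat is absorbed), but T\,\Delta S is larger,
% so \Delta F < 0 and the salt dissolves anyway.
% Far-future unbinding: in a vastly expanded universe, the entropy gain
% \Delta S from letting bound particles disperse is so large that even a
% large binding energy cannot keep \Delta F positive, so bound states
% become thermodynamically disfavoured.
```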
Now, the interesting thing that happens in this very far future is that the temperature of the universe is still finite because of reasons we might get into later. But then the result is that you can move things apart, of course very far, because the universe has kept on expanding exponentially for a very long time. Which means that the entropy theoretically could go very high. That means that things that are bound together actually are energetically favoured to fall apart. This is a really weird thing, because this happens because of fundamental statistical mechanics. And when I first read it in the mathematician John Baez’s blog, I felt like, “OK, he’s a mathematician. It makes sense mathematically, but physicists don’t believe in this, right?”
Rob Wiblin: So this is going on now, but it just happens extremely slowly?
Anders Sandberg: It actually doesn’t happen now. And this is fun, because this is a little paradox that people started discovering actually a long time ago. When you mathematically calculate the properties of a hydrogen atom sitting in a heat bath, it seems like if you follow the math, it should lose its electron, it should spontaneously ionise, and the electron should be going off to infinity. And that sounds very weird, because we know hydrogen is a nice stable thing: hydrogen atoms sitting there in emptiness are not going anywhere.
And the way professors deal with the astute student who's done the calculation and brings this up is to say: but the calculation assumes a finite-size universe around it, the size of a lab. And now you will find that it actually doesn't happen, because the electron is going to keep hanging around the atomic nucleus and everything is stable. And of course, in reality there is other stuff in the universe that acts as the wall of a lab.
And this is actually where things get profound. One reason matter stays together is that there is other matter around — it’s kind of mildly pushing back. In theory, all the particles making us up could go off anywhere else. But there is other matter there, and it actually has the effect that, from a thermodynamic standpoint, things should keep together. But this is not true in the very far future in a very big expanded universe, and then stuff actually spontaneously falls apart if it’s around.
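The volume dependence can be made concrete with a toy Saha-style estimate (my illustration, using standard SI constants, not a calculation from the transcript): the phase-space weight of the ionised state grows with the available volume, so at any nonzero temperature a big enough box favours ionisation.

```python
import math

def ionised_to_bound_ratio(volume_m3, temp_k):
    """Toy Saha-style estimate: statistical weight of a free electron relative
    to a bound one for a hydrogen atom in a box of given volume.
    The ratio grows linearly with volume, so in an arbitrarily large
    (expanded) universe, ionisation is always thermodynamically favoured."""
    K_B = 1.380649e-23              # Boltzmann constant, J/K
    H = 6.62607015e-34              # Planck constant, J*s
    M_E = 9.1093837015e-31          # electron mass, kg
    E_ION = 13.6 * 1.602176634e-19  # hydrogen ionisation energy, J
    # Thermal de Broglie wavelength of the electron:
    lambda_th = H / math.sqrt(2 * math.pi * M_E * K_B * temp_k)
    return (volume_m3 / lambda_th**3) * math.exp(-E_ION / (K_B * temp_k))

# A bigger box favours ionisation at the same temperature:
print(ionised_to_bound_ratio(1e6, 300.0) > ionised_to_bound_ratio(1.0, 300.0))  # True
```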
So that’s the kind of big, abstract, weird, slightly creepy fact. But there’s plenty of other lovely little details that I found. One of my favourite stories is Alexander von Humboldt’s parrots. And then there is the colour of oxygen crystals and the limits on rocket engines. There are many of these more concrete, down-to-earth questions that show up too. But that thing about why stuff falls apart in sufficiently big universes, to me, this is giving us a good reason to think that the future actually is limited and finite. But still, the amount of time it takes to get there is so big that we don’t need to worry too much.
Rob Wiblin: Yeah. So I thought that a big issue that would happen very far in the future, within like 10^100 years or whatever it is, is that the universe is expanding and it's expanding at an ever greater rate. And people are familiar with the idea that galaxies are getting pulled apart, so eventually other galaxies are receding beyond our ability to reach them, even if we travelled at light speed, because they're moving away from us faster and faster.
But at some point that starts affecting, firstly, galaxies are getting pulled apart, but then eventually stars are also getting pulled apart, even within a galaxy. And then I suppose planets are getting pulled apart and then eventually atoms are getting pulled apart. And then there’s the idea that you’ve just got one atom that’s causally disconnected from everything else, and then it’s kind of lost the walls that were holding it together. So it disintegrates?
Anders Sandberg: Not quite. What you just described is called the Big Rip scenario. So one of the interesting things about cosmology today is that we know that there is something accelerating the expansion of the universe. It’s usually called dark energy because it mathematically shows up in the cosmological equations as this constant — the famous cosmological constant that Einstein first put in to prevent the universe in his model from expanding, and then realised that the universe expanded: “Drat, I need to drop it.” And then experimentalists realised that, actually, we need to add the constant with an opposite sign. It’s one of those little awkward facts. And it seems like this dark energy, or whatever it is, keeps the universe expanding, and the standard vanilla dark energy is just causing exponential expansion.
But if you tweak that theory — and given that we don't really know what it is, of course it's very easy for a theoretical physicist to write a paper tweaking the theory — you can get different behaviours depending on, when you expand spacetime, how much more dark energy you get: Is it just proportional to the volume, or is it growing slightly faster or slightly slower? And it turns out that there is one little parameter, w, and if it's less than -1, then you get an accelerating expansion where the acceleration itself accelerates, and it gets faster, and you end up eventually with a Big Rip. In a finite time, everything expands apart. And this is where it's not just galaxy clusters slowly drifting apart, but then the stars get ripped apart, and eventually atoms.
Now, is this something to worry about? There are some people who seriously argue that this might happen in the next 21 billion years, which is kind of a scary prospect, at least compared to the ridiculous timescales we're normally dealing with. Just 21 billion years, that's nothing. That's next Tuesday. And the fun part is, of course, we have no way of telling, because all the measurements of this w put it almost exactly at -1, and it's kind of within the error bars. Mathematically, it's much more elegant and reasonable if it's exactly -1, but it could theoretically be different. And there is no easy way of making these measurements. But very few people in cosmology seriously think the Big Rip is the most likely scenario.
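For the phantom case (w < -1) there is a simple closed-form estimate of the time to the Big Rip, from Caldwell, Kamionkowski and Weinberg's 2003 paper; the parameter values below (a Hubble time of about 14 billion years, matter fraction 0.3) are round numbers I'm assuming for illustration:

```python
import math

def big_rip_time_gyr(w, hubble_time_gyr=14.0, omega_m=0.3):
    """Approximate time from now until the Big Rip for phantom dark energy:
        t_rip - t_0 ~ (2/3) / (|1 + w| * H_0 * sqrt(1 - Omega_m))
    hubble_time_gyr is 1/H_0 in billions of years (~14 Gyr for H_0 ~ 70 km/s/Mpc).
    """
    if w >= -1:
        raise ValueError("No Big Rip unless w < -1")
    return (2.0 / 3.0) * hubble_time_gyr / (abs(1.0 + w) * math.sqrt(1.0 - omega_m))

print(round(big_rip_time_gyr(-1.5)))  # roughly 22 Gyr, close to the '21 billion years' figure
```

The closer w sits to -1, the further away the rip: w = -1.1 pushes it out past 100 billion years.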
Now, even without the Big Rip, you have this drifting apart. The expansion of the universe makes all space expand. But if you’re a human, the molecular bonds making you up will keep you together. Yes, every morning, spacetime has expanded slightly and tried to separate your atoms, but they’re pulling themselves back together just fine. And the same thing happens with the solar system and the galaxies. It’s just that between galaxies and galaxy clusters, at this point, there is not enough attraction to actually keep them from drifting apart.
And this doesn’t get stronger over time, assuming the standard cosmological constant. In that case, we only get these little island universes of galaxies, separated by these exponentially growing voids. The Big Rip scenario means that you get a much more dramatic ending. There are some cosmologists who really like it because you avoid other horribly weird things.
Now, this is the corner of the book where I'm dealing with the cutting edge of cosmology, where we know we don't know everything. There are many big questions. We're still having this problem that we don't know the Hubble constant, how fast the universe expands — because different measurement methods produce different values, and they're outside each other's error bars, which is called the "Hubble tension." It's kind of embarrassing. Something is obviously wrong with how we're doing the measurements. Probably it's just a measurement error or a measurement problem, but it might be telling us something profound.
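The mismatch can be quantified; the measurement values below are approximate published figures (the Planck CMB fit and the SH0ES distance-ladder result) that I'm supplying for illustration, not numbers from the conversation:

```python
import math

def tension_sigma(v1, err1, v2, err2):
    """Discrepancy between two independent measurements,
    expressed in combined standard deviations."""
    return abs(v1 - v2) / math.sqrt(err1**2 + err2**2)

# Approximate published values of H0 in km/s/Mpc (assumed for illustration):
# Planck CMB fit: ~67.4 +/- 0.5; SH0ES Cepheid ladder: ~73.0 +/- 1.0
print(round(tension_sigma(67.4, 0.5, 73.0, 1.0), 1))  # ~5.0 sigma
```

A five-sigma disagreement is why this is treated as a genuine puzzle rather than ordinary measurement scatter.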
But there are obvious deeper problems. What is dark energy? And I know some physicists who would say maybe dark energy decays over time, or it increases over time. We’re kind of free to assume either. Indeed, it turns out that it’d be very hard to measure what it’s doing, so you can come up with any theory. But of course, the more complex a theory it is, the less likely a priori it is that it’s true. So most people assume it’s probably just constant dark energy.
Will Anders ever finish the book?! [02:11:43]
Rob Wiblin: OK, let’s turn to the question of this book as a whole. We’ve just been sampling from lots of different ideas that are covered in there. It’s 1,400 pages long, the draft now — 33 hours of listing time, at a blistering pace that I tried to put into my audio-producing software. Many of the sections are only partly done, and there’s lots of analysis that I think is going to have to be checked by someone.
Anders Sandberg: Oh, yeah.
Rob Wiblin: Pretty speculative. And it feels a little bit like you’re trying to address almost all the questions that someone could put to you on this topic. Other people who’ve seen the draft have slightly referred to this issue as well, that the aspirations of this book are really momentous — grand in their own right. Do you think you might need to make it less ambitious in order to try to get it finished and published in finite time?
Anders Sandberg: I think so. I think one good sign is that I’m actually getting pretty tired of it. And that is very good, because that means I’m not constantly thinking that I need to add everything. But at the same time, there is this saying that “art projects never get finished, they just get abandoned.” What you want here is they should be abandoned to an editor and a publisher, so you actually get something.
The ambition I had was very much to cover, somewhat rigorously, what we know and can say about the long-term future. Or even to find the relevant questions we might want to investigate, because a lot of the more speculative parts I think we can answer one way or another. And that is a starting point, and then hopefully we can do future editions or sequels. The purpose is very much to get this started and have all these things in one place.
The problem is, I eventually realised I’ve also written a very weird textbook about our understanding of the world to some extent. This is very much an overview of physics and chemistry and biology and some parts of anthropology and even philosophy and ethics. You could probably use it as a very weird textbook. I’m not certain I would recommend it, but it would be really interesting as a course. And I think that is also what I want: it’s a multipurpose tool.
But yeah, it needs to be finished, especially since I’ve got another book competing with it, because I got dragged into writing another one in parallel, because I’m rather stupid.
Rob Wiblin: What’s that one?
Anders Sandberg: The working title is Surf’s Up, and I’m writing it together with Cyril Holm, who’s a professor of jurisprudence at Stockholm University. It’s about law, it’s about superintelligent AI, and it’s about Hobbes’s Leviathan and Nick Bostrom’s singleton scenario.
It’s basically about the question: If we manage to get good AI, and it’s aligned enough that we can survive with it, does that solve all our problems? Hint, hint, of course not. We actually do get new interesting problems, especially in the line of the social systems we construct. We are basically outsourcing a lot of cognition to the legal system and markets because we can’t keep it in our own mind, and we do rather well by spreading it out, but that can be replaced with software. So maybe a lot of parts of our society could be completely replaced by intelligent software, which seems to lead to all sorts of very disturbing consequences that we might not actually want.
So the question is: Is this true? Is it possible to actually outsource all that to software? And also, could we do something about aligning our society so that it’s actually quite nice to live in, even though it might be more effective to run everything by the big AI in the cloud?
Rob Wiblin: I suppose if you wait another year or two, then GPT-6 might be able to be an amazing research assistant to help you finish Grand Futures.
Anders Sandberg: That is the fear. I already tried to use GPT to help me write in Grand Futures, and I found that I have a problem: a lot of my text is rather dense in facts and numbers, and this is the one thing that GPT really does badly, because it tends to hallucinate.
So I asked it to finish a section about what is a breathable atmosphere for humans, and it came up with a whole bunch of numbers that I felt, do I trust them? I don’t see any references here. So I started checking them up and then of course fell into a very deep rabbit hole about the minimum and maximum viable oxygen concentration. So I certainly learned something, but it didn’t finish that paragraph faster for me. It works really well when you have text where you’re not caring too much about the truth value — where it needs to kind of convey a message in a nice way, but it doesn’t have to be perfectly true in every sentence. But Grand Futures is the kind of book where I actually want the facts and logic to match up rather strongly. But GPT is great for doing a good intro and outro for a chapter.
Rob Wiblin: Yeah. It seems like it’s not only getting better at writing text, but also getting better at distinguishing reality from falsehood over time. So it would be interesting to know at what point does it actually begin to have a grasp of what is real and what is not. Or can we train that in with reinforcement learning from human feedback? Like if you ask the right kind of factual question, it will respond, “I don’t know,” rather than just saying some nonsense.
Anders Sandberg: Yeah, and I think that is a very important aspect of any form of writing. How do you fact check stuff? A lot of the things in my book are based on standard physics. I'm trying to base it on as basic physics as possible, checking with several different sources to make sure I'm not totally putting my foot in my mouth.
But then you get to some things that are much more iffy. We actually are at the boundaries of knowledge. So quite often my approach there is to actually list all the possible cases, which is one reason why my book is so thick. I literally have a big chapter about ways we could be really wrong about physics and everything else in the universe, where I try out what happens if various physical theories don’t work as we expect them to? How much effect does that have? And that is of course very fun, but you could make an infinitely large chapter because they could be wrong in an infinite number of ways.
But in most cases it doesn’t matter that much. If general relativity is slightly wrong, it’s probably higher-order terms and you still probably end up with black holes and all that gravitational dynamics. So that’s not very dangerous. On the other hand, the Lambda limit on erasing information is much more important for my argument. So any discrepancy over there I need to care much more about. And then, of course, if you get faster than light transport, now all bets are off. Suddenly you get time travel and time-based computing that is way more powerful than quantum computing. And the universe just turns totally weird.
Rob Wiblin: Yeah. Setting aside machine research assistance, do you need any human research assistance, maybe to help you quickly finish it off?
Anders Sandberg: I would probably need a small army of assistants fact checking things, but my big problem is that I'd need to organise them. I do have people helping me and contributing a lot: they're reading through, they're giving me comments. They're very valuable, and I'm deeply thankful for them. But I probably also need a lot of people just going through things, and then I need to take their input and incorporate it in an effective way, and I haven't figured out the right workflow to do that well. So I have this weird limitation; I probably would need some kind of manager for that.
Rob Wiblin: Yeah. What do you think is the most reasonable or natural way for you to shrink the scope, or the scope of the ambition, in order to make it more completable?
Anders Sandberg: Some of the biggest problems for me are the chapters dealing with stuff that people have already dealt with in some detail. So for the chapter about post-scarcity societies, there is a fair bit of writing about the economics of post-scarcity. There is even more interesting work on the future of manufacturing, especially in the light of atomically precise manufacturing. And that takes a lot of effort to review because there's so much to say. Similarly for sustainability and near-term energy sources: whoa, there is a big literature. Same thing with the chapter about settling the solar system: the first part, about the near-term future — like, What life support do you need? How do you build a space habitat? — there is a bookshelf of things coming out of NASA about that, and that's taking me a lot of time and effort to deal with.
It’s much easier when I realise that there are two papers in the entire field, and I get to write up what’s essentially a third paper. So the parts of the book that actually deal with things other people have been thinking about are harder to finish than the parts I’m kind of alone with.
Rob Wiblin: Yeah, that’s really quite ironic that it’s easier to come up with your own speculative theories to answer some question that no one else has dealt with than it is to summarise the existing literature on some more natural and prosaic question.
Anders Sandberg: Yeah, just think about the problem of sustainability and mankind’s relationship to nature: there is a vast literature about it, a lot of opinions, a lot of really good books and papers. And I don’t have time to go through them, yet I need to have a section that deals with that. It’s super annoying, and at that point I get annoyed and immediately jump to a chapter and start thinking about “Can I build a quantum barrier that prevents any particle from coming through?” I can show interesting quantum mechanical things, and everything is so much nicer than having to go through a big pile of books.
Rob Wiblin: I guess the risk of making a mistake in one of these chapters where you're the only person who has ever contemplated the question is higher. But at the same time, when you're pioneering an area and no one else has worked on it, then saying something that is wrong could still be a useful contribution, because it still moves things forward: maybe someone else will fix the problem in future.
Anders Sandberg: Oh, yeah. And I’m also trying to make use of my imposter syndrome because I’m aware that I’m trespassing in a lot of disciplines where I have no real business. I’m not an astrophysicist, but I’m saying a lot of stuff about white dwarf stars — and that means that I’m trying to read from a lot of different sources and make sure that the stuff I’m assuming is very normal, it’s not out on a limb, it’s in all the textbooks. And then, of course, hopefully I can get a bunch of astrophysicists to fact check it. I have already been annoying some white dwarf star astronomers about weird questions about very cold white dwarf star atmospheres.
Rob Wiblin: There’s a saying that the best way to get a good answer on the internet is to give a wrong answer and then wait for people to correct it. And I can imagine the first edition of this book could inspire a whole lot of feedback that could then make the second edition a whole lot more precise.
Anders Sandberg: And I think that might be useful, because ideally you update and improve on it. Ideally, I want to have a repository of, What can we say about the long-term future in a useful way? What can we say about some of the limits? What are the updates? And I have several markers in the book where I’m mentioning “the current record of fibre optics transmission speed is…” and then I just put in a marker to fill in whatever it is going to be by the point the book is published.
And there are many properties that are just changing over time that need to be updated. I’m living in fear that we figure out what dark matter is and it has a big effect on the book. This is actually one of the main drivers for me to try to finish it fast now, so the cosmologists and astronomers don’t get me.
Rob Wiblin: Yeah. At some point, reality might start outpacing you.
Anders Sandberg: That’s also one of the benefits of trying to write about the very long-term future, because it doesn’t get obsolete quite as badly as writing about the next 10 years. But still, even that might not help.
Room-temperature superconductors [02:23:09]
Rob Wiblin: This is a total aside, but have you been following all the excitement about the discovery of a possible room-temperature superconductor over the last few weeks? This has been a big distraction for me the last 48 hours. I’m just so excited about the possibility, even though I appreciate it probably isn’t going to pan out.
Anders Sandberg: I’m also excited about this. And even if it turns out that the LK-99 room-temperature superconductor turns out to be a flash in the pan and not working, it’s still great entertainment because this is also a demonstration of how science actually should work. People publish some findings, other people try to replicate it, and it’s also happening in the open. Of course, in this case, it’s happening more in the open than ever before, because some people are live tweeting what they’re doing.
And you get to actually see a lot of the grotty parts of materials science too, and very interesting issues about sensible people trying to figure out, "What do I need to do to actually prove that I have a superconductor in my lab?" I used to think that this shouldn't be hard. After all: put some electrodes on it, measure the resistance. If it's zero, you've got a superconductor. Yay. Except it turns out that there is always a little bit of measured resistance, even with actual superconductors: very annoying if you want to store energy for trillions of years and things like that, which I care about.
But there are also other properties that you might measure. One of the more obvious ones is magnetic levitation. But it turns out that there’s some materials that you can also levitate because they’re diamagnetic, the opposite of ferromagnetic materials we normally encounter. So you can get confused by the material behaving weirdly. And there’s some materials that have really odd conductance. So actually getting a measurement of zero resistance might actually not tell you you have a proper superconductor. So there are a whole host of other things to measure, and they’re fairly complicated.
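As a rough illustration of why levitation alone is ambiguous, here is a small comparison of volume magnetic susceptibilities; the numbers are approximate textbook figures I'm supplying, not values from the conversation:

```python
# Approximate volume magnetic susceptibilities (dimensionless, SI).
# Rough textbook figures, assumed for illustration.
SUSCEPTIBILITY = {
    "superconductor (Meissner state)": -1.0,  # perfect diamagnet: expels the field entirely
    "pyrolytic graphite": -4.5e-4,            # among the strongest ordinary diamagnets
    "bismuth": -1.7e-4,
    "water": -9.0e-6,
}

# Ordinary diamagnets are thousands of times weaker than a superconductor in
# the Meissner state, but with a strong enough field gradient they can still
# levitate -- hence the confusion when a sample floats.
for name, chi in sorted(SUSCEPTIBILITY.items(), key=lambda kv: kv[1]):
    print(f"{name}: {chi:g}")
```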
So now the labs are competing to both make the material — and that might be hard, and we don’t quite know what needs to go into the crystals to make them really good — and also proving that we’ve actually got something that is a superconductor or that it isn’t. It’s really tricky, of course, because it might be that it’s a hard thing to make, so most people will fail. But then occasionally, randomly, you will discover it.
But I’m super excited. I think this is a good demonstration also of how much ordinary matter is still full of surprises for us.
Rob Wiblin: Yeah, some people are a bit frustrated by this LK-99 mania — I think the material they’ve nicknamed LK-99 — but I feel like most news kind of makes me feel depressed, whereas this at least is kind of fun. And I’m learning some physics and some material science along the way, and we get to have this kind of shared experience of hoping against hope that this thing is going to pan out. So it’s a guilty pleasure, but I think I’m going to keep indulging.
Anders Sandberg: I don’t even think it should be a guilty pleasure. I think this is actually what we should have more of. I would love it if more science was a bit like this. We got this interesting finding. We’re all running around trying to make sense of it and doing it in the open. It does happen a little bit in astronomy. For example, when a gravitational wave observatory detects a strong signal, they immediately send out an alert to all observatories about roughly where in the sky it came from: Please look in this direction. There are similar things for other forms of transients, like neutrinos, and astronomers are then zooming in and trying to find whatever supernova or collision or whatever that thing was.
And I think we should have more of it, because normally we only get reports about science when it’s done, or usually press releases from groups trying to make it done, and then journalists turning it into either “We have totally overthrown Einstein” or whatever the big theory is — and then you never hear about that finding ever again because it didn’t pan out — or it just becomes part of this is the way reality is. But of course, actual science is very much of a process. Actually seeing how people are trying to replicate things tells you something very important about the world, about science, about scientists — and I think more people should be involved.
Rob Wiblin: Yeah, that’s true. I guess people who might have thought that science was cleaner than it is are probably seeing people’s attempts to replicate it, but they’re having trouble making it, and then they’re not sure whether the measurements they got are really consistent with it or not. It shows how challenging science is in reality. From another point of view, it’s a useful lesson in why it could be useful to wait a little bit in order to see whether these things are really justified.
But the timelines for podcast recording and releasing are such that by the time this comes out, people are going to know almost certainly whether this has turned out to be legit or not. Would you want to venture a possible prediction about how likely this is to actually be real?
Anders Sandberg: I think I've raised my estimate since yesterday, so now I would give it a 15% chance of being real. So I'm still not super optimistic, but there are theoretical papers suggesting it. And matter is weird: there is so much strange stuff that happens even in fairly normal materials, and this seems to be a fairly complicated material. So there is a decent chance of it being a proper superconductor.
Rob Wiblin: Yeah. It seems like many of the more serious materials science folks are on the sceptical end. Is there some reason why it should be very difficult or really unlikely? Is there a strong theoretical reason in physics or materials science to think it's not really possible to create a room-temperature superconductor, something that gives people a very sceptical prior whenever they hear that someone thinks they've done it?
Anders Sandberg: There is, as far as I know, no good theoretical reason why you couldn't have a superconductor working up to a fairly high temperature. For the original superconductors — when people cooled down metals and found that they became superconducting — the theory requires a lot of very nice quantum states, and it makes sense that those can't persist beyond a certain temperature. And then the liquid nitrogen temperature superconductors showed up in the '80s and kind of shocked everybody: reality is more complicated. There are actually weirder ways matter can interact with electrons and form the right quantum states to make superconductivity. And at that point the floodgates opened. There is actually no good reason why you couldn't have more complex materials doing this.
There has been, of course, a big recent debate about high-pressure superconductors. There are accusations of scientific fraud, and it's very hard to test because you need to do diamond anvil experiments. And not everybody accepts the conclusions, and there is a great deal of animosity. But many people, from a theoretical standpoint, would say that doesn't sound too crazy. And similarly, something that actually works at room temperature is not much weirder than liquid nitrogen temperature from the standpoint of physics: -190 Celsius or 20 Celsius, both are kind of arbitrary temperatures. As long as you're below the melting point of a material, why should physics care too much?
Rob Wiblin: Well, it’s going to be fun to see how things play out. My fingers are crossed.
Why is progress in science slowing down? [02:30:33]
Rob Wiblin: Why do you think we’re not getting more progress in science? I guess I have this anecdotal perception that there’s way more scientists than there used to be, but the hits that we’re getting just aren’t quite as good as the hits from the past. That it used to be the case that a single person sometimes would just revolutionise our understanding of things, and now it takes 1,000 people just to make a very minor improvement sometimes. Do you agree with that take? And if so, do you have a preferred theory for what’s going on?
Anders Sandberg: I’m somewhat worried that we’re not getting as much oomph in science as we ought to do, given all the scientists. And then I put on my cynical hat and say yeah, but having a lot of scientists doesn’t mean that we’re doing a lot of work. Indeed, one can argue that a lot of what academia is doing is a lot of relatively pointless things: it’s easy to write a paper that is good for your career but doesn’t actually advance the question. But there are many parts of science that are dealing with concrete problems. People are honestly trying to fix relevant questions in climate change or solar collectors or similar things.
I think one problem is that a lot of the low-hanging fruits have been picked. A lot of really simple technological ideas have been found, so you need to go further to make something. And now you need to search through a much vaster space of possibility, and you also need to spend a lot more time as an apprentice learning what’s going on in the field. Still, I don’t think that works really as an explanation. Certainly you need to learn more to get to the frontier in some domains today than maybe 50 years ago. But still, students are bright and we’re actually getting better at educating people; we’re actually seeing people jump ahead much more. I’m sometimes astonished by what I see in the undergraduate textbooks; they’re getting this stuff that was in the graduate textbooks just a decade ago.
So one problem might simply be that the incentives are not aligned right. In academia, you’re supposed to write these interesting papers, and you want to find a juicy problem and try to poke at it and maybe try to solve it — but that juicy problem doesn’t necessarily have to fix things. In engineering, it’s much more important that you actually solve a problem in a way that people can pay for and that you can build, and quite often you have incentives because somebody is paying you to do it. And similarly, you see a lot of problems here in that incentive structures do get misaligned in many domains.
But I don’t think this works as a great explanation, because you could imagine some universities or organisations setting the alignment right and just sweeping past everybody else and coming up with grand unified theories and the perfect energy sources, and we don’t quite see that. We do see interesting cluster effects, where you have some really bright person attracting other bright people and together they solve interesting problems really well. Sometimes you do see people opening up the door to a new domain and everybody rushes in and tries to write the first paper about it, but that is again relatively rare.
I think one big problem is actually that we have a lot of good insights, but since there are so many papers around and it’s so hard to actually keep track of it, we don’t notice these good insights because there is so much noise. There are a lot of scientists to convince, and they get sent papers by all the other scientists, which means that we actually have a serious problem in detecting what’s going on.
But I’m still confused about this. In my Grand Futures book I’m also thinking about what if we got AI? Does this fix the problem or not? And I do various models and end up even more confused, because to me it seems like it might be that you could generate artificial scientists doing much more science, and if the outputs of science allowed you to make more artificial scientists, you could have this finite-time takeoff: you get the kind of scientific singularity, and eventually you know all that is knowable about science in a finite time. But this depends on the recalcitrance of the scientific problems, and that is kind of independent of how useful it is to get more science for being able to build more AI scientists. It could be that the difficulty just keeps on going up, so actually it doesn’t go that fast; instead it levels off and you get a gradual growth.
And it seems that these two questions — how hard is advanced science and how much harder does it become as you advance into it, versus how much can advanced science help you build better scientists or science systems? — are totally independent: there is no reason why they need to be matched in any way. So either science is solvable in a finite time, or it might actually be something that keeps posthumans in a billion years going. And I honestly don’t know which one it is.
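The finite-time takeoff Anders describes can be written down as a toy growth model (an illustrative sketch, not a model from the episode; the exponent `p` standing in for "recalcitrance" is a hypothetical parameter). If the stock of science S feeds back into building AI scientists as dS/dt = S^p, then p > 1 gives a singularity in finite time, while p ≤ 1 gives growth that merely continues:

```python
# Toy model of an AI-driven "scientific takeoff": dS/dt = S**p,
# where S is the stock of science and p captures how strongly science
# feeds back into building better AI scientists versus how recalcitrant
# the remaining problems are.
#   p > 1  -> finite-time singularity (all knowable science in finite time)
#   p <= 1 -> growth forever, but no blow-up
# The model and parameter names are illustrative, not Anders's own.

def integrate(p, dt=1e-4, t_max=5.0, cap=1e9):
    """Euler-integrate dS/dt = S**p from S(0)=1; stop at t_max or cap."""
    s, t = 1.0, 0.0
    while t < t_max and s < cap:
        s += dt * s**p
        t += dt
    return t, s

t_super, s_super = integrate(p=2.0)  # superlinear returns: hits the cap near t = 1
t_sub, s_sub = integrate(p=0.5)      # diminishing returns: still small at t = 5
print(f"p=2.0: S={s_super:.2e} at t={t_super:.2f}")
print(f"p=0.5: S={s_sub:.2e} at t={t_sub:.2f}")
```

For p = 2 the exact solution is S(t) = 1/(1 − t), which diverges at t = 1; for p = 0.5 it is S(t) = (1 + t/2)², which never diverges. The point of the sketch is Anders's: whether the takeoff happens depends on p, which is independent of how useful science is for building scientists.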
Rob Wiblin: I’ll tell you my pet theory for this one and see how you react. So some people have suggested this kind of social explanation for why it seems like we’re getting less science per scientist than we used to: that the incentives are much worse, that the grantmaking bodies make really boring grants now, and that there’s bad incentives in academia. And I can believe that all of that stuff has gotten worse. But it seems like the amount of amazing discoveries we’re getting per scientist has gone down a lot over the last 200 years. We’re talking more than tenfold decrease, perhaps. And it’s hard for me to fathom that almost everywhere these institutions have just been degrading at such a phenomenal rate. We all have our complaints about the university system. I think it’s not that bad. And people still want the glory, people still have the individual desire to make breakthroughs — at least some people, surely. So that can’t be the full explanation, in my mind.
It does seem like there is just this really natural, obvious explanation for what’s going on, which is that the human brain is kind of staying the same. It hasn’t changed that much in the last 200 years, or even that much in the last few thousand years. And the universe and the questions are kind of staying the same. But there are some questions and some problems that are very easy for the human mind, as it’s designed, to solve: we just kind of intuitively see what the solution is, and so it’s very low-hanging fruit. And Plato could make all of these discoveries in philosophy kind of single-handedly. But as we pick all of the things that are natural for a human mind to notice and to realise and to fix, the stuff that’s left is just really hard for minds of our type, and it takes longer and more and more effort.
Another thing is our machinery, our tools for doing science, have improved a lot. But the problem is it’s all just kind of bottlenecked I imagine, in my mind, by the fact that all of this stuff has to pass through the human brain — which we’re not getting technological advances on in the same way as we are with all of our other tools.
So that’s my guess. And that’s what causes me to think that as we’re able to create new minds, to engineer new minds within machines, that could really plausibly lead to this renaissance, to this efflorescence in science — because suddenly, rather than try to reach an incredibly difficult insight for the human mind to grasp, instead you can design a mind or you can experiment with all kinds of different mind structures in order to produce the one that is able to most easily solve some problem. What do you make of that?
Anders Sandberg: I think it’s a possibility, and it’s a really cool one because we can actually test it relatively soon. People are certainly working on making AI scientists. And there is something very interesting, because even a fairly simple and crappy AI scientist, if it’s different enough and it demonstrates that it can solve a problem we cannot solve, we know that there is something to this. We now need to just scale it up. So this might be something we could test very soon. And it’s an interesting thing, because if that turns out to be true, everybody should essentially just jump onto it and let’s make a lot of AI scientists.
The real problem is, of course, translating their insights into something we can use. But again, it doesn’t have to be done by the AI scientist. Rather you have an AI explainer: you train AI systems to take whatever representations exist in the scientist AI and translate that into a human form that has the right properties of a mapping so we understand something. And it might of course be that a lot of science doesn’t fit into human minds at all. We just have to accept that that’s the way it is and it’s kind of mysterious to us. We can still make use of it; we can still ask the engineering AI to take that in and build that starship or whatever.
It might still be rather humbling if that is true, because in the past we tended to assume everything could fit into human minds. If you read Greek philosophers, it’s very clear that they believe that there are no unanswerable questions about the universe. Some of them were a bit mystical, but most of them were fairly confident that if you can clearly state a question, there must be an answer, and it must be findable if you just give it some thought. If you’re not able to find the solution that’s just because you’re distracted by something. And that keeps on recurring for a long time. You find it in the Enlightenment view of progress, which is again very much based on the idea that we can probably solve all the problems we can formulate. So if we formulate some problems about how to improve the world, we will be able to find the solution using the scientific method, and ta-da, the world is going to be better.
Now we know that things are not necessarily that easy. There is this weird landscape of difficulties, of problems. I think this is one of the coolest findings coming out of the computer revolution, actually. Theoretical computer science is not so much a theory about computers as a theory about the difficulty of problems. And we know that there are problems that are undecidable in mathematics, as [Gödel’s incompleteness theorems](https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems) showed. We know that there are some problems that have solutions, but you can’t find them in less than exponential time, which means that we’re not going to find them. Except that some of them can be approximated in clever ways, so we can find a solution actually relatively rapidly. The travelling salesman problem takes exponential time if you try to brute force it, but you can approximate it and get an answer in polynomial time — which is why FedEx and all the other logistics companies are solving it every day and making a lot of money by getting within a few percent of the optimal solution.
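The contrast Anders draws can be seen in miniature (a sketch only: real logistics solvers use far more sophisticated methods than the simple nearest-neighbour heuristic below, which is not guaranteed to be within a few percent, just often close on random instances):

```python
# Travelling salesman: exact brute force (exponential) versus a simple
# polynomial-time heuristic. Brute force checks all (n-1)! tours, which is
# fine for 9 cities and hopeless for 50; nearest-neighbour runs in O(n^2)
# and scales to thousands of cities, usually landing near the optimum.

import itertools
import math
import random

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force(points):
    """Exact optimum: try every tour starting from city 0."""
    n = len(points)
    best = min(itertools.permutations(range(1, n)),
               key=lambda p: tour_length(points, (0,) + p))
    return tour_length(points, (0,) + best)

def nearest_neighbour(points):
    """Heuristic: always walk to the closest unvisited city."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(points[order[-1]], points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return tour_length(points, order)

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(9)]
opt = brute_force(pts)
heur = nearest_neighbour(pts)
print(f"optimal {opt:.3f}, heuristic {heur:.3f}, ratio {heur / opt:.2f}")
```

The heuristic can never beat the true optimum, but it gets an answer without the exponential search — the asymmetry between finding a solution and finding the best one that Anders is pointing at.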
So it might very well be that we have to live with this idea that the world contains some complex stuff that we cannot deal with, and then there is other stuff that is tantalisingly messy: we can almost but not quite get it, or we can get it but with a lot of effort. And then there is other stuff that just implodes and turns out to be ridiculously simple.
Rob Wiblin: Yeah. My bastardised understanding of an opinion of the physicist David Deutsch is that he thinks that human minds are these very general understanding machines, and that in principle we could understand everything that there is to be understood. And I just don’t know why that would be the case. I guess I should go back and read his work, which I haven’t done, so that’s on me rather than him.
But I would just think that the human brain is so finite. You might have a better sense of what the constraints are, but don’t humans have a short-term memory that’s only seven things that you can hold in your mind? You can chunk them, but you can’t put an unlimited amount of stuff in each one of those seven chunks. And I intuitively think in three dimensions, but then imagining a hyperspace with more than three dimensions is a real struggle for me. And presumably there are other things that could be imagined that are just super unintuitive to my mind, given how it’s been wired.
Anders Sandberg: I think Deutsch’s idea about being a universal explanation machine, there is a lot of truth to that. We have this ability to construct a great explanation, and we can also construct things that help us with our explanations. The most obvious one is pen and paper: a lot of math doesn’t easily fit into a human mind, but give us a blackboard or a pen and paper and a few minutes of time, and we can solve stuff that you cannot solve just by thinking.
Now, the tricky part here is that universality. It’s a bit like Turing computability. So Alan Turing’s work early on did this amazing thing of showing that his Turing machine was equivalent to almost any other form of computation. It can simulate other Turing machines. And there is this equivalence class where every computer in that class can simulate every other at a certain cost, and this cost is typically a relatively small factor. In theoretical computer science, of course, a relatively small factor can actually still be astronomical and totally impractical, which I’m going to return to.
But the interesting thing is that that big class of computations has this interesting property: it can compute some things but not others, but it’s a vast set of computations. Now in practice, the fact that this computer I’m using right now to record this is equivalent to a computer built out of seashells that move around according to certain rules in the sand on a sufficiently large beach doesn’t matter very much — because that set of seashells on the beach is such an inefficient computer that it’s not going to be able to do the fast Fourier transform needed to record video and audio. It’s really a bad audiovisual computer, even though technically it’s equivalent. So you can construct computers out of almost anything, but most of them are very bad computers. What happens with Moore’s law is that we’re getting closer and closer to the physical limits of making one particular kind of computer in that equivalence class.
Now, if we think about ourselves as universal explainers, are we likely to be the best possible universal explainer? Not really. At this point, Deutsch would intervene and say that we can, using our understanding of the universe, remake ourselves into better explainers. Maybe we get to be cyborgs, connect ourselves to supercomputers and think bigger thoughts — which might be a possibility. But without doing that, the fact that in theory I could understand any explanation that is understandable might still mean that I’m equivalent to those seashells on the seashore: I’m still not going to be a very effective explainer or understander.
So I do think the real question is: Can we get enough efficiency in some of these systems? And that is partially a physics question, but partially also a very profound philosophical question about what is the space of problems that we might want to touch? How big are explanations for some of the problems? It’s possible mathematically to construct problems that have enormously large explanations that we simply cannot get. Most of them are, of course, uninteresting examples. But there is this nagging doubt that maybe some problems we care about, like maybe Goldbach’s conjecture in mathematics, also are like that. Maybe there is an answer, but the answer can’t fit into a human mind.
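Goldbach’s conjecture is a neat example of the gap Anders describes: it says every even number greater than 2 is the sum of two primes, which is trivial to check case by case but has resisted proof for centuries. A minimal checker (purely illustrative; computer searches have verified the conjecture far beyond this range):

```python
# Goldbach's conjecture: every even n > 2 is a sum of two primes.
# Easy to verify for any particular n; no proof is known. It is
# conceivable the true explanation simply doesn't fit in a human mind.

def is_prime(n):
    """Trial division; fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return one (p, q) with p + q == n and both prime, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

print(goldbach_pair(100))                                   # → (3, 97)
print(all(goldbach_pair(n) for n in range(4, 10_000, 2)))   # → True
```

Every even number up to 10,000 checks out in a fraction of a second, yet none of those checks brings us any closer to knowing whether the pattern must continue, which is exactly the situation Anders flags.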
Retrocausation [02:45:27]
Rob Wiblin: I’ll let a listener have the final question: “What’s something that Anders believes that is not widely believed or accepted in the AI x-risk community, or in the futurist community more broadly?”
Anders Sandberg: I have a weird suspicion that retrocausation might actually be a thing.
Rob Wiblin: Can you explain that? What is that?
Anders Sandberg: So normally we have causation moving forward in time: cause is prior to effect, and we tend to assume this is totally reasonable. But it’s really weird when you start thinking about how does this actually work physically? It’s kind of unclear. And even the direction of time is a pretty profound issue. Why do we get that? Because all the microphysics is actually time-reversible, yet we see an inexorable progression, and usually people make a nice hand wave and say entropy increases. We go from a low-probability state to a more likely one, and that’s why we see an arrow of time. But that doesn’t explain this causation part. And there are models of quantum mechanics where actually the future affects the past.
I have this weird suspicion that maybe retrocausation is more of a thing than we normally think. I don’t like saying this. In many ways, this is my coming out as somebody who’s got this queer belief, and I’m not entirely comfortable with it, but I still think there is something interesting about that, that our way of thinking about time might actually be fundamentally rather weird and wrong. And it might be that the way the history of the universe is built is not so much that we have a single state marching forward in time, getting updated one second per second, but actually that there’s some parts that get calculated first and then we fill in the blanks. We reach some state, and that must have been made by some other things, and stuff is actually being done kind of on the fly. The universe might be actually way more weirdly constructed than we normally think.
Rob Wiblin: My guest today has been Anders Sandberg. Thanks so much for coming back on The 80,000 Hours Podcast, Anders.
Anders Sandberg: Thank you.
Rob’s outro [02:47:32]
Rob Wiblin: If you like that you might want to go back and listen to Anders’ earlier episodes on the show:
#29 – Where are the aliens? Anders Sandberg on three new resolutions to the Fermi Paradox. And how we could easily colonise the whole universe.
#33 – Oxford’s Anders Sandberg on solar flares, the annual risk of nuclear war, and what if dictators could live forever?
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
The audio engineering team is led by Ben Cordell, with mastering and technical editing by Milo McGuire and Simon Monsour.
Full transcripts and an extensive collection of links to learn more — those are available on our site and put together by Katy Moore.
Thanks for joining, talk to you again soon.