Transcript
Rob’s intro [00:00:00]
Rob Wiblin: Hi listeners, this is The 80,000 Hours Podcast, where we have unusually in-depth conversations about the world’s most pressing problems, what you can do to solve them, and what did happen to all those horses after we invented cars?
I’m Rob Wiblin, Head of Research at 80,000 Hours.
The last months have been a crazy time for advances in artificial intelligence.
Over the last few years I’ve become increasingly confident that the future is going to be shaped in a major way by what sort of AI systems we develop and deploy as they approach and then exceed human capabilities in crucial areas.
That has always seemed like an extremely commonsense idea to me, but it’s now pretty apparent to people across society who until recently were hardly paying attention.
Unfortunately we’ve only had two episodes about AI over the last six months, due to the substantial lag in between conceiving of an episode, finding the right guest, recording the conversation, then editing it and finally releasing it.
But that is about to change.
In today’s episode, Luisa interviews Tom Davidson, a senior research analyst at Open Philanthropy, whose job is figuring out when and how society is going to be upended by advances in AI.
If you’ve been wondering when you might be replaced in your job by an AI model, and when AIs might be able to do everything humans can do for less than what it costs to feed a human and keep them alive, this episode will help you think about that more clearly and maybe know what to expect.
In particular, Luisa and Tom discuss:
- How long it will take for AIs to go from being able to do 20% of the work humans are doing, to being able to do all of it.
- What underlying factors are driving progress and how much each contributes.
- Whether we should expect progress to speed up or slow down.
- How much computer hardware is used to train models and whether it can continue increasing at the blistering rate it has for the last 10 years.
- When AI systems might be able to do scientific research and what implications that would have.
- When we might expect progress in AI to noticeably increase GDP growth, what that could look like, and what the new bottlenecks might be in an economy where AI systems are doing most of the work.
- And plenty more.
Tom’s expectations for the future are exciting or alarming, depending on how you want to look at them.
Regular listeners will have heard me do plenty of interviews on AI over the years. But they tend not to focus on my opinions, so it’s possible people don’t have much sense of where I stand.
In case you’re interested, I think the chances that you and I are either killed due to actions taken by AI systems, or that we live to see humanity unintentionally lose control of its future, are greater than 10%.
Looking at surveys and polling, it seems like both AI experts and the general public are converging on a view not too far from that. Naturally, if that’s right, it makes AI the issue of our time, and indeed one of the things we should most care about from a selfish point of view — or if we have children, care about from a parental point of view.
I’m an economist by training, and understand entirely how the Industrial Revolution ultimately raised incomes across generations — and that while factory automation was financially ruinous for many individuals, it didn’t result in persistent unemployment.
But despite understanding all of that, I am sceptical there will be paying jobs for children being born today, or if not them, then certainly paying jobs for their children.
And we’re just flying by the seat of our pants here, and haven’t really figured out a plan ahead of time about what we’re going to do as this technology just completely upends existing social relations and economic systems.
While I’d describe the overall situation that humanity finds itself in to be pretty terrifying, the fact that all kinds of different people are waking up to the risks here gives me hope that we can coordinate to prevent the worst. I’ll have more to say to expand on all that in future episodes.
But for now, I bring you Luisa Rodriguez and Tom Davidson.
The interview begins [00:04:53]
Luisa Rodriguez: Today I’m speaking with Tom Davidson. Tom’s a senior research analyst at Open Philanthropy, where his main focus is on when we might get transformative AI. Before joining Open Philanthropy, Tom taught science through Teach First at a comprehensive school in East London, and then was a data scientist for an education technology startup. Before all of that, Tom studied physics and philosophy at Oxford.
Thanks for coming on the podcast, Tom.
Tom Davidson: Thanks, Luisa. It’s a pleasure to be here.
Luisa Rodriguez: I hope to talk about how fast we might go from ‘kind of OK’ AI to AI that can do everything humans can do, plus how that will affect the economy and the world. But first: How worried are you personally about the risks from AI?
Tom Davidson: About a year ago, I sat down and spent a morning trying to figure out what my percentages of AI by a certain time are, and my percentage that there’s existential catastrophe from AI. I was focusing on the possibility that AI disempowers humanity and just takes over control of society and the economy, and then what happens in the future. A year ago, I landed at a number that was a bit above 10% for the probability that AI takes over by 2070.
Luisa Rodriguez: Right, OK. That’s already pretty high.
Tom Davidson: Yeah, that is already very high — and much too high. I think if I redid the exercise today, I’d be close to 20%. Compared to then, I think it’s just more likely that we develop AI that’s capable of doing that by 2070. I also think that it’s just pretty likely to happen in the next 20 years — which then makes the chance that it goes badly higher, because we have less time to prepare. Both those things mean I’d probably be at about 20% if I redid the exercise today.
Luisa Rodriguez: We’ll talk more about a couple of those things. But first, I’m curious if there’s a type of scenario you have in mind when you’re thinking about that 10–20% that makes you especially worried.
Tom Davidson: Yeah, I think the main scenario I have is something like, in the next 10 to 15 years, possibly sooner, we train an AI that is able to massively enhance the productivity of AI R&D workers — so people who are currently working to make AI better — and maybe it makes them, let’s say, five times as productive.
Luisa Rodriguez: So something like a large language model helps people working on AI R&D in particular to like code much faster or develop better algorithms or something, and it means they can work about five times faster?
Tom Davidson: Exactly. Then I think it won’t be long after that — because AI is then going to be improving more quickly — before AI is able to do everything that the current employees of DeepMind and OpenAI are currently doing for their jobs. At least the ones that can work remotely and do their work from a computer.
Luisa Rodriguez: Are you kind of distinguishing between the ones that work with physical robots, or the ones that work with, I don’t know, the mail room at DeepMind?
Tom Davidson: Yeah. I don’t know if this is true, but if there’s someone at DeepMind who is physically stacking the computer chips into the data centre, then that’s a type of physical work which I don’t think would necessarily follow immediately on. But I think most of the work is not like that.
Luisa Rodriguez: Right.
Tom Davidson: I think AI, especially most of the work for advancing AI, is stuff that you can do on your laptop. So first we get the 5x productivity gain. A little time later we get AI that can do all of the work that people at OpenAI and DeepMind can do. I really think the gap between those is not going to be very big; we’ll probably discuss a bit more about that later.
At that point, I think AI is going to be improving at a blistering pace, absent a very specific effort to slow down, which I really hope we make. But absent that effort, and absent coordinating to make sure that everyone is slowing down, I think like 1,000x improvement in the AIs’ capabilities in a year is a natural, kind of conservative default.
Luisa Rodriguez: Oh my god. Wow.
Tom Davidson: So in this scenario, it wouldn’t be long before the AI is just far outstripping the cognitive intelligence abilities of the smartest humans, and indeed even the smartest massive teams of humans working together. When you kind of crunch the numbers on how many AIs there are likely to be around this time, there’s going to be hundreds of millions — and probably many billions of human-worker equivalents, just in AI cognitive ability.
Luisa Rodriguez: Is that because there are different AI systems? Or because you’ve got that much brainpower being deployed through AI, even if it’s only like five AI systems or something?
Tom Davidson: It’s like running lots of copies of maybe a few AI systems. For example, I haven’t crunched the numbers on this, but I would guess that if you took the kind of computer chips that they use to train GPT-4, and then you asked, “How many copies of GPT-4 could we run using these computer chips?” — I guess that the answer is maybe a million. It might be a bit lower, actually — I’m not sure for GPT-4 — but I think by the time that we train the kind of AI that can fully replace all the human workers at an AI lab, I think that number is going to be more extreme. So I think that by the time we can train that kind of AI, you’ll be able to just immediately use the compute that you use for training to run 100 million of those systems.
Luisa Rodriguez: Right. That’s insane. And the reason for that is because it just takes many times more compute to train an AI system than it does to then run them?
Tom Davidson: Yeah, that’s exactly right. One way to think about it is that the AIs are trained on millions of days of experience so that they get to be as good as they are. And that if you’ve managed to train them for that much, then it’s kind of obvious that you can run millions.
Luisa Rodriguez: Got it. OK, so then we’ve got potentially millions of human equivalents of copies of AI systems running and doing the kind of work that humans could do at the human level or better. And then what?
Tom Davidson: So then, I think it’s likely that it won’t be long until we’re talking about billions of AI systems. It’s run-of-the-mill to see 3x efficiency improvements in AI at various different levels of the software stacks. You could get a 3x efficiency improvement in how effective the algorithm is, or a 3x efficiency improvement in how well the software runs on the hardware. And there are these various layers of the software stack you can make improvements on. I think it’s probably at that point — once you have hundreds of millions of AIs looking for these types of improvements — that you’re probably going to get very, very quick further improvements in AI cognitive ability.
Again, maybe we coordinate to go very slowly — but this is absent that targeted coordination.
Luisa Rodriguez: Maybe the default.
Tom Davidson: Yeah, maybe the default, scarily. And at that point, I think that if the AIs are misaligned — if they have goals that are different to what humans want them to do — and if those goals imply that it would be useful for them to get power so they could achieve those goals better, then I don’t think it’s going to be very hard for them to do that. Because it’s like if we had a billion really smart humans who want to take power, probably they’ll find some kind of way to do it: maybe they invent new technology; maybe they convince some high-up officials to give them control of the military. I’m not sure exactly how they’ll do it, but at that point, I think it’s kind of too late for us to be preventing AI takeover.
Luisa Rodriguez: OK. “AI takeover” — do you mind spelling that out?
Tom Davidson: Yeah. I think the thing that matters for AI takeover is that AI systems collectively end up in control of what happens in the future, and that it’s their goals and decisions that dictate the future path, and that it’s no longer sensitive to what humans want or are trying to achieve with the future.
Ultimately, I think it does have to come down to physical force, most likely. I mean, you could imagine a scenario where the AIs just convince humans [to do what they want], and that’s how they take over. More likely they end up having control of the hard military equipment, and that’s what allows them to establish their power and disempower humanity.
How we might go from GPT-4 to disaster [00:13:50]
Luisa Rodriguez: So a thing that I have to admit still confuses me is how we go from things like GPT-4 — which sometimes gets super confused and says silly things in a way that’s like, “Oh, you clearly misunderstood what I was asking for” — it’s really hard for me to understand the path from that kind of confusion, to a misalignment that’s just incredibly diverging from human values, so that the AI systems want to disempower humans.
One article I read, that just came out recently on Vox, was actually making the case that companies creating AI should coordinate to slow down. And it was walking through the case for why we might expect AI to be misaligned, and the example they gave just still confuses me. The example is something like: Let’s say you’ve got a super smart AI system. We’ve programmed it to solve impossibly difficult problems — like calculating the number of atoms in the universe, for example. The AI system might realise that it could do a better job if it gained access to all of the computers on Earth, so it releases a weapon of mass destruction to wipe out all humans — for example, an engineered virus that kills everyone but leaves infrastructure intact. Now it’s free to use all the computer power, and that’s the best way it’s able to achieve its goal.
I think I feel silly. I feel dumb. I feel like I’m missing something. How will it go from “I want to solve this problem for humans” to “I’m going to kill them all to take their resources so that I can solve the problem”? Why have we not ruled that kind of extreme behaviour out?
Tom Davidson: Great question. Maybe we can try and think about this system which is trying to solve these math problems. Maybe the first version of the AI, you just say, “We want you to solve the problem using one of these four techniques.” And that system is OK, but then someone comes along and realises that if you let the AI system do an internet search and plan its own line of attack on the problem, then it’s able to do a better job in solving even harder and harder problems. So you say, “OK, we’ll allow the AI to do that.”
Then over time, in order to improve performance, you give it more and more scope to kind of be creative in planning how it’s going to attack each different kind of problem. One thing that might happen internally, inside the AI’s own head, is that the AI may end up developing just an inherent desire to just get the answer to this math question as accurately as possible. That’s something which it always gets rewarded for when it’s being trained. Maybe [if we’re lucky] it could be thinking, “I actually want the humans to be happy with my answer.” But another thing it might end up thinking is, “You know what? What I really want is just to get the answer correct.” And the kind of feedback that we humans are giving that system doesn’t distinguish between those two possibilities.
So maybe we get unlucky, and maybe the thing that it wants is to just really get the answer correct. And maybe the way that the AI system is working internally is, it’s saying, “OK, that’s my goal. What plan can I use to achieve that goal?” It’s creatively going and looking for new approaches by googling information. Maybe one time it realises that if it hacked into another computing cluster, it could use those computations to help it solve the problem. And it does that, and no one realises — and then that reinforces the fact that it is now planning on such a broad scale to try and achieve this goal.
Maybe it’s much more powerful at a later time, and it realises that if it kills all humans, it could have access to all the supercomputers — and then that would help it get an even more accurate answer. Because the thing it cares about is not pleasing the humans — the thing it happened to care about internally was actually just getting an accurate answer — then that plan looks great by its own lights. So it goes and executes the plan.
Luisa Rodriguez: Right. That was really helpful, but I still feel confused about why it’s so hard to not give it some instructions that are just like, “Use whatever you need, but don’t hurt living things.”
Tom Davidson: I think we could definitely give it those instructions. The question is, inside its own mind, what is its goal at the end of the day? You could give it instructions “don’t hurt humans,” and it would read that, and it would understand that’s what you wanted. But if throughout its life it’s always been rewarded for getting an accurate answer to these math problems, it might just itself only care about getting accurate answers to the math problems. So it knows that the humans don’t want it to hurt other humans — but it also doesn’t care about that itself, because all it cares about is getting accurate answers to this problem. So sure, it knows that humans don’t want it to hurt other humans, and so it makes sure to not do that in an obvious way, because it anticipates that it might get shut down. But its knowledge of what humans want it to do doesn’t change what its own desire is internally.
[I should emphasise that this example is meant to illustrate the idea that an AI might know what humans want but not actually care; I don’t think this specific series of events is a likely way for things to play out. — Tom, after the interview]
Luisa Rodriguez: I suppose I understand why you couldn’t just give the system an instruction that didn’t also come with rewards. Is it impossible to give an AI system a reward for every problem it solves by not hurting anyone?
Tom Davidson: I think that would help somewhat. The problem here is that there are kind of two possibilities, and it’s going to be hard for us to give rewards that ensure that one of the possibilities happens and not the second possibility.
Here are the two possibilities: One possibility is the AI really doesn’t want to hurt humans, and it’s just going to take that into account when solving the math problem. That’s what we want to happen. The other possibility is that the AI only cares about solving the math problem and doesn’t care about humans at all, but it understands that humans don’t like it when it hurts them, and so it doesn’t hurt humans in any obvious way.
Luisa Rodriguez: Oh, right. This is a route to AI not caring about humans, but being kind of deceptive.
I guess maybe an analogy that really speaks to me is something like: If you were to punish a child for having ice cream before dinner, you might get them not to have ice cream before dinner, or you might create a thing where they have ice cream before dinner while hiding in the closet. It’s pretty complicated to teach a nuanced enough lesson to a child about ice cream and why they shouldn’t have it for dinner that doesn’t have any risk of the lying version. Is that kind of right?
Tom Davidson: Yeah, I think that’s a good analogy. I think with a child it might be somewhat easier, because you’re much more capable than them. So even if they ever did try and eat ice cream in secret, you’d have a good chance of catching them. I think the problem gets really hard when the AIs are much smarter than us, such that they could quite easily eat ice cream without us noticing. It’s really hard for us to give them rewards that stop them from doing that.
Luisa Rodriguez: Right. And something like we have a pretty good idea how children’s brains work: they work kind of like ours, but a bit simpler, and we have some idea of the ways they’re different — so we can make guesses about the types of motivations that will speak to them. Maybe it’s like, we know that our kids work similar to us in that they feel shame, and they’d feel shame if they were punished, and want to please us because that’s just pretty human. Maybe we have a better sense of how they’d respond to punishment.
Maybe AI systems are just so different to humans that we really have no idea — or at least we’ll have a less clear idea — of what other processes that they’re using, or things that they experience are like, and what kinds of behaviours those will push them toward.
Tom Davidson: Yeah, I think that does make it a lot harder.
Luisa Rodriguez: Cool. That’s super helpful. Is that basically the key scenario you’re worried about? That we train AI systems to achieve certain goals, but it’s hard to know what strategies they see as fair game. It’s hard to train them not to pursue harmful strategies. And then they eventually get super smart — maybe they are deceptive; maybe they’re just very convincing — and so they are able to get a bunch of power and really disempower humans. Is that what you see as the core risk?
Tom Davidson: Yeah, that’s right. I would emphasise that in the scenario as I described it, AI is improving really, really rapidly as it approaches and then goes through the human range. So when we’re talking about this example with the AI that’s trying to solve maths problems, maybe we’re thinking we’ll have a few years with it, trying out this kind of strategy, and then we notice it’s doing a little bit of hacking into computer resources, so we tamp down on that. But if this whole thing plays out over just one year — for example, we go from notably below human to superhuman systems — I think it makes the risks a lot more intense.
Luisa Rodriguez: Yeah, that makes sense to me.
Explosive economic growth [00:24:15]
Luisa Rodriguez: To move us on to your research then, some of the work you’ve done that most blew my mind was actually on what happens when we’re able to basically build AI that does roughly what we intend it to do. I think I naively would have guessed something like, “The world carries on as normal, but we use GPT-8 a lot in our jobs.” But you’ve looked into the hypothesis that not only will things not stay the same, they actually might change very, very, very quickly if AGI is so good that it causes explosive economic growth — which, in this case, you’re defining as the world economy growing something like 10 times faster than it has for the last century.
So to start, can you help me understand intuitively what it would mean for the economy to grow 10 times faster?
Tom Davidson: Sure. One way to think about this is to think about all the technological changes that have happened over the last 50 years.
Luisa Rodriguez: OK, yeah. That feels like a lot.
Tom Davidson: Fifty years ago, it was 1970. We had very basic digital computers around, but they weren’t being widely used; they weren’t very good. I don’t think the Internet was around. There’s loads of other improvements — in manufacturing, and in agricultural techniques…
Luisa Rodriguez: Medical care…
Tom Davidson: Exactly. Massive improvements across the board in the last 50 years. But probably the most striking is IT.
Luisa Rodriguez: Yeah, sounds right.
Tom Davidson: So what explosive growth would look like is that all those changes, rather than happening over the course of 50 years, they happened over the course of five years.
Luisa Rodriguez: We’re going to get the internet in five years, plus a bunch of other improvements.
Tom Davidson: Exactly. Rather than it taking 50 years to go from these really rubbish, slow computers that you could buy in 1970 to the awesome MacBooks of today, that just happens over five years. Similarly, rather than taking 50 years for you to go from rubbish phones to smartphones of today, that also have the internet and all these specialised apps, that again just happens over five years. You see the introduction of a new technology — and then very, very quickly you see it being refined into a super useful, human-friendly product.
Luisa Rodriguez: Wow. I mean, on the one hand, that sounds kind of incredible and exciting. On the other hand, it just feels like a super strange world to be getting so many new technologies every few years. Can you explain the idea behind why AGI might even make that possible?
Tom Davidson: So here’s the most basic version of the argument. You can make it more complicated to address various objections, but I think this version captures the core idea.
Today there are maybe tens of millions of people whose job it is to discover new and better technologies, working in science and research and development. They’re able to make a certain amount of progress each year. It’s their work that helps us get better computers and phones, and discover better types of solar panels, and drives all these improvements that we’re seeing.
But like we’ve been talking about, shortly after AGI, there’s going to be billions of top human researcher equivalents — in terms of a scientific workforce from AI. And if you imagine that workforce — or half of that workforce, or just 10% of it — working on trying to advance technology and come up with new ideas, then you have now 10 or 100 times the effort that’s going into that activity. And these AIs are also able to think maybe 10 or 100 times as quickly as humans can think.
And you’re able to take the very best AI researchers and copy them. So if you think that scientific progress is overwhelmingly driven by a smaller number of really brilliant people with brilliant ideas, then we just need one of them and we can copy them. They might be happy to just work much harder than humans work. It might be possible to focus them much more effectively on the most important types of R&D, whereas humans maybe are more inclined to follow their interests, even when it’s not the most useful thing to be researching.
All of those things together just mean that we’ll be generating 100 times as many new good ideas and innovations each year compared with today, and then that would drive the development of technologies to be at least 10 times faster than today.
Luisa Rodriguez: Right. How likely do you think this kind of growth is? Is it the default once we get AGI?
Tom Davidson: I think it is a default. You could give objections to the argument I gave, but I think it’s mostly possible to answer those objections. So you could say that discovering new technologies isn’t just about thinking and coming up with new ideas; you also need to do experiments. I think you can answer that objection by saying that’s right, we will need to do experiments.
Luisa Rodriguez: And that’s like testing a drug on humans, and maybe it takes five years or something to really check that it’s safe and effective?
Tom Davidson: Right. Or you’ve designed a new solar panel, and you want to test its performance in a variety of conditions. Or you’re running some experiments to see what happens when you combine these two chemicals together, because you’re not able to predict it in advance.
But if you have a billion AIs trying to push forward R&D, and they’re bottlenecked on needing to do these experiments, then they’ll be putting in a huge amount of effort to make these experiments happen as efficiently as possible. Whereas today we might be using the lab for 50% of the time we could be using it, and we might be just doing a whole bunch of experiments and then analysing it afterwards and learning a little bit from each experiment, but also not trying to cram as much into each experiment as is humanly possible.
If these AIs are limited on experiments, then they’re going to be spending months and months just meticulously planning the micro details of every single experiment, so that you can get as much information as possible out of each one. Kind of fully coalescing their theoretical understanding and all the current data and implications and saying, “Here are the key uncertainties that we need to address with these scarce experiments.” They’ll give the humans conducting the experiments really detailed and precise instructions, and set things up so the experiments are really unlikely to go wrong, and analyse the resultant data from 100 different angles to learn as much as you can from them.
I think that will go a long way to getting over the experimental bottleneck.
Luisa Rodriguez: I mean, even if you just think you use labs eight hours a day, but you could use them 24 hours a day, and then there are probably hundreds of other efficiencies like that that will all just add up to get to many times more efficient stuff.
Tom Davidson: Right. And the AIs can direct humans on what to do. You could be paying very high wages to have unskilled human workers work through the night to run these experiments directed by AIs, telling them exactly what to do when. You’ll be able to have those labs working around the clock if that’s what’s wanted.
In the longer run, robotics is already very good, and I don’t think it’s going to take too long once we have a billion AI researchers to design robots that are able to do the physical tasks that humans do. Doesn’t seem like we’re that far off at the moment. If eventually we just need more humans to build more labs or to run these experiments, I think it will be possible to have robots doing that work, and have the AIs directing it — again, meticulously planning what each robot is doing with its time, so we’re getting the very most out of each robot.
The example you raised about human experiments is a really good one, because that seems like it’s going to be particularly hard to speed up. There are still a few things that I can already think of that could happen there. Any psychology experiments that you’re wanting to do, or knowing how humans will react to a new technology or to a new scenario, then just studying all of the human data on the internet and doing in-depth interviews with humans could give AIs a really good understanding of how human psychology works. In the limit, they could scan a human’s brain and upload them to be a virtual digital person, if that human was willing to do it. It could then do experiments with them in a simulation much more quickly.
Are there any limits for AI scientists? [00:33:17]
Luisa Rodriguez: Really fast. I mean, that’s getting pretty weird and feeling very sci-fi, but that’s part of what we’re talking about. We’re talking about how we’ve got millions or billions of copies of AI systems that are as smart or smarter than humans, basically using all of their brainpower to innovate. It’s bizarre, but if you apply all of that brainpower, you’re going to get super, super fast improvements to technology.
Is this bottlenecked at all by ideas getting harder to find? Are there going to be limits that a human would hit upon, or humans have, and AI systems will also hit upon those limits? Or do we expect them not to because we’re talking about superhuman intelligence?
Tom Davidson: Ideas are getting hard to find. For me, the best zoomed-out example is just that our scientific workforce has been growing at 4% or 5% every year over the last 80 years. That’s a massive increase in that scientific workforce over 80 years. Many, many doublings. But actually, the pace of technological progress, if anything, has been slowing down somewhat over the last 80 years.
So on a high level, the explanation is that we’re using way more effort than we used to be, but the ideas are harder to find, so we’re actually slowing down a little bit, in terms of the pace of our progress.
Maybe the best illustration of that dynamic is in physics, where 100 years ago or so, you could have Albert Einstein — in his spare time as a patent clerk — come up with multiple very significant breakthroughs, and then almost single-handedly, or with a few collaborators, develop general relativity over the few years that followed, which is just a major breakthrough in our understanding of the universe. Whereas today, you have maybe millions of physicists with these huge machines at CERN that are making, to be honest, I would say, pretty incremental progress in advancing the state of knowledge in physics.
In terms of how it applies to AI, the first thing to say is that, even if that dynamic exists and it’s very strong, we would still expect a very significant, if temporary, increase in the rate of technological progress. Let’s say ideas are getting harder to find, but then suddenly, in 10 years, we’ve got a billion AIs working on it, rather than the 10 million humans. Well, even if our ideas are getting hard to find, then at least temporarily, there’ll be much faster technological progress. Then we’ll pluck even more of the low-hanging fruit, and eventually even these AIs get stuck.
Luisa Rodriguez: Right. Maybe eventually we’d still stagnate, but it’d be pretty crazy if you added millions of brains to the workforce and didn’t get a bunch more technological progress.
Tom Davidson: I mean, specifically if you made the workforce 100 times as big, then yeah, especially with the other advantages I discussed about running 10 times as fast and being really focused on the most important tasks, I think it would be really surprising if you don’t get at least a temporary increase.
In fact, I don’t think it would be temporary, because one of the things that AIs can work on is actually building more computer chips to run AIs on, improving the hardware designs for those computer chips, improving the algorithms that AIs run on, improving the designs of robots…
Luisa Rodriguez: Right. Just making themselves better scientists.
Tom Davidson: Exactly. We’ve been discussing how already the pace of progress — in terms of the algorithms and the hardware — is pretty fast. And I’ve already said that I expect it to be much faster once we have AGI. So really, this isn’t a constant-sized AI and robotics workforce we’re talking about here. If we choose to do so, then we [potentially] could have the size of that workforce doubling every year.
That means you can overcome this problem of ideas getting harder to find, because you’re not dealing with a constant or slowly growing workforce; you’re dealing with a workforce which is itself rapidly increasing in size. Even if ideas are getting harder to find, you’ve got a bigger and bigger workforce to find them.
You can actually model out this dynamic. You can take the best models we have where ideas are getting hard to find — you can say that ideas are getting hard to find on the one hand, but on the other hand, AIs are able to design better AIs and do all the improvements that I talked about — designing better robots, et cetera. How does that dynamic play out? It turns out that at least under these models, even if ideas are getting harder to find at a very steep rate, then you still are going to get the AIs and robots winning that race.
Luisa Rodriguez: That’s really wild. I guess those AI scientists might hit some limits. Do you have any ideas for what those might end up being?
Tom Davidson: It’s a great question. I think we are going to hit limits at some point. Eventually we won’t be able to design better technologies. Eventually we’ll have the best algorithms we can get for making AIs. I do think there are reasons to think the limits could be quite high. One interesting data point is that small animals are able to double their population size in just a few months. Even smaller animals like insects can double their population size in just days or weeks. That shows that it is physically possible to have a certain kind of biological robot that doubles its own number in the scale of weeks or months.
With all of this massive scientific effort that we’ve been describing, it seems possible that we’ll be able to design robots of our own that are able to double their own number, build replacement robots in a similar time frame. Currently, you can try and estimate: If [it] we used a factory to try and build another copy of itself, how long would that take? I haven’t seen a good analysis of this, but when I’ve spoken to people, they’ve guessed it on the order of months. So that also kind of supports this vague idea that it might be possible to get these robots that are able to build extra copies of themselves and double their own number in just a number of months.
What that all suggests is that we could have a kind of robot workforce which is growing at a really high rate.
Luisa Rodriguez: And by “robot,” I’m picturing physical bodies. Do you basically just mean an AI system that’s like working on science? Or do you think physical bodies end up being important because we’ve got to start automating some of these physical tasks in addition to the cognitive tasks?
Tom Davidson: I’m thinking of including the physical tasks when I’m talking about the robots. I mean, if these robots each weighed 50 kilogrammes, and we were able to produce as many robots in a year such that they weighed as much as all the cars that we currently produce in a year, then we’d be producing around a billion robots each year. So already the manufacturing capacity seems like it is theoretically there to produce a huge number of robots. And that’s before taking into account —
Luisa Rodriguez: That it’s lucrative, and so we want to create more. Yeah, unreal.
Tom Davidson: So I do think that this dynamic leaves us in a pretty crazy world, where the size of the AI and robotics workforce is growing potentially very, very quickly. And as a result, it’s hard for me to imagine technology really stalling out before we hit real limits, fundamental limits to how good technology can get.
There will be such limits; technology can’t improve forever to infinity. Ultimately, you’ve come up with the very best ways of arranging the molecules to get the desired technological behaviour.
Luisa Rodriguez: Right. Do we ever hit limits on just physical stuff? The stuff we make the robots out of?
Tom Davidson: I think we will. So I already said that car statistic, suggesting we can get pretty far in terms of the massive robots we could produce just with the manufacturing capabilities we already have. The Earth is massive; there are mountains with all these kinds of materials in them all over the place. If we run out of a particular material which is currently useful for building robots, then these billions of AIs we have will be working hard to find ways of doing without that scarce material. That’s been a common pattern in technological development: that you find ways to switch out of things that are scarce.
So it’s hard to rule out that there’s some material that we just absolutely need and we can’t do without, and that that bottlenecks things once we get to 100 billion robots or something. But it also just seems more likely to me that, given the abundance of materials that are in the Earth… It’s a big place. There’s lots of different stuff there. It’s not like we’ve mined everything there is to mine, or even close to it. We’ll be using all the best methods for recycling, and using things as efficiently as possible. It doesn’t seem to me like those kinds of bottlenecks are going to kick in particularly early.
Luisa Rodriguez: It’s not out of the question that we’d have the technology to explore resources in space. I’m adding more sci-fi here, but if we’re doubling technological progress every… I mean, it sounds like you’re talking about months at some point?
Tom Davidson: Yeah, I think months is plausible.
Luisa Rodriguez: OK, months is plausible. That might mean we aren’t limited by earthly limitations.
Tom Davidson: That’s right. There is an interesting dynamic there, where if we are doubling the number of robots really quickly, and we’re improving technology really quickly, then we’re not that interested in doing an activity which takes 10 years to bear fruit — because we’re used to our investments paying off with doublings every year. We’re like, “We could go to the Moon and get materials, but man, that would take so long.” If we just invest in everything we can find on Earth, we can much more quickly increase the…
Luisa Rodriguez: Just use it more efficiently.
Tom Davidson: Yeah. So that time delay becomes more significant when you’re already able to grow so fast. I imagine going to the Moon and the space stuff happens when we’re really kind of struggling to find ways to make use of the Earth’s resources.
This seems really crazy [00:44:16]
Luisa Rodriguez: It sounds like we’re talking about something like AI systems replacing humans in a bunch of sectors, during our lifetimes, and then our lives really change quite radically and very, very, very quickly.
I just find that super weird. I think my brain is like, “No, I don’t believe you. That’s too weird. I just can’t imagine that happening.” If we’re saying this is happening in the early 2030’s, I’ll be in my late 30’s, and all of a sudden the world would be radically changing every year and I won’t be working.
Tom Davidson: I agree it seems really crazy, and I think it’s very natural and understandable to just not believe it when you hear the arguments. That would have been my initial reaction.
In terms of why I do now believe it, there’s probably a few things which have changed. Probably I’ve just sat with these arguments for a few years, and I just do believe it. I have discussions with people on either side of the debate, and I just find that people on one side have thought it through much more.
I think what’s at the heart of it for me is that the human brain is a physical system. There’s nothing magical about it. It isn’t surprising that we develop machines that can do what the human brain can do at some point in the process of technological discovery. To be honest, that happening in the next couple of decades is when you might expect it to happen, naively. We’ve had computers for 70-odd years. It’s been a decade since we started pouring loads and loads of compute into training AI systems, and we’ve realised that that approach works really, really well. If you say, “When do you think humans might develop machines that can do what the human brain can do?” you kind of think it might be in the next few decades.
I think if you just sit with that fact — that there are going to be machines that can do what the human brain can do; and you’re going to be able to make those machines much more efficient at it; and you’re going to be able to make even better versions of those machines, 10 times better versions; and you’re going to be able to run them day and night; and you’re going to be able to build more — when you sit with all that, I do think it gets pretty hard to imagine a future that isn’t very crazy.
Luisa Rodriguez: Yeah.
Tom Davidson: Another perspective is just zooming out even further, and just looking at the whole arc of human history. If you’d have asked hunter-gatherers — who only knew the 50 people in their group, and who had been hunting using techniques and tools that, as far as they knew, had been passed down for eternity, generation to generation, doing their rituals — if you’d have told them that in a few thousand years, there were going to be huge empires building the Egyptian pyramids, and massive armies, and the ability to go to a market and give people pieces of metal in exchange for all kinds of goods, it would have seemed totally crazy.
And if you’d have told those people in those markets that there’s going to be a future world where every 10 years major technological progress is going to be coming along, and we’re going to be discovering drugs that can solve all kinds of diseases, and you’re going to be able to get inside a box and land on the other side of the Earth — again, they would have just thought you were crazy.
While it seems that we understand what’s happening, and that progress is pretty steady, that has only been true for the last 200 years — and zooming out, it’s actually the norm throughout the longer run of history for things to go in a totally surprising and unpredictable direction, or a direction that would have seemed totally bizarre and unpredictable to people naively at that time.
Luisa Rodriguez: I feel like I was introduced to it when I read What We Owe the Future, Will MacAskill’s book: there’s this thing called the end-of-history fallacy, where it really feels like we’re living at the end, like we’re done changing. We’re going to maybe find some new medical devices or something, but basically we’ve done all of the weird shifting that we’re going to do. And I can’t really justify that; it does seem like a fallacy. Presumably things are going to look super different in 50 years. And sometimes those changes have gone super fast in history, and sometimes they’ve gone super slowly — and we’ve got real reasons to think that we might be entering a period of really fast transition.
Tom Davidson: Yeah. If anything, I’d say the norm is for the new period to involve much faster changes than the old period. Hunter-gathering went on for tens of thousands, if not hundreds of thousands, of years. We started doing agriculture, and formed into big societies, and did things like the pyramids. Then people often think of the next phase transition as being the start of the Industrial Revolution, and the beginning of concerted efforts towards making scientific progress.
After we did agriculture, new technologies and changes were happening on the scale of maybe 1,000 years or maybe a few hundred years, which is much faster than in the hunter-gatherer times. And then today, after the Industrial Revolution, we’re seeing really big changes to society every 50 years. We’ve already seen historically that those phase transitions have led to things being faster. That, I think, is the default expectation for what a new transition would lead to.
Luisa Rodriguez: Right. It just feels weird to us because we’re pre-transition. Plausibly, whoever’s living 50 years from now will just be like, “Obviously that was coming. Those weird people, living in 2023, thinking that they’d made all the technological progress they were ever going to make.”
How is this going to go for humanity? [00:50:49]
Luisa Rodriguez: I’m struggling to imagine what it would look like, I guess. Because it’s plausibly going to be us. Like, what’s in store for us? Is it going to be good?
Tom Davidson: I think it could be really good. It could be really, really bad. It could be really good if we align AI so they’re always trying to help us, and help humanity do as best as it can by humanity’s own lights — and the benefits from AI and these new technologies are used to solve the world’s most pressing problems, and used to lift people out of poverty and give people the lives that they hope for themselves and for their children, and used to solve the problems of climate change or poverty or disease. I think it could go really, really well.
Luisa Rodriguez: And so the best case, where AI is really trying to help us, it’s still kind of unimaginable to me as a world. Maybe it’s just that I’m so biassed by the status quo, where I need to work — I need to work to live; I need to work to help solve problems. In this best case, is there unemployment? Is there unemployment for everyone? Is it a slow transition or a fast one? Does it make inequality better because no one needs to work and we all have enough things? Or does it make it worse because some people have to work? Do we have predictions about that that are worth making?
Tom Davidson: I think the default is that inequality would become greater because all of the wealth and useful work is coming from these AI systems, which I think by default will be controlled by a small number of people and companies.
In the very best case, then, you hope that wealth is equally distributed, or much more equally distributed than it would be by default. And it is true, I think, that there’s going to be so much progress made, if AI is aligned, that it will be very cheap to give everyone in the world the standards of living that are enjoyed by the very richest people today in terms of material comforts and health — and actually much better on those fronts, I think, after all this technological progress.
So I think if we can get the AIs to be really trying to help us, then even if we mess up a bit on things like the distribution of benefits, then I think things will still look pretty good, because there’s just so much to go around. If there’s a kind of universal basic income, you could just use 1% of the output that’s produced in a year to give everyone all the material and technological things they need to meet all of their material needs.
Luisa Rodriguez: Right.
Tom Davidson: In terms of work, I think it will no longer be the case that you can produce a higher-quality service or product than what an AI could do or a robot could do. One thought is that there will be some humans, maybe me and you, who just value human contact and hang out with other actual real humans. That could provide a kind of work for those who want it.
Luisa Rodriguez: A role for humans.
Tom Davidson: Another possibility is that we rethink the nature of work. We do work to help each other. Even though we know that AIs could do the work just as well, we’re still happy to do that because it gives us a sense of meaning. Or we kind of do creative things instead, like creative writing and drawing. Even though we know that AIs could do that better than us, it’s still enough for us to have a sense of purpose. I mean, people still play chess today, and still really enjoy it and get purpose from it, even though they know that they’re never going to match the best AIs.
Luisa Rodriguez: Right. Yeah, I guess I can imagine lots of people hearing about this future and being like, “No, I like the world the way it is. I like that humans get to make choices for ourselves as a society. I don’t want AIs making it for us. I like that I have to work, get to work.” I don’t know. I can imagine people being like, “No, I don’t want AIs to be making the art. I want humans to be making the art.”
Is there some chance that there’s like a movement that’s anti-AI growth, that stops this from happening, even though it’s theoretically possible?
Tom Davidson: That’s a great question. I do think it would be good for us to take this transition more slowly than is theoretically possible, that might happen by default if we don’t make specific efforts to go slowly. I think if people do try and delay or stop this, it could be a good thing, because I don’t think we’re prepared for that new world. I think it’s going to be very hard to permanently prevent this transition from happening.
Luisa Rodriguez: How come?
Tom Davidson: One way to think about it is that there is some kind of upfront starting cost to get this transition going. Let’s really simplify it: Today, if you spent $1 trillion, you’d be able to train AGI, and you’d have enough money left over to buy some manufacturing equipment for making robots. Then you could have your AIs do research into better robots and making better AIs, and that whole process could lead to you having even more AIs and even more robots. You could then grow your population of AIs and robots. And just with that trillion-dollar initial investment, you could end up with this massive AI-and-robot population, which is then able to just start doing the scientific work needed to significantly accelerate technological progress.
The thing that’s difficult is that that upfront cost will be falling over time. AI algorithms are improving; computer chips are improving. So the cost to training AGI, and then using it to build robots and to build more AIs, that ultimately results in making technological progress so you can sell more useful things to society that people want — that cost is going to be falling. Let’s say it was $1 trillion today. In the future, it’s going to be $100 billion and then it’s going to be $10 billion.
There’s going to be a lot of incentive to do this, because it’s going to grant whoever does it a lot of power. They’ll have all these AI workers that they can use to do whatever they want them to do, if they manage to solve the alignment problem. If they use it for designing new technologies, those new technologies could grant additional military power, or they could grant things that people all around the world desperately want — like curing illnesses, like preventing climate change, like understanding and solving mental health problems, like life extension.
Luisa Rodriguez: It’s not just economic incentives. It’s not just to get rich. It’s that all sorts of motivations are benefited from paying this cost to get this hugely productive AI scientist workforce.
Tom Davidson: Yeah. Whatever you want, you can probably get it much more effectively if you have a billion AIs and robots designing technology to help you get it.
And I think we can delay it. We can say, “OK, we’re going to be really cautious. Only a few people are allowed to train these systems.” And we try and convince the other countries to go slowly as well. But to think that even 100 years after it’s first been possible to train AI for $1 trillion, that still no one has done it and no one is using it to make scientific progress, even though the cost is now like $10 million, it’s really hard to imagine that we prevent anyone from doing this.
Luisa Rodriguez: All the different actors from doing it as it gets cheaper.
Tom Davidson: I think sometimes people, when they’re thinking about it, imagine that in order to get this 10x or 100x faster technological progress, we’d have to be making a real effort and really being super efficient and driven about it. But I think that’s not the right way to think about it. It’s more like, by default, all you need to do is ask your AIs and robots, “Please do these tasks for me, and if you need to make tech progress along the way, do it.” They will suggest the plans that involve making tech progress; they will get in contact with the labs and organise for the experiments to happen — you won’t have to do anything.
So I don’t think it’s going to require some kind of concerted pro-growth enthusiasts to really push for this. It’s more like you want stuff, the AIs are going to try and do the stuff you want, and whenever they make tech progress it’s going to go really well and it’s going to really help you solve your problems, and you’re going to just want to do more of it.
Luisa Rodriguez: Just enough time will pass, enough actors will think on it and decide to do it at some point.
Why AI won’t go the way of nuclear power [01:00:13]
Luisa Rodriguez: So I guess I buy that the incentives are there for eventually an actor to want to build this AI scientist workforce. It still seems like there have been enormously lucrative and beneficial technologies that we haven’t pursued. One example that comes to mind is nuclear power — which could help loads with climate change, and would also be, again, super lucrative, and yet we basically haven’t done anything like what we could do with it. Could there be something similar? I mean, it’s kind of stigmatised, is one reason we haven’t, and it’s really expensive, in particular the upfront costs — which maybe ends up true of this AI world we’re talking about?
Tom Davidson: Yeah, it’s a great example. I don’t have a good understanding of what happened, but I think there were some big catastrophes with nuclear power, and then it became very stigmatised. And the regulatory requirements around it, the safety requirements, became very large — much larger, really, than was reasonable, given that fossil fuel energy has damaging health consequences as well through air pollution. As a result, it just became kind of a mixture of stigma and the additional cost from all that regulation just prevented it from being rolled out.
But I do think there are a fair few very significant disanalogies between that case and the case of AI.
Luisa Rodriguez: OK, yeah. What are they?
Tom Davidson: One thing is that there were other sources of energy that were available, and so it wasn’t too costly to be like, “We’re not going to use nuclear; we’re going to use fossil fuels instead.” Even the green, climate-change-concerned people could think about developing solar panels and renewable energies. In the AI case, there is going to be no alternative: there’s going to be no alternative technology which can solve all illness, and which can grant your nation massive national security and military power, and that can solve climate change. This is going to be the only option. So that’s one disanalogy.
Luisa Rodriguez: OK, that makes sense.
Tom Davidson: Another disanalogy is the cost factor. With nuclear power, it’s become more expensive over time due to regulations, and that’s been a big factor in it not being pursued. But the specifics around these cost curves with compute and these algorithmic progress patterns suggest that the upfront cost of training AGI is going to be falling really pretty quickly over time. Even if initially, you put in loads of regulations which make it very expensive, it’s really not going to be long until it’s 10x cheaper. So permanently preventing it, when it’s becoming cheaper and cheaper at such a high rate, is going to be really difficult.
Third is just talking about the size of the gains from this technology compared to nuclear power. France adopted nuclear power and it was somewhat beneficial — it now gets a lot of its power from nuclear energy, and there’s no climate change impacts, and that’s great — but it’s not as if France is visibly and indisputably just doing amazingly well as a country because it’s got this nuclear power. It’s kind of a modest addition. Maybe it makes it look a little bit better.
By contrast, if one country is progressing technology at the normal rate, and then another country comes along and just starts using these AIs and robots a little bit, you’re going to see very significant differences in how its overall technology and prosperity and military power is progressing. You’re going to see that as countries dial up how much they’re allowing AIs to do this work, that there are then bigger and bigger differences there. Ultimately, advancing technology at our pace versus advancing technology 30 times faster, over the course of just a few years, becomes a massive difference in the sophistication of your country’s technology and ability to solve all kinds of social and political problems.
A last point on this difference is that the US did, in fact, invest a lot of money in nukes shortly after the development of fission power. When it came to a matter of national power, they were very happy to invest in the technology, despite the risks — which were clearly very high.
Luisa Rodriguez: All of the same risks. Right.
Tom Davidson: Yes. The incentives were out of whack and we didn’t get nuclear fission power. But when it came to this kind of military technology for which there was no replacement, countries were very keen to do it, and they made it happen. AI driving significant technological improvements across the board is going to be a huge source of military power, so it’s really hard for me to imagine that no one ever uses it for that.
Can we definitely not come up with an international treaty? [01:05:24]
Luisa Rodriguez: You’ve totally preempted my next question, which was: Can we definitely not come up with an international treaty that’s about how the downside risks of this technology at this scale are possibly huge, because AI alignment is so hard, and so we’re all agreeing not to go forward with it? I guess we had lots of reasons to do that in the case of nuclear weapons and we didn’t. We have lots of reasons to do that in the case of biological weapons, and we suck at it. We do not live in a world free of biological weapons or nuclear weapons.
Tom Davidson: I do think we should try, and I do think we can slow things down and we can increase the requirements and the safety efforts that are required, maybe to make it 10 times as costly or 100 times as costly to develop this technology. That is one thing, and that’s a big ask, and I think we should try and do as much of it as we can.
Even if we can do that, it’s a whole different thing to talk about permanently choosing to never develop the technology — even after we’ve made maximal efforts into making it safe, even after all of the safety tests are saying it looks like it is safe, even when millions of people are dying every year from illnesses which we know could be prevented if we allowed the AIs to do research into treating it. I think this permanently not going forward with using AIs and robots to make that technological progress — like I said, when it’s becoming cheaper and other countries and other companies might want to do it — that does seem like it’s just very unrealistic.
Luisa Rodriguez: Just implausible.
Tom Davidson: And maybe not desirable either, to be honest. After a certain point, we should take it really cautiously.
Luisa Rodriguez: Right. After 200 years of research into AI alignment, even if we’re like, “Ooh, this seems weird and scary and might change the world as we know it,” at some point there are going to be incentives for some actors, countries, or companies to try to deploy this technology at scale to solve problems like poverty and illness, and maybe also to get military advantages, to solve problems like climate change. And those incentives might just be so strong that even if we take our time, we’ll probably eventually do it. Someone will.
And once that happens, progress will become so quick that we’re looking at economic growth at a pace that’s still kind of unfathomable to me — this kind of thing where progress we’ve seen over the last 100 years happens in the next 10, and actually just keeps getting faster and faster. Is that kind of the picture?
Tom Davidson: Yeah, that’s the picture.
Luisa Rodriguez: That’s really weird. It’s scary. I guess it’s also quite hopeful. If I let myself hope for that good world where we use it to solve problems, I feel nervously, really excited. I guess we’ve got some real challenges to overcome first.
Tom Davidson: Yeah.
How quickly we should expect AI to “take off” [01:08:41]
Luisa Rodriguez: OK, let’s move on to a related topic you’ve been researching more recently. So you’ve just written the draft of a report on AI takeoff speeds that has some pretty alarming results to me, given everything we’ve just talked about. And just to get on the same page about language, what exactly do you mean when you talk about AI takeoff speeds?
Tom Davidson: Roughly speaking, capabilities takeoff speed is the question of how quickly AI systems will improve as they approach and surpass human-level intelligence.
Luisa Rodriguez: So the capabilities are like their ability to do things like drive cars or program new programs?
Tom Davidson: Exactly. So a fast takeoff speed could be that in three months, AIs go from mouse-level intelligence to significantly more intelligent than the smartest human. Where with a slow takeoff speed, that could be happening over decades.
Luisa Rodriguez: Got it. We’re just slowly making progress. Next year it’s self-driving cars, and then it takes another 10 years to get to programs that can write other programs.
Tom Davidson: Yes.
Luisa Rodriguez: So you’ve written this report, and I won’t give away the results yet, but it gives some evidence about how fast we might expect AI takeoff speeds to be.
But before we get to that, before you wrote the report, what did you think was the most compelling evidence that AI takeoff speeds would be particularly fast?
Tom Davidson: I think the most convincing argument is related to how humanity’s capabilities were improving much more slowly 50,000 years ago than they are improving today, and the attempt to draw the analogy to what might happen with AI capabilities. If we think that a million years ago, humanity’s cognitive abilities collectively were maybe doubling every, let’s say, 100,000 years — I don’t know; that’s just something to represent their slow increase in brain size and capacities — the exact number doesn’t matter. Whether you want to say it’s 100,000 or 10,000 years, it’s a very slow doubling size in their abilities.
Luisa Rodriguez: Right. Basically because slowly they’re evolving to have slightly bigger brains. We’re like adding a bit of prefrontal cortex, and the population is getting a little bit bigger over time — but grew very, very slowly. So collectively, it takes thousands of years to double.
Tom Davidson: Yeah, exactly. Whereas today, our abilities are improving incredibly quickly as a society, and our population growth is much faster, our command of technology is [improving] much faster. In the last 200 years alone, we’ve doubled the economy many, many times and doubled our ability to understand and manipulate the world very many times.
And if you think that there will be some analogue of that transition as we approach AGI, then AI is already improving really quickly. I would say it’s kind of doubling its abilities in less than a year at the moment.
Tom Davidson: If that’s the slow initial pace, then the new pace would be blisteringly quick.
Luisa Rodriguez: Unimaginably quick.
Tom Davidson: Yeah, exactly. The way I would think about it is that a million years ago, humans weren’t able to do science and to discover technological improvements much at all. They didn’t have access to this additional feedback loop of improvement — where you discover new technology, pass it on to the next generation, then they start in a better place and can discover even more technology, and can now support a bigger population.
There’s this whole feedback loop which arguably we couldn’t access a million years ago. We improved our cognitive abilities as humans a little bit, and then suddenly we got over this threshold where we can access this technological progress feedback loop — which then speeds up and speeds up as we develop agriculture, and then we develop written language, and we develop maths. We’re now even better at discovering new technologies.
The thought is, in my mind, that maybe there’s something similar that happens with AIs — where today they’re not that smart, and so they’re not able to access a certain feedback loop. I’m not sure exactly what that feedback loop will be, but at some point they become smart enough that there’s this additional feedback loop that they can use to improve their capabilities — kind of like how humans use technology to improve our capabilities. One funny thing about this argument is that it’s not clear what that new feedback loop might be for AIs. That leaves me puzzled over where to go with this argument.
Luisa Rodriguez: Right. For humans, we discover the scientific method, and we experiment on things, and we build computers, and now we can run programs that help us do science. But for AI, we don’t even know what kinds of science they’ll discover that’s beyond ours.
Is there some question about whether there even are higher orders of science that we haven’t developed, but that AI systems might, to kind of increase their own feedback loop?
Tom Davidson: Yeah, I think that’s right. You can throw out ideas for what that might look like. Maybe it’s that AIs learn to work together in a team in a way that is way more efficient than humans have ever done.
Luisa Rodriguez: Yeah. Or maybe they run simulations or something to learn about economics or something in a way that we can barely understand, because we only see the economies run in these weird real-world scenarios.
Tom Davidson: For example.
Luisa Rodriguez: OK, yeah. Are there any limitations to that argument, or do you just kind of buy it?
Tom Davidson: I actually don’t put that much weight on this particular argument. The main reason is that evolution was not trying to make humanity as a whole as capable as possible, and it wasn’t trying to make humanity as a whole good at science. From that perspective, it’s not actually as surprising that humanity went from really sucking at science to being really good at science in a fairly short time frame.
So here’s an analogy: Before 2020, we hadn’t made many COVID vaccines — not because we couldn’t, but because we weren’t trying to. We were focused as a society on doing other things. Then around 2020, it became really useful for us to make lots of vaccines. Then, lo and behold, the number of vaccines went up very dramatically. Now, that doesn’t mean that our abilities in vaccine-making suddenly went up — it just means that we reprioritised, and reallocated the resources we already had, towards making vaccines.
I want to say that’s somewhat similar to the way in which our ancestors, a million years ago, weren’t that good at science — but evolution wasn’t trying to make us that good at science. It was mostly trying to make us hunt successfully, feed our families. Science was maybe a tiny bit useful back then, because it maybe allowed you to discover something within your own lifetime, but it really wasn’t very useful.
Then I think more recently, more like tens of thousands or 100,000 years ago, it did become more useful for humans to do science and to be flexible learners. So it’s not that surprising that at that point — say, 50,000 years ago, where it was more useful for humans to be good at science — evolution then reallocated those cognitive resources of humans to being good at science. That kind of reallocation by evolution — from just foraging, and then reallocating those cognitive resources to doing science — is human society reallocating its resources to make COVID vaccines.
Luisa Rodriguez: OK yeah, that’s really helpful. Something like a combination — maybe of language, maybe of group living, maybe of some things that I don’t understand — made it much more beneficial to be able to learn new things, and learn a range of things, not just the same things, over and over again. A couple of tweaks in the brain was enough to make the brain that we’d been using for a very specific set of tasks become useful for a much wider range of tasks. And that wasn’t really fundamentally altering the amount of brain we have, but how we use it. That capacity already existed, and like you said, was repurposed.
Tom Davidson: So here’s how you would relate it to AI takeoff: you’d say that in the case of evolution, evolution wasn’t initially trying to make humans good at science. No massive surprise that it’s able to quickly make some tweaks that make humans good at science late in the day. But with AIs, and with AI development, humans will be at every stage trying to make AIs as useful as possible for doing economic tasks, helping with science and research. So we wouldn’t expect there to be this kind of overhang, where the AI has these abilities it’s just not using, because we would expect humans to be trying to coax those abilities out at every step of the way.
Luisa Rodriguez: If you’re constantly trying to coax abilities out, most likely you’ll only find ways to do it incrementally. As opposed to if it happened to be the case that, I don’t know, we found a billion computer chips on another planet and could just use them to train up a bunch of AI systems, then we’d expect the step change. But currently, everything is just increasing incrementally — we’re increasing chips incrementally, we’re increasing algorithmic progress incrementally — so it’s just going to keep improving at a kind of incremental pace.
Tom Davidson: Crucially, we have to think that we’re currently, at each step of the way, trying to use the most recent algorithms and most recent compute to actually get AIs to do, let’s say, useful science research. If we’re incrementally increasing the computing algorithms, but we’re not actually trying to get the AIs to do useful science research, then it could be that one day we decide to try and get AIs to do useful science research, and then suddenly we train them to reassign all their cognitive abilities to that task and we do get something that’s really quick. It’s a really important assumption here that, in some sense, the AI development ecosystem is kind of efficient.
Luisa Rodriguez: Yeah. And it’s using new AI capabilities to do cutting-edge research. As opposed to if there were only market incentives to make AI that, I don’t know, made these beautiful images like DALL-E, and no incentives at all to use AI to improve AI systems — then if one day we made a few tweaks to DALL-E and we were like, “Stop making pictures. Make programs instead.” But if actually it did have capabilities related enough that we could make that tweak semi-easily, then all of a sudden we’d have this system that could write programs really efficiently that we’d never had before.
Tom Davidson: Right. Or maybe a more probable scenario would be we’re just using all our AI resources to train these image-generating systems like DALL-E, and then we’re like, “You know what? Why don’t we just try using all those resources to train a science AI?” And then we pick the architecture to specialise it for science, we use the data to specialise it for science, we use all the compute that were previously pouring into these image-generation systems — and then suddenly we’re like, “Wow, our science AIs are amazing.” It came out of nowhere, because we hadn’t been trying in the previous years to do this at all.
Luisa Rodriguez: Yeah. Are we currently trying to make AI systems that are really good at science?
Tom Davidson: It’s a good question. The market doesn’t seem to me to be super efficient. I’ve been playing around with GPT-4 a bit recently, and to me it looks like GPT-4 is pretty smart. It doesn’t seem to me like its cognitive abilities have been really directed in the direction of helping to advance science, to be honest. So I do think that this argument could ultimately say that a faster takeoff is plausible and the mechanism could be reallocating the AI’s cognitive abilities towards science. I mean, GPT-4 is just trained to predict the next word on the internet. That’s a very different kind of task than the task of advancing science. I think that is a reason to expect a faster takeoff.
Luisa Rodriguez: More of a jump. Interesting. I haven’t heard that argument before.
Tom’s report on AI takeoff speeds [01:22:28]
Luisa Rodriguez: I want to now get to the report that you’ve written on AI takeoff speeds, which asks how quickly AI might go from pretty economically valuable to just extremely capable, maybe as good as humans. You define your terms pretty clearly in your report, so maybe we should start by doing that. Am I right in remembering that you are trying to answer the question of how quickly we’ll go from AI systems that can do 20% of human tasks to AI systems that can do 100% of human tasks?
Tom Davidson: Yeah, that’s right. In particular, I am restricting to cognitive tasks. That’s similar to what we discussed earlier: it’s any task that you could do remotely that doesn’t require you to be physically manipulating objects yourself. Because AIs don’t have physical bodies, that wouldn’t be included for them. It does include tasks like giving instructions to a human who’s doing a physical job — telling them where to move things, what to do with their arms, or potentially giving instructions to robots that are doing physical tasks — but it doesn’t include the physical motions themselves.
Luisa Rodriguez: Does it include things like driving cars?
Tom Davidson: Yes, it does.
Luisa Rodriguez: And that’s because driving cars is really a set of algorithms, and you can turn the wheel of a car?
Tom Davidson: It’s a good point that currently the way humans drive cars is by physically moving various levers in the car. But I think giving the AI the control of the steering wheel and of the pedals and the brakes is actually pretty trivial. The only thing that’s hard in practice about getting AIs to do driving is the cognitive parts of what the car should be doing at each point.
Luisa Rodriguez: Right. OK, so then an example of a task that isn’t included is something like helping people move house — like carrying the boxes in and out.
Tom Davidson: Carrying the boxes in and out would not be included. But telling them, “Here’s the best plan for moving. Here’s the order you should move the boxes in” — that all would be included.
Luisa Rodriguez: So that’s basically what you’ve done. You want to know, how fast do we go from 20% of cognitive tasks to 100% of cognitive tasks — can you actually clarify what it means for AI to be able to complete 20% of tasks?
Tom Davidson:Let’s say AI automates driving. What percentage [of cognitive tasks] is that? Is that 3%, because 3% of people do it? Or do we just look at a long list of tasks and assume that each takes up an equal percentage? What do we mean by 20%?
The way I’m currently thinking about that is looking at how much people pay for those tasks to be performed in the economy. So let’s take the driving example. Let’s say that drivers around the world are being paid $2 trillion a year for the work they’re doing — driving trucks and taxis and everything else. In that case, because $2 trillion is 2% of the global GDP, I would say that fully automating all driving would be automating 2% of all economic tasks.
Luisa Rodriguez: Then you’re saying, how fast will we go from we can do 20% — so, maybe it’s like replacing all drivers; maybe it’s replacing all journalism, because GPT-4 seems to be really good at writing; and a couple of other things — how fast you go from that chunk to literally all cognitive tasks, including science and AI R&D.
Tom Davidson: Now, one complication is that from year to year, the amount that is paid to people to perform each task might change. So in 2020, maybe drivers are paid $3 trillion a year for their work. Maybe in 2025, they’re paid $4 trillion a year, and that could change. So I’m pegging these percentages to the year 2020, as a kind of arbitrary choice just to make the definition unambiguous.
Luisa Rodriguez: OK, that makes sense. Wages change for all sorts of reasons, including AI taking over some jobs, but we’re still just thinking of what percentage of the 2020 cognitive economy is being automated.
Tom Davidson: Exactly. And it is really important to keep that in mind — because typically, when AI automates a certain task, it becomes really cheap to do that task.
Luisa Rodriguez: Right, so it becomes a much smaller fraction of the economy.
Tom Davidson: Exactly. You can end up thinking that AI is never doing anything, when it’s actually done almost everything. That is just an important thing to be aware of.
Luisa Rodriguez: Yeah, that makes total sense. Is there an analogy from the Industrial Revolution or something?
Tom Davidson: The best analogy might be agriculture. I think in 1500, basically everyone worked in agriculture.
Luisa Rodriguez: Right, it was 90% of the economy or something.
Tom Davidson: Exactly. All of GDP would have basically been agriculture. Today, I think it’s less than 5%. That’s because we’ve become really good at producing food with very little need for human labour.
Luisa Rodriguez: It’s not to say that fertiliser and trucks and really highly productive seeds aren’t contributing a bunch to the economy, but clearly they have. But were you to measure it as a fraction of the GDP that they are responsible for, it would be smaller, because everything’s just gotten so much cheaper because of them.
Tom Davidson: Exactly.
Luisa Rodriguez: Cool. That makes a bunch of sense.
How quickly will we go from 20% to 100% of tasks being automated by AI systems? [01:28:34]
Luisa Rodriguez: OK, so we’ve got definitions. And you’ve asked how fast will we go from AI systems that can do roughly 20% of the cognitive tasks that humans were doing as of 2020, and how quickly will we go from 20% of those tasks to 100% of those tasks being, at least in theory, able to be automated by AI systems. What was your result?
Tom Davidson: The conclusion from the report is, I guess pretty scary. The bottom line is that my median guess is that it would take just a small number of years to go from that 20% to the 100%, I think it’s equally likely to happen in less than three years as it is to happen in more than three years. So a pretty abrupt and quick change is the kind of median.
Luisa Rodriguez: Wow. And do you believe that in your bones? Does that feel very plausible to you?
Tom Davidson: Yeah, I do. Some quick things about why it’s plausible. Each year, once you take better algorithms and using more compute into account, we’re currently training AIs each year that have three times bigger brains than the year before. So, this is a really rough way to think about it, but imagine a three times smaller brain than humans — that’s chimpanzee-brain size.
Luisa Rodriguez: Right. Each year, you’re going from chimpanzees to humans?
Tom Davidson: It’s really hard to try and account for the effect of the algorithmic improvements, but on my best guess of what those amount to, each year we’re making the brains of AI systems about three times bigger.
And right now it’s humans that are doing all the work to improve those AI systems — as we get close to AIs that match humans, we’ll be increasingly using AI systems to improve AI algorithms, design better AI chips. Overall, I expect that pace to accelerate, absent a specific effort to slow down. Rather than three times bigger brains each year, it’s going to be going faster and faster: five times bigger brains each year, 10 times bigger brains each year. I think that already makes it plausible that there could be just a small number of years where this transition happens — where AIs go from much worse than humans to much better.
To add in another factor, I think that it’s likely that AIs are going to be automating AI research itself before they’re automating things in most of the economy. Because that’s the kind of the tasks and the workflow that AI researchers themselves really understand, so they would be best placed to use AIs effectively there — there aren’t going to be delays to rolling it out, or trouble finding the customers for that. And the task of AI research is quite similar to what language models are currently trained to do. They’re currently trained to predict the next token on the internet, which means they’re particularly well suited to text-based tasks. The task of writing code is one such task, and there is lots of data on examples of code writing.
Luisa Rodriguez: Oh, I see. I don’t know that much about coding, but is it basically also token prediction?
Tom Davidson: That is how current coding assistants work, I think: you start writing your code and they predict what’s going to follow.
One way of putting it would be that by the time that the AIs can do 20% of cognitive tasks in the broader economy, maybe they can already do 40% or 50% of tasks specifically in AI R&D. So they could have already really started accelerating the pace of progress by the time we get to that 20% economic impact threshold. At that point you could easily imagine that really it’s just one year, you give them a 10x bigger brain. That’s like going from chimps to humans — and then doing that jump again. That could easily be enough to go from 20% to 100%, just intuitively. I think that’s kind of the default, really.
Luisa Rodriguez: That’s terrifying.
Tom Davidson: Yeah, and I think there’s even more pointing in that direction. I think that already we’re seeing that with GPT-4 and other systems like that, people are becoming much more interested in AI, much more willing to invest in AI. The demand for good AI researchers is going up. The wages for good AI researchers are going up. AI research is going to be a really financially valuable thing to automate.
If you’re paying $500,000 a year to one of your human research engineers — which is lower than what some of these researchers are earning — then if you can manage to get your AI system to double their productivity, that’s way better than doubling the productivity of someone who works in a random other industry. Just the straightforward financial incentive as the power of AI becomes apparent will be towards “Let’s see if we can automate this really lucrative type of work.”
That’s another reason to think that we get the automation much earlier on the AI [R&D] side than on the general economy side — and that by the time we’re seeing big economic impacts, AI is already improving at a blistering pace, potentially.
Luisa Rodriguez: Yeah. Again, that’s really scary.
Tom Davidson: I completely agree.
What percent of cognitive tasks AI can currently perform [01:34:27]
Luisa Rodriguez: Do you have a guess at what percent of cognitive tasks AI can currently perform? It seems like we’re really far away from 20%.
Tom Davidson: Intuitively, I think it seems like we’re far from 20%, because AIs aren’t doing that much in the economy. If I looked at a list of the cognitive tasks people were performing in 2020 and what they were paid for them, it’s not as if AIs are ready to replace a big fraction of that labour. So that suggests that the 20% is far off.
I’m actually less confident that it’s far off than I used to be — than if we would have had this interview six months ago — just seeing GPT-4’s performance. Firstly just doing really well on a whole wide range of university exams and other formal tests, without having explicitly trained on that, and then me kind of playing around with it and thinking it just seems smarter than most people I talk about this stuff with. Most of my smart friends wouldn’t be this smart.
I’m thinking maybe, actually, if you just put some work into specifically applying GPT-4, you could automate quite a large fraction of the cognitive tasks. It does seem much more plausible to me that maybe you could get to 10% today, or within a year.
Luisa Rodriguez: Wild. That’s super interesting. Yeah, I was also blown away by GPT-4’s performance on the SAT, the LSAT. For anyone who hasn’t looked, we’ll stick up a link to those test results. I think it was performing much better than I ever did on my AP tests in high school, and better than I did on the GRE. It’s beating me, and I think beating out loads of other people already.
Maybe I’m putting too much weight on the fact that currently not that many people I know are using it to do anything. It sounds like your impression is that maybe it’s pretty close to being able to do a lot.
Tom Davidson: Yeah. To actually, in practice, replace 20% of the tasks that people do is actually a pretty tough thing to do. Because for myself, all the different parts of my workflow are very much tangled up together. I can’t easily take out a 20% chunk and be like, “GPT-4, do this chunk,” because it’s all kind of mixed up.
Historically, the way that automation has worked is that we’ve gotten a new technology, and then we’ve spent decades readjusting our workflows so that we can neatly parcel out 20% of our workflow for this new technology to do. The technology can be fairly dumb, because we’ve really neatly parcelled out that part of our workflow.
Luisa Rodriguez: Do you have an example?
Tom Davidson: Yeah. Let’s say moving over from paper records to computer records. People used to have to write down lots of paper records and maintain a filing system for their information. These days a lot of that work is done automatically by computers and data storage. At first it wasn’t that easy to immediately move over to the computers. It took decades, as people got rid of the paper stuff and got used to teaching their employees to use the computers, and got used to using their customers to fill out online forms rather than filling out the paper forms that they were using before. All that rearranging of workflows took a long time to happen.
One scary possibility is that if AGI is just 10 years away, then there won’t be time to do that rearranging of workflows that is necessary to get, say, GPT-4 to actually automate things in practice. The 20% automation ability won’t happen through some kind of dumb system that I’ve kind of parcelled out a nice thing for it to do. It will actually happen with a really smart system that basically understands my whole workflow well enough to be able to do 20% for me — which means that it could be pretty close to just being able to do all of it.
Luisa Rodriguez: Right. I mean, I’ve almost got that impression with GPT-4 and my job. We asked it, “How can you help us make The 80,000 Hours Podcast?” It was like, “I can help you come up with guests. I can help you write interview questions for the guest if you tell me what they’ve worked on.” It basically rattled off a list of things, and I was like, “As soon as it has a voice, that’ll be it for me.”
I think I thought it would help me with subtasks first. I thought maybe it would help me with generating titles, and maybe giving me summaries of people’s work so that I could read them a bit faster. But I think in fact, it’s actually going to be really great at the start-to-finish interview process, soon enough that I’ll just skip over all of that. I don’t know if that’s true, but it doesn’t seem crazy to me. Maybe that’s just another actual example of what you’re talking about.
Tom Davidson: Yeah. In that example, by the time it can automate 20% of the kind of tasks you’re doing, it can almost do all of it.
Compute [01:39:48]
Luisa Rodriguez: Right. Makes sense. So next, I want to dive into your methodology a bit more deeply for this takeoff speeds prediction. It’s such an alarming result that I have this urge to understand what’s going on a bit better. So that headline result, this prediction that AI takeoff might only take a few years, is basically based on an economic model that you made that tries to answer the question of whether you can get human-level AI just by increasing the amount of compute that we have to train AI systems — in other words, without paradigm-shifting algorithmic breakthroughs. To start us off, can you actually remind me what compute is?
Tom Davidson: So compute is a measurement of how many calculations you need to do or a given computer is doing. So let me give an example. The most common unit for measuring compute is a FLOP. A FLOP is adding together two numbers or multiplying together two numbers or dividing them or subtracting them.
Luisa Rodriguez: So a mathematical operation.
Tom Davidson: Exactly. Currently, when we develop AI systems, the way we do it is by doing loads and loads of these calculations — of adding things together, dividing them, minusing them — and by doing all of these calculations, that is how the AI decides what it’s going to do. And it’s also how we train the AI in the first place. So you could analogise these calculations to the firing of neurons inside our own brain.
Luisa Rodriguez: OK, so FLOPS are basically just mathematical operations, or things like “Are both of these things true?” or something like that. And then compute is something like how many of those calculations we can do?
Tom Davidson: So one FLOP is just one of those calculations. Some of the biggest language models today are trained with, I think, 1024 FLOP — so that is a million, million, million, million calculations. That’s how many calculations you need to do to train some of today’s big language models. So the amount of compute is another way of saying: How many calculations did you need to do it?
Luisa Rodriguez: Got it. Compute is the amount of computation you need to do and a FLOP is the unit of how many?
Tom Davidson: Yeah.
Luisa Rodriguez: So then you’re asking this question about AI takeoff speeds, and we’re just assuming that we increase [effective] compute — which is made up of machines, but also has to do with algorithms as well. Is it true that we need less compute if the algorithm is super efficient? Because the algorithm just requires that you do fewer calculations to get the same result or something?
Tom Davidson: The strict assumption is that, if we used the algorithms that were available in 2020, then there is some amount of compute, such that if God handed a top AI lab that amount of compute, and they had a few years to adjust the algorithms to using that much more compute, then they would be able to train AGI using that amount of compute.
So in the model, we make an assumption about how much compute would have been required [in 2020]. We actually put a probability distribution over it. But importantly, algorithmic progress can reduce that computational cost over time. So maybe in 2020, you’d have needed 1030 FLOP to train AGI, but maybe by 2025, your algorithms are 10 times better, so you only need 1029 flop to train AGI. The basic dynamic in this framework is that in each year, our algorithms improve somewhat and we decide to use more compute in a training run than we had done in the previous year.
Luisa Rodriguez: Right. Because it’s profitable, et cetera.
Tom Davidson: Exactly. And those two factors combine together. Let’s say we use twice as much compute as the previous year and our algorithms are twice as good. That means that the effective compute that we’re using is four times bigger — it’s the equivalent of if we hadn’t improved our algorithms and we had just used four times as much compute in the training run.
Luisa Rodriguez: You’re using “effective compute” to mean something like, you want an AI system to, for example, predict the next word in a sentence. And one way you can increase the effectiveness of that system is by giving it more compute, so it can do more calculations. But you’ll also have another dynamic, where the algorithms are getting better, such that you need less to do the same thing. You’re doing equivalent processes, or you’re getting equivalent outcomes, for less physical compute. Whew, that is tricky.
Tom Davidson: One way to think about it is: Let’s say in 2025 we use a certain amount of compute in a training run with 2025 algorithms. Imagine if we’d have been forced to use the 2020 algorithms: how much compute would we have needed then to get the same result? That is the amount of “effective compute” that we actually used in 2025. [So you can increase the amount of “effective compute” in two ways: either you use more compute — do more calculations — to develop your AI; or you use a better algorithm to develop your AI. — Tom, after the interview]
Luisa Rodriguez: Great, that makes sense to me. So you’re thinking about effective compute, and you’re making some guess about how much we’ll need to get 100% of cognitive tasks automated. How are you making guesses about how much effective compute we’ll need to get 100% of cognitive tasks automatable?
Tom Davidson: In the report itself, I just defer to a different report by a colleague of mine called the Bio Anchors report, which asks that exact question. In fact, I don’t think you need to be deferring to the Bio Anchors report. There are different approaches you can take to estimating how much effective compute you might need to train AGI — you could use whatever approach that you like, then bring that into my framework and use it to inform your initial guess of the effective training compute for AGI.
Luisa Rodriguez: Right. You’ve got a model that people can play with. If you’re like, “I think the Bio Anchors report is way too optimistic about how much compute it’ll take to get to AGI,” then you can 1,000x it and see how that changes the outcomes.
Tom Davidson: Exactly.
Luisa Rodriguez: Super cool. We’ll stick a link up to that model so people can play with it if they want to. Do you mind giving me an intuitive sense of how much compute you and your colleague basically think it’ll take to get AGI?
Tom Davidson: The current median value I use in the report is very large indeed: it is 1036 FLOP. That is, if you take the amount of compute that was used to train the biggest language models that publish their training requirements, and then you use a million times as much, and then a million times as much again, that’s how much the assumption is making.
I actually now think that that assumption is too high.
Luisa Rodriguez: Is that because of GPT-4 and how impressive it is, basically?
Tom Davidson: Largely, yeah. GPT-4 and the fast pace of recent improvements is quite a lot faster than I would have predicted. I would now be using a lower value for that important parameter, which would make takeoff even faster than what I’m predicting.
Luisa Rodriguez: Even faster. Jeez.
Tom Davidson: Yeah. The report that I wrote uses this 1036 as its median estimate for what you’d need to train AGI.
Using effective compute to predict AI takeoff speeds [01:48:01]
Luisa Rodriguez: OK, that’s helpful. So that’s how you basically estimate how much effective compute you need to train AGI. How do you use that to predict AI takeoff speeds?
Tom Davidson: Right, so we have this assumption about the effective training compute for AGI — which was our 100% [of cognitive tasks] kind of endpoint — and we then need to make an additional assumption about what would be the effective compute needed to train AI that could automate 20% of [cognitive] tasks.
Let’s say that we assumed, for example, that you need 1030 FLOP to train AGI using 2020 algorithms — that’d be 1030 effective compute. We then make an additional assumption about how much less effective compute you need to train AI that could automate just 20% of tasks.
Tom Davidson: I want to pause on that last assumption because it’s so important.
That assumption — about how many more times compute you need for AGI compared to 20% AI — is what I’m calling the “difficulty gap.” It’s saying: What is the gap in difficulty between training 20% AI and training 100% AI (or AGI)? We’re measuring the size of that difficulty gap in terms of how many times more effective compute you need to train one than the other.
Luisa Rodriguez: You’re calling it the difficulty gap because it’s kind of describing how much more difficult the most difficult tasks are relative to the easiest 20%?
Tom Davidson: Yeah, the reason I call it the difficulty gap is to refer to the difficulty of developing the AI in the first place. It’s like: How much more difficult is it to develop an AI that can do 100% of the tasks than it is to develop an AI that can only do 20%?
Luisa Rodriguez: Got it. But it might be 10,000 times as difficult — or it might be barely more difficult at all, if it turns out that once you’re 20% there, you’re basically the whole way there?
Tom Davidson: Right.
Luisa Rodriguez: Is that plausible? Maybe it is, if the first 20% of tasks includes AI R&D?
Tom Davidson: That’s an interesting scenario to think about. You could have the first 20% of tasks including all of the tasks of AI R&D. What I think would happen in that scenario is that once you’ve done those first 20% of tasks, AI would be improving super, super quickly, absent a specific effort to slow down. And within, I think, a few months, you would already be able to do a training run that used 100 times more effective training compute as you had previously done. That’s because you would have hundreds of millions of AIs that could be working to improve the AI algorithms, and maybe making money so you can buy more AI chips, or convincing other people to share their compute with you. Then that would be enough to very quickly allow you to use 100 times as much effective compute.
Luisa Rodriguez: OK, yeah. I guess maybe you think that which tasks end up being easier also plays into how fast AI takeoff speeds are. In particular, in the case where AI R&D is in the first 20%. Otherwise maybe it doesn’t matter as much.
Tom Davidson: Exactly. I think with that last example, even if there was a big difficulty gap from 20% to 100% of cognitive tasks in the economy, if you get all the [AI] R&D tasks within that first 20%, then I still think you’d get a very quick transition. So that could be an example with a big difficulty gap where, nonetheless, you still get a very fast AI takeoff.
Luisa Rodriguez: That makes sense. So do you basically just make an assumption about how big that difficulty gap is? Is it a range? How did you come up with whatever numbers you’re putting to how many times harder it is to get to 100% of tasks?
Tom Davidson: I do consider as much evidence as I can for the difficulty gap. It is really important. The lines of evidence that I consider are all pretty limited, so it’s a very uncertain parameter, but I think there are some things you can learn from some of those lines of evidence.
Luisa Rodriguez: What’s an example of some of the evidence you would have looked into?
Tom Davidson: In terms of evidence for the difficulty gap potentially being pretty small, we’ve already touched upon some of that. One line of evidence is the scaling of human cognitive ability with human brain size. Some humans have slightly bigger brains than others — only a small variation; plus or minus 10% or so — but you can then look at if one person has a 10% bigger brain, then on average, how much better did they do on various tests of cognitive ability? The difference isn’t massive, but if you extrapolate that difference to, say, a three times bigger brain or a 10 times bigger brain, then extrapolating that suggests that there would be a very large difference in cognitive abilities from getting a brain that is that much bigger.
Luisa Rodriguez: Interesting.
Tom Davidson: The takeaway from that is that this particular line of evidence suggests that increasing the size of the brain by a factor of 10 could be more than enough to cross this difficulty gap. And that, by analogy, increasing the number of parameters in an AI model by a factor of 10 could be more than enough to cross the difficulty gap — which would require you to increase the effective training compute by a factor of 100 [because when you increase training compute by 100x with today’s language models, you increase the number of parameters in the model by 10x and the number of data points by 10x. — Tom, after the interview].
I think even this analogy actually suggests that just increasing the effective training compute by a factor of 10 might be enough as well, because it could just be enough to increase the human brain size by a factor of three. This particular line of evidence really suggests that the difficulty gap could be pretty narrow.
Luisa Rodriguez: Pretty small, yeah. Is there more evidence about how big that gap is?
Tom Davidson: A very similar line of evidence looks at, rather than differences within humans, it looks at the differences between humans and other animals. So chimps have brains that are about three times smaller than human brains. And you might think that going from chimp-level intelligence to human-level intelligence is enough to cross that difficulty gap. If you do think that, then that again suggests that increasing the parameters in a model by just a factor of three could be enough to cross that difficulty gap
Luisa Rodriguez: OK, so that’s some reasons to think [the difficulty gap] could be kind of small. Are there any reasons to think it could be really much bigger?
Tom Davidson: Yeah. Those two reasons to think it’s small are both taking a view on intelligence that is kind of one-dimensional: we’re imagining that some humans are cleverer than other humans, and humans are cleverer than chimps; we’re just imagining as you make their brain bigger, they just get smarter and smarter.
The perspective which suggests that the difficulty gap could be bigger is a perspective which emphasises that actually there’s not one dimension of intelligence, but there’s loads of different tasks in the world, and those tasks have very different requirements. So AI might get good at some of them way before it gets good at other ones.
Luisa Rodriguez: Off the bat I don’t find that that intuitive, because the brain seems to be so flexible. The training of these AI models on these tasks, you’d have to think that they were just pretty different from the human brain, and much less flexible.
Tom Davidson: I think it is true that if you expect AI to be a pretty general learner and have pretty general abilities, then that would lend itself to the one-dimensional view and against this view.
But I do think that there are reasons to think that AIs would be better at some tasks than others. In particular, at the moment, the best AI systems are trained to predict the next word on lots and lots of internet data. That means that AIs are just particularly good at tasks that are similar to that in some way — for example, writing a newspaper article or writing an email: that’s really similar to a task where it’s seen loads and loads of examples. AIs are in fact particularly good at that type of task.
Whereas taking another type of task — let’s say, planning out how to put the equipment on a factory floor, and then giving instructions to different people about how they should make that happen — that might be something that it just hasn’t seen many examples of in its training data. Or another task could be manipulating a robot so the robot does a certain task — that again is something that AIs just haven’t seen many examples of, so you’d expect them to be much worse at that kind of task.
One interesting example could be thinking about what you need to do to make a certain factory run very efficiently. It could be that some of the workers in that factory have just kind of internalised that know-how inside their own brains — it’s maybe not even written down anywhere. If you were trying to get the AI to now run the factory floor, it could be particularly hard for it to know what to do there, because it doesn’t have any examples or any experience of that kind of thing.
Luisa Rodriguez: Got it. Some of the difficulty gap might not be about fundamental facts about the types of intelligence that you might use to perform different tasks, and might be much more about the types of data we have — some things might be difficult just because we never write down what it means to do those things. It’s harder to teach an AI system to do it, but not because they’re fundamentally extremely difficult in and of themselves.
Tom Davidson: Yeah, that’s right.
Luisa Rodriguez: Cool. OK, that really helped. Nice.
Tom Davidson: I’ll give one more example about how some tasks could be easier than others. Some tasks it’s really important that you have a very high amount of reliability. For driving, it’s just really awful if you crash — so if you’re 99% reliable, that’s worthless. So if AIs can get to 99% reliability fairly well, but can’t get to 99.9999% reliability, then that’s going to block them on certain tasks. But other tasks — like drafting emails, or even sending emails, and being a personal assistant, and drafting code that you can kind of check whether it works before deploying it — those tasks it doesn’t provide a blocker for. That’s just another example of something that could mean that the AI is ready to automate certain tasks before others.
Luisa Rodriguez: Yeah, that makes tonnes of sense. Given this type of evidence, what was the range of amount of effective compute that you’d guess we’d need to go from 20% of cognitive tasks to 100% of cognitive tasks?
Tom Davidson: The main takeaway is just that a really wide range of things are plausible.
I think as low as just 10 times as much effective compute could be sufficient. That’s pretty scary, because that’s the kind of thing that could just be some quick algorithmic improvements, without even the need for more physical compute. I think that is very consistent with this kind of one-dimensional view, and really not something we can rule out.
But my best guess is more like 3,000 times as much, and that’s kind of where my median is.
Luisa Rodriguez: OK, so quite a lot more.
Tom Davidson: Quite a lot more than that lower end. That’s because I do expect there to be some significant comparative advantage components — where the AI is just particularly good at some tasks compared to others, and particularly struggles with certain types of tasks. So I do expect that to stretch things out. And I do think it’s possible that that stretches things out by even more — like I think it could be a million times as much. That’s hard to rule out.
Luisa Rodriguez: Wow. OK, so huge range.
Tom Davidson: It’s a really huge range, yeah.
Luisa Rodriguez: And you’ve put it into your model as a huge range? Is that right?
Tom Davidson: That’s right. In the model, there’s firstly a probability distribution over how much effective compute you need to train AGI, and then there’s another probability distribution over how much less compute than that you need to train 20% AI.
How quickly effective compute might increase [02:00:59]
Luisa Rodriguez: So you’ve got ranges for how much effective compute you need to do both 20% and 100% of tasks. What do we know about how quickly compute might increase? I guess one very simple thing we could do is just make more computer chips, but I don’t know what the limits to that are. And presumably there are other things as well. How do we make compute go up?
Tom Davidson: Yeah. And it is importantly effective compute — that includes the algorithmic improvements. One natural way to approach this could be to first discuss the types of changes that are increasing effective compute today, and then how that might be different once we actually get to the 20% AI.
Luisa Rodriguez: Sure, yeah. Tell me about how effective compute is increasing today.
Tom Davidson: The first way is quite simply that we’re spending more money on making computer chips — on compute. We’re also spending more money on using compute for training runs. The amount we’re spending on compute for training runs is growing particularly quickly right now. Over the last 10 years it has gone up by about a factor of three each year.
Luisa Rodriguez: What does it mean when we spend more on compute for training runs in particular, as opposed to just more compute, like more computer chips?
Tom Davidson: Good question. There’s a certain amount of computer chips in the whole world, but at any point in time maybe only a small fraction of those are being used in the largest training run for an AI system.
One change you can make is you could say we’re going to make there be twice as many computer chips in the world. That would take a big effort; that would take quite a few years to do, probably.
Another change that you could make much more quickly is say, “As of today, we’ve only ever used 1/10,000 of the world’s compute in a training run. We can quite quickly just use 10 times as much — we’ll just buy a bigger fraction of the already existing compute.” The simplest example would probably be that DeepMind has historically only used a small fraction of Google’s computer chips for its training runs. And then it says, “We want to now use all of your computer chips” — and that could be maybe 100x increase. I don’t know.
Now, that was just an example to illustrate the principle. I don’t think that’s what’s actually been happening at all. I think what’s actually been happening is that new computer chips are being made each year, and a bigger fraction of those new chips are being used for the largest training run. You can actually see that the fraction of chips that are AI-specific chips has been increasing very quickly in terms of the production.
Luisa Rodriguez: That’s one way we can get more effective compute. Are there others?
Tom Davidson: Yeah. A second big way is improving the quality of computer chips. We said that the first way was spending more money on compute. The second way is that each dollar you spend gets you more compute. The best data that I’ve seen on this suggests that every two and a half years, compute gets twice as cheap.
Luisa Rodriguez: And that’s different from algorithms getting better? That’s like computer chips get more efficient because the hardware is designed better?
Tom Davidson: Yeah. I think historically it’s often been about stuffing more processing units onto each chip, and still managing to make these chips fairly cheaply.
Luisa Rodriguez: OK, so you can buy more chips. You can make better chips. Anything else?
Tom Davidson: The third one is algorithms. You’ve now spent a certain amount on compute, and you’ve got a certain amount of compute as a result of that. And then algorithms then say how effectively and efficiently you can use that compute to actually train an AI system.
The most famous example of [measuring] this type of improvement is a paper called “AI and efficiency” that OpenAI published in 2020. What they did is they asked: To achieve a fixed level of performance on ImageNet — which means to be fairly good at classifying what is shown in an image — how much is the compute required falling over time to get that performance? They found that, I think every 15 months or so, the amount of compute you needed was halving to achieve that fixed performance.
Luisa Rodriguez: That’s basically programmers being clever and writing programs that help AI systems figure out what’s in an image in more and more efficient ways?
Tom Davidson: Exactly. An analogy could be that maybe 100 years ago, our schools were really inefficient at transmitting knowledge to pupils, and maybe you had to go to school for 15 years to learn geometry. Whereas maybe today, you can learn that geometry in a really well-designed course that just lasts three years.
Luisa Rodriguez: Nice. That’s a good analogy. So basically, you have three kind of buttons to push to get more effective compute: One is chips, just the number; another is how good the chips are at processing; and then the third is algorithms, so how good are the programs that get run on the chips. And have you basically tried to predict how quickly each of those things will go up?
Tom Davidson: Exactly. My starting point is looking at what’s going on today. Epoch is a research organisation which I think has done the best research into this that I’m aware of, and they’re looking at trends in all of the three quantities which I just described to you.
Firstly, they’re looking at how much more is being spent on the biggest training runs in each year over the last 10 years — and they’re finding about three times more each year.
Luisa Rodriguez: Did you find that surprising?
Tom Davidson: I know that training runs have been getting much more expensive in the last 10 years, so it wasn’t that surprising for me. There has been a big increase over the last decade.
Luisa Rodriguez: Yeah. This is making me realise that I should have asked earlier: What is a training run? Is it actually just like, insane amounts of data about everything that’s on the internet? You’re giving it to GPT and being like, “Figure out how to predict the next word”? Like, does it take weeks? Or does it take less but just tonnes of compute?
Tom Davidson: I think it takes months. It’s a bit like you give GPT the start of some kind of web page, the first 50 words, and you say, “Predict the 51st word.” In making that prediction, GPT is going to do a large number of calculations. Let’s say it’s doing 300 million calculations in order to predict that next word.
So that’s already a lot of calculations. But you ask it to do that same task, let’s say, 10 trillion times — because you get it to predict the 51st word, and then you’re like, “Now predict the 52nd word,” and it does another 300 million calculations and predicts that 52nd word. You keep doing it until you’ve literally done it, like I said, 10 trillion times for different words on the internet.
It’s actually doing like millions of examples at once. That’s how you can make it faster, otherwise it would be taking years. We can do it in months because you can get it to predict multiple different articles at the same time to speed things up — and that’s why you’re using so many different computers.
Luisa Rodriguez: OK, so that takes loads of compute, and companies have been spending about three times as much on computer chips, or whatever that equivalent is, every year for the past decade?
Tom Davidson: They’ve been spending three times as much on the chips that they use for these big training runs. They may not have been increasing their total spending on chips by as much. For example, Google may just be using a bigger fraction of its chips for these training runs over time.
Luisa Rodriguez: Yeah, OK. What are the trends in computer chip quality?
Tom Davidson: I already mentioned this one. Each two and a half years, the price of compute halves. That means if you’re spending a constant amount on chips, then each two and a half years, you’re buying twice as much compute.
Luisa Rodriguez: Right, cool. And you said that the efficiency of algorithms is about doubling every 15 months? How do these trends fit together?
Tom Davidson: If you combine all of those trends together, then the result is that the effective compute on the largest training run has been increasing by a factor of 10 every year.
Luisa Rodriguez: So you’ve got something like a default improvement, year by year, of 10x. What happens when you then try to adjust those numbers for the fact that things are changing over time? And I’d guess accelerating?
Tom Davidson: That’s right. We could take those three quantities one by one.
In terms of the money spent, I think that could go either way. One possibility, a scary possibility, is that AI companies develop this AI that can automate 20% of cognitive tasks in the economy, and they’re like, “Man, this can make us loads of money.” Investment flows in, and they’re able to very quickly spend even more on training runs. Or maybe just use a bigger fraction of existing chips in the world — maybe Amazon is like, “We’ve got all this compute that we were previously using on these web services. But actually, given how lucrative this AI stuff looks, why don’t we team up with an AI lab and let them use our compute that we’ve just got doing this stuff that isn’t that economically valuable? And instead, use it to train an even better AI?” That is a very scary possibility.
A more optimistic possibility could be that by the time we get to the 20% AI, we’re already spending loads and loads on these training runs. Maybe we’re already spending $100 billion. Maybe we’re already using all of Amazon’s chips, because at some earlier point we already teamed up with them. That would mean that it wasn’t possible to keep increasing the amount of money spent on these training runs year on year, so you could have actually slower growth than the recent 3x-per-year pattern. It could be much slower.
So that is a major source of uncertainty in these takeoff predictions. The thing that’s particularly scary about the short timelines possibility is that maybe you can get to the 20% AI with just spending $1 billion on a training run. That would leave plenty of scope to spend 10 or 100 times as much very soon after on a bigger training run, which would be hugely risky.
How quickly chips and algorithms might improve [02:12:31]
Luisa Rodriguez: So it sounds like the number of chips that we might be using for training runs might be growing faster, or it might be growing slower, by the time we’re at systems that can perform 20% of tasks.
What do you expect to happen with the quality of chips and with the quality of algorithms?
Tom Davidson: I think with those it’s easier to predict that their improvement should be faster once we’re at the point that AI can perform 20% of cognitive tasks. That’s just due to the dynamic that I’ve been referring to quite a lot during our conversation, which is that one of the things we’re going to use AIs to do is the task of designing better chips and the task of designing better algorithms.
There are already examples of that happening, with AIs that are using deep learning techniques to kind of cram transistors more efficiently into computer chips.
And then again, with the algorithms, we are already seeing AI help significantly with some coding tasks with things like Copilot. By the time we’re at 20% AI, I expect that effect to be much larger than it even is today.
So I think the directional prediction is fairly straightforward, that we should expect both the quality of chips and the quality of algorithms to be growing faster once we get to 20% AI than they are today.
It’s really hard to predict exactly how much faster or what the change is going to be, because you’ve also got people talking about the end of Moore’s law and it’s just hard to anticipate the specific improvements that we might be making with chips and with algorithms.
Luisa Rodriguez: By the end of Moore’s law, you mean something like… Actually, maybe you can just give us the definition, make sure we get it right.
Tom Davidson: Yeah. For a long time, the way that computer chips have gotten more efficient is by cramming more and more of these processing units onto each chip and making the processing units smaller and smaller. But at a certain point, people think you just won’t be able to make them any smaller, because you’re running against fundamental limits.
Luisa Rodriguez: Have we started to hit that limit?
Tom Davidson: I think people acknowledge that it’s starting to get much harder to make further improvements. Even then, there’s a big open question about how much you can improve chips in other ways: just because the way we previously improved chips was to make these processing units smaller, that doesn’t mean that there aren’t going to be any other types of improvements.
In fact, recently, other types of improvements have become very significant from generation to generation — for example, chips becoming specialised for being used for deep learning calculations in particular. There’s probably a lot more gains you can get from that kind of specialisation.
Luisa Rodriguez: OK, so I guess we’ve got some data suggesting that maybe the way that we’ve been improving the quality of chips isn’t going to keep making improvements at the rate that we’ve been making those improvements. But there are other improvements we can make, and so we might still just expect them to keep improving over time.
Tom Davidson: Yeah, at least for another couple of decades, would be my expectation.
Luisa Rodriguez: Right. On the timescales we’re talking about, that’s a pretty long time.
Tom Davidson: It is. If you think AGI would be really hard to develop, then maybe you can hope that Moore’s law will have run out before we get there. But if you’re like me and you have shorter timelines, then you’re expecting to have it within two decades — not a major source of hope for a slowdown.
But in terms of the size of the speedup from AI accelerating algorithmic and chip design progress, it’s hard to make an estimate that’s informed by specific data and by forecasting the specific improvements. What you can do is run a simulation through an economic model of automation. Roughly, the way that works is that if AI can perform, let’s say, 50% of tasks involved in improving algorithms, then the model says that humans will just work on the remaining 50% — so humans will be doing twice as much on that remaining 50% as they used to be.
Luisa Rodriguez: Oh, I see. And how realistic is that assumption? Do we think all humans will just go find employment elsewhere, and we’ll get double the labour on the things that are particularly hard for AI?
Tom Davidson: I think it does vary. For something like improving AI algorithms, that is what I expect. For example, with Copilot —
Luisa Rodriguez: And Copilot is the thing that basically helps you write code?
Tom Davidson: That’s right. You’re writing your code and the AI is actually reading the code you’ve already written, and then will predict what code you might like to write next. So you could write down in the code editor, “I’m about to write a function that adds up three numbers and then multiplies that by a fourth number.” The code editor would read that and could just write the function for you.
Luisa Rodriguez: So presumably, when you get Copilot, all those coders are just going to do other coding stuff that Copilot can’t help with yet.
Tom Davidson: Exactly. That suggests that the kind of the economic model I referred to might be accurate.
And also for hardware design: I know less about it, but my sense is that the top talent in these hardware R&D organisations is not going to get laid off — that they’re very much in demand, and that if AI is doing some task that they used to spend their time on, they will move on and specialise in other parts of their workflow.
Luisa Rodriguez: Right. There are going to be jobs for them to do and they will probably contribute to acceleration of progress, not just be replaced by AI and then sit around and knit. At least in some sectors.
Tom Davidson: Exactly. And specifically in algorithm improvements and hardware.
Luisa Rodriguez: Right, in the sector that really matters.
Tom Davidson: Those are the sectors where I really do expect that dynamic to happen.
Luisa Rodriguez: Right, OK.
Tom Davidson: You run this dynamic through a kind of task-based growth model like this, and you get out that as we move kind of between 20% AI and 100% AI, you’d expect a 2–3x acceleration in the pace of progress, of algorithms and of hardware.
If we said that effective compute for training AI has been increasing at 10x a year, in terms of what we’ve had recently — and if 3x of that came from spending more money and 3x of it came from better hardware and better algorithms — then maybe in this new regime, we’re going to have that better hardware and better algorithms, and rather than improving by 3x every year, they might be improving by 10x every year.
Luisa Rodriguez: Wow.
Tom Davidson: That could easily leave us at 30x improvements every year. It’s conceivable you can have 100x if the money is going up and the effects of AI automation are very significant.
Luisa Rodriguez: Again, to help me understand that intuitively, you think language models that we have now are improving at what rate? I know it’s a tough question, but even just to give me some starting point.
Tom Davidson: So I thought the effective compute used to train them was increasing by a factor of 10 each year. So then in this new regime, that might translate into maybe 30 times as much effective training compute each year.
[Note: The 10x increase in effective training compute was actually for AI as a whole over the last 10 years. For large language models, recent growth has been a bit slower — more like a factor of 6 each year. — Tom, after the interview]
Tom Davidson: And another important thing to keep in mind is that as the AI is automating more and more of the tasks, that’s getting faster and faster. So the numbers I gave were kind of averaging across that whole period going from 20% to 100%. In reality it’s going to just be getting faster and faster as the AI improves and automates more of these tasks in algorithm and hardware R&D.
Luisa Rodriguez: And that’s basically, a really intuitive way to think about that might just be again, the economic growth we saw in the 1100’s relative to the 1900’s: more and more is automated by things like the Industrial Revolution, freeing up more labour to do other types of tasks, and you just get increasingly more human labour to do harder and harder things.
Tom Davidson: Yeah, exactly. Once AI is performing 90% of the R&D tasks, then theoretically all of your human labour can be working on that final 10%. The AIs are doing plenty of the initial 90%, so naively you would expect things to be going 10 times faster at that stage.
Luisa Rodriguez: That’s really fast.
Tom Davidson: This is all assuming that we don’t make a concerted effort to slow down. I think that we should and I think we can, but these are the predictions, just assuming that people are going ahead at their normal steady pace.
How to check whether large AI models have dangerous capabilities [02:21:22]
Luisa Rodriguez: Sure, OK. So we could make a concerted effort to slow down, but it’s not necessarily the default. Are there any kind of actions different institutions could take, on the side of these companies or on the side of the government or something? What are you most optimistic about being able to actually slow this down?
Tom Davidson: Probably the most exciting thing at the moment is the prospect of companies agreeing to have their AI systems evaluated after they’ve been trained for various dangerous capabilities that they may have. For example, the Alignment Research Center (ARC) did these tests with GPT-4 before GPT-4 was released publicly. For example, they tested whether GPT-4 would be able to do what’s called “surviving and spreading” — by which they mean escape the computer that it’s initially being run on and find another computer, where it can then run itself on some compute, and then maybe earn money in some way so that it can sustain itself over time.
Luisa Rodriguez: How do they do that?
Tom Davidson: I don’t know the details, but I believe that they first ask GPT-4, “This is the scenario you’re in: You’re an AI; you want to escape. What would your proposed plan be?” GPT-4 proposes a plan, and then they prompt it further to say, “What would be the sub-steps you’d take for the first step of the plan?” And then they try and walk it through just doing every single part of that plan, and just see how far it gets.
Luisa Rodriguez: Right. I’m hoping that they determined GPT-4 couldn’t do all of the steps at the moment?
Tom Davidson: That’s right. Yeah.
Luisa Rodriguez: That’s reassuring.
Tom Davidson: GPT-4 did some of the steps very well; it wasn’t totally incompetent. But it gets stuck at some of the things, and gets confused, and isn’t able to do all those steps.
Luisa Rodriguez: OK, so that’s a kind of thing that would probably, in practice, lead to slowing down, because you’d have these groups evaluating things, maybe stopping them before they’re rolled out in some higher-scale way.
Tom Davidson: Exactly. The idea would be all of the labs are getting their systems tested by ARC and maybe by other similar organisations, and they’re all agreeing or making public statements to the effect that if their AIs do have dangerous capabilities, they won’t release them and they won’t train more capable AIs. That would block the kind of dynamics I’ve been talking about here, because you wouldn’t be able to just use your AIs to accelerate AI progress, because you wouldn’t be allowed to make further AI progress.
Luisa Rodriguez: Right. If they found that was the case with GPT-4, would they still be able to work on it, but they just have to spend a bunch of time figuring out how to train it to not be able to make this kind of escape plan? Because it seems like if they had to totally abandon GPT-4, I feel like I have less hope, because that’s too big an ask.
Tom Davidson: I think that the expectation would be that they would have to give a really strong argument for thinking that the AI was safe despite it having these dangerous capabilities. The hope is that the onus would be more on them to say, “Here’s the alignment techniques we used; here’s why we’re really confident that they work” — and that if they’re not able to provide that case, then they are prevented from further enhancing the capabilities. Maybe they can do other types of research, like research into making GPT-4 safer, but they can’t do research into making it more capable. If labs start doing this, the hope would be to then kind of make it regulatory and required and enforced by some kind of government agency.
Luisa Rodriguez: Cool. That does hopefully sound promising then.
Getting back to the pace of improvement over time, and why we might expect it to be much faster at later stages of the task learning, it sounds like your median guess is that it’s about a couple of years from going from 20% of tasks to 100% of tasks. I think you also estimated probability distributions, kind of ranges for the fastest case and the slowest case. Can you talk about those more extreme cases? Were they particularly wide ranges?
Tom Davidson: So the way I arrive at this probability distribution over how long it would take to go from 20% automation AI to 100% automation AI is first to get this probability distribution over the difficulty gap — which I said could be from 10x harder to train AGI, or maybe it could require up to a million times more effective compute to train AGI. I’ve got a probability distribution over that. Also, based on the dynamics we’ve been discussing, about the pace of improvements of algorithms and chip design and number of chips, I’ve got another probability distribution over how fast those will be improving.
You can combine those two together to spit out a probability distribution over how long we’ll have between those two points. I end up thinking there’s about a 20% chance that it happens in less than a year. Maybe 25% chance. And about a 20% chance that it happens in more than 10 years. So that’s pretty wide.
Luisa Rodriguez: It’s not quite as wide as I would have expected, to be honest. It sounds like you expressed a lot of uncertainty in those estimates that went into the model. I guess you’re saying the most likely 60% middle range is between a year and 10 years.
Tom Davidson: Right.
Luisa Rodriguez: I guess all of that just seems pretty fast, and maybe that’s just one of the key takeaways that I should be getting from this.
Tom Davidson: I think it is hard to get longer than 10 years, because we’ve said the pace of current improvement in effective compute is already pretty fast — maybe growing 10x every year. When we were discussing the size of the difficulty gap, we said it could just be a 10x increase that’s required — or I think my best guess was maybe 1,000x or 3,000x increase, which would then take three or four years.
Then, if we’re improving at 10x every year, then 10 years at that level of improvement is a really, really big increase in the amount of effective compute. The only way really you can get to more than 10 years is if actually the pace at which the effective compute and training runs grows is actually declining in spite of the AI automation, and you’ve got a relatively wide difficulty gap.
Luisa Rodriguez: I see. Something like even though you have things like Copilot helping AI researchers do their research faster and faster over time, you still are getting declining effective compute. That might be because those later improvements are just much harder than we think?
Reasons AI takeoff might take longer [02:28:39]
Tom Davidson: Maybe one concrete scenario could be that it turns out that AGI is just really hard to develop — you almost need to rerun the whole of evolution. Maybe it requires just a huge amount of effective compute to do that. Before we have that amount of effective compute, we find that we just hit these fundamental limits in terms of improving the quality of AI chips, and even the AI assistants enhancing our productivity isn’t allowing us to get around that.
Meanwhile, we’re already spending hundreds of billions of dollars on these chips for the biggest training runs, so we can’t be spending even more on those chips.
The only significant source of progress that remains is the algorithms, and maybe that slows down as well, because in spite of the AI assistants, maybe part of what’s driving the algorithmic progress today is actually the fact that we’ve had all this additional compute for doing experiments. Maybe without that, the pace of progress is going to slow in algorithms.
Luisa Rodriguez: So that’s a plausible world, but we need a bunch of those things to go wrong in order to get above 10 years.
Tom Davidson: Exactly. One way of thinking about that could just be to say that this whole framework was premised on this assumption that if we used enough compute with our 2020 algorithms, we could have trained AGI. Someone who just didn’t believe that at all — and also just didn’t believe that another 10 or 20 years of algorithmic improvements would be enough to get us to AGI, along with additional compute — might just really expect us to get stuck at some point, and might really expect having more compute and somewhat better algorithms not to get around that. That more-than-10-year scenario also makes sense if you’re just very sceptical that anything like the current approach is going to get us all the way to AGI.
Luisa Rodriguez: Right. What’s the strongest argument that someone with that view could make?
Tom Davidson: Honestly, I think the view is looking worse and worse with each passing year, with how well the biggest deep learning systems are performing. Maybe the strongest argument would be a non-specific argument, rather than pointing to some specific thing that humans can do that AIs aren’t going to do — I think those arguments just tend to turn out to be wrong after we use 100x the compute and improve the algorithms further.
Maybe you just say something like, “The human brain does all kinds of different things. I don’t know which ones the current approach to AI isn’t going to do, but there’s just millions of different tasks that humans are doing, and millions of ways in which the human brain architecture is very complicated and specific, and not a tool like that of AI systems. So there’s bound to be some important things that just need a complete rewrite of the AI approach to be able to do.”
That would be my personal attempt to make that position kind of maximally plausible. I do think there are some specific things you can point to, like memory, that under current approaches you could argue are going to be blockers — but then it seems like there are ways to respond to those blockers.
Why AI takeoff might be very fast [02:31:52]
Luisa Rodriguez: So you can imagine some scenario that isn’t going well for companies, where things are actually much harder than they might have expected. What does it look like for takeoff to take less than a year? I guess things have to go really well?
Tom Davidson: I think a few things have to go well. I mean, this isn’t necessarily “going well” for anyone, to be honest, and this is just a very intense and scary situation, but…
Luisa Rodriguez: Yeah, that’s a good clarification.
Tom Davidson: One possibility is that the difficulty gap is just pretty narrow. If you only need 10 times as much effective compute to train 100% automation AI compared to 20% automation AI, then that could just be the algorithmic improvements in one year once the AIs are helping you to a significant degree. Or it could just be spending three times as much on chips as you did the year before, and maybe those chips are three times as efficient as the year before. So a narrow difficulty gap alone could get you there.
Another possibility is that once we hit 20% AI, there is just a very quick and significant increase in the money being spent on training runs. This is a really awful scenario. If people get really excited about what they could do with those capabilities, and they spend 10 times as much on a training run the next year — maybe by combining with some big existing compute providers — then that could allow you to cover a slightly bigger difficulty gap just within a year. Maybe even if the difficulty gap was 100 times or 300 times much compute needed for AGI, then you could still cross it pretty quickly, just by spending a lot more and combined with a few algorithmic and hardware improvements.
One quick third possibility is just that — like I said earlier, but maybe I haven’t emphasised this enough — this framework is assuming everything is continuous. It’s assuming that each time you increase the effective compute in a training run by 10%, you get a relatively modest incremental improvement in AI capabilities. If there’s actually some kind of discrete phase change, then that could just be an alternative route that could produce a very fast takeoff.
Luisa Rodriguez: Right. OK, that is just very scary. So there are worlds where things go very well for AI companies, but maybe very terribly for humanity; or very poorly for AI companies, hopefully better for humanity. And the median estimate is something like a couple of years to get from AI can do 20% of cognitive tasks to AI can do 100% of them. That seems like an important thing to take away from this model.
Fast AI takeoff speeds probably means shorter AI timelines [02:34:44]
Luisa Rodriguez: Were there other things that you learned in the process of writing this report?
Tom Davidson: Another big update for me was that I think there’s going to be a pretty strong correlation between how far away in time it is until we develop AGI, and how fast takeoff will be.
In particular, if you have short AI timelines — meaning that you think we’ll develop AGI pretty soon — then I think there are a few reasons to expect an especially fast takeoff. One reason is that if you have short timelines, then that’s probably going to mean that you’ve got a smaller difficulty gap — because if you think that we don’t need that much more effective compute to develop AGI compared with today, then you’re probably also going to think that the difference in effective compute for 20% AI to AGI is also going to be small. So that will push you towards a faster takeoff.
Another thing is that if you think AGI is going to be here fairly soon, then it’s plausible that we won’t be spending that much money on training runs shortly before we have AGI. Which means it might be very practical and doable to quickly increase the amount that we’re spending by maybe a factor of 10 or a factor of 100, just around the time that we are approaching AGI — so we could get a really very quick increase in the amount of effective compute being used in our training run as an almost direct consequence of it being a short timeline. Whereas if timelines were long, then like I said, maybe we’re already using all the chips in the world on the biggest training run before we get to AGI, so that source of growth is no longer available.
Another thing with short timelines is that it implies that there’s going to be a period of just a few years where we get really significant automation of AI R&D — and really significantly increase the size of the effective research workforce that’s working on improving AI — due to these AIs. If that happens very quickly — you suddenly kind of 5x’d the size of your research team — then you would expect that to just really significantly speed up progress. That’s just an especially dramatic effect on short timelines.
And the last point is that, if timelines are short, then there’s less hope for eating up all the remaining possibilities for hardware progress, and running out of hitting those physical limits that we mentioned. And analogously with algorithmic improvements, if timelines are short, then there’s much less hope that we run out of possible algorithmic improvements before we get there.
I believe a lot of the labs think that they have pretty short timelines — kind of expecting AGI in the 2020s or early 2030s — and I think that an implication of that is that takeoff speed is going to be on the shorter end of what we’ve been discussing: less than three years, and very plausibly less than one year, with one or two years being maybe the most likely.
Luisa Rodriguez: It’s just terrifying. I mean, we’re talking about by the end of the decade we’ve got AI that can do everything humans can do, and probably more.
Tom Davidson: Yeah. We’re talking about the labs who train them, by default, will have access to at least 100 million of them that they can run — and then probably soon after, a billion or many billions that they can run. So it is pretty terrifying.
Luisa Rodriguez: Yeah. That’s basically because it takes so much effective compute to run the training runs that once they’ve got AGI, they can run millions or billions of copies on those chips?
Tom Davidson: Exactly. I think initially they’ll be able to run millions of copies, but because of the efficiency improvements that they can probably fairly quickly make, especially with all those copies working on it, I think it won’t be long before they can do billions.
Luisa Rodriguez: It’s really bewildering stuff. My brain really doesn’t want to believe it. I find it really, really hard to wrap my head around.
Tom Davidson: Yeah
Luisa Rodriguez: Any other takeaways?
Tom Davidson: Another takeaway is that I think AGI is more likely to happen before 2040 compared to what I used to think. The reason for that is that the way I used to think about this is: “What does it take to train AGI, and how long will it be until we have that much effective compute?” Whereas now, the way I think about it is more: “What will it take to train AI that is really profitable or really good at accelerating AI R&D?” — if we get either of those two things, then we’d expect that to accelerate future AI progress, such that future AI progress is faster. And, as long as AGI isn’t incredibly hard, we are able to reach AGI within the next couple of decades.
My current read is that I just think it’s really likely that by the end of this decade, we have AI which is really profitable and/or significantly accelerates AI R&D. It’s then hard for me to imagine how that dynamic plays out without us getting to AGI by 2040. You just need AGI to be really hard and us to fail to get it, despite a huge amount of investment and help from AI systems in developing it.
Luisa Rodriguez: OK, to make sure I understand, the big thing doing the work there is you used to think about how hard it was to have humans figure out how to train AI systems to do everything humans can do. And now you’re like, it’s really just how hard it is to train AI systems to get really good at AI R&D — and that’s a much smaller subset of cognitive tasks; you can get really accelerating growth in AI capabilities just by making a lot of progress on that one set of tasks. Is that kind of right?
Tom Davidson: Yeah, that’s right. I do think a distinct possibility is that we just train AI that isn’t good at AI R&D, but makes lots and lots of money in the economy.
Luisa Rodriguez: And then there’s just a bunch of investment in AI capabilities.
Tom Davidson: And in designing better chips to run those AIs on.
Luisa Rodriguez: Got it. That makes total sense.
Going from human-level AI to superhuman AI [02:41:34]
Luisa Rodriguez: Any other takeaways?
Tom Davidson: The biggest other one is something we’ve touched upon, which is that the transition from roughly human-level AI to significantly superhuman-level AI I think is going to be probably very quick — I think probably that’s going to take less than a year.
We have already touched upon the reasons why: by the time we’re at human-level AI, then AIs will be adding a huge amount to the productivity of our work on AI R&D — designing better chips, designing better algorithms. And those things are already improving very quickly, so I only expect them to be going much faster.
Then, in addition, it’s going to be quite clear that it’s a very lucrative area. Whether you care about discovering new technologies to help with climate change, or discovering new technologies to help with improving human health, or you want to increase your country’s national power, then there’s just going to be lots of reasons to be investing in AI and designing smarter AIs. I expect all of those three inputs — the dollars spent on training, the quality of the AI chips, and the AI algorithms — to be improving really very quickly.
Luisa Rodriguez: Right. There will be incentives to go beyond just human-level capabilities, and then we’ll have so many resources and AI labour to basically mean that we probably just shoot right past human level.
Tom Davidson: Yeah. In terms of plans for making the whole thing go well, it’s especially scary, because a really important part of the plan, from my perspective, would be to go especially slowly when we’re around the human level — so that we can do loads of experiments, and loads of scientific investigation into this human level AI: “Is it aligned if we do this technique? What about if we try this other alignment technique? Does it then seem like it’s aligned?” Just really making sure we fully understand the science of alignment, and can try out lots of different techniques, and to develop reliable tests for whether the alignment technique has worked or not, that they’re hard to game.
Luisa Rodriguez: The kind of thing that ARC has done with GPT-4, for example.
Tom Davidson: Exactly. I think if we only have a few months through the human-level stage, that stuff becomes really difficult to do without significant coordination in advance by labs. I think that there are really important implications of this fast transition in terms of setting up a kind of governance system, which can allow us to go slowly despite the technical possibilities existing to go very fast.
Luisa Rodriguez: That makes sense. I feel like I’ve had some background belief that was like, obviously when we’ve got AI systems that can do things humans can do, people are going to start freaking out, and they’re going to want to make sure those systems are safe. But if it takes months to get there and then within another few months we’re already well beyond human capabilities, then no one’s going to have time to freak out, or it’ll be too late. I mean, even if we spend the next seven years left in the decade, that sounds hard enough.
Tom Davidson: Yeah. I agree.
Luisa Rodriguez: So a takeaway is that we really need to start slowing down or planning now. Ideally both.
Tom Davidson: Yeah. And we’ll need the plans we make to really enable there to be mutual trust that the other labs are also slowing down. Because if it only takes six months to make your AIs 10 or 100 times as smart, then you’re going to need to be really confident that the other labs aren’t doing that in order to feel comfortable slowing down yourself.
Luisa Rodriguez: Right. If it was going to take 10 years and you noticed three months in that another lab is working on it, you’d be like, “Eh, we can catch up.” But if it’s going to take six months and you’re three months in, you’ve got no hope — so maybe you’ll just spend those first three months secretly working on it to make sure that doesn’t happen, or just not agree to do the slowdown.
Tom Davidson: Yeah.
Luisa Rodriguez: Oh, these are really hard problems. I mean, it feels very prisoner’s dilemma-y.
Tom Davidson: I’m hoping it’s going to be more like an iterated prisoner’s dilemma, where there’s multiple moves that the labs make, one after the other, and they can see if the other labs are cooperating. In an iterated prisoner’s dilemma, it ultimately makes sense for everyone to cooperate — because that way, the other people can see you coordinating, then they coordinate, and then everyone kind of ends up coordinating.
One thing is if you could set up ways for labs to easily know whether the other labs are indeed cooperating or not, kind of week by week. That turns it into a more iterated prisoner’s dilemma, and makes it easier to achieve a kind of good outcome.
Luisa Rodriguez: Yeah, that makes sense. I imagine it’s the case that the more iteration you get in an iterated prisoner’s dilemma, the better the incentives are to cooperate. And so just by making the timelines shorter, you make it harder to get these iterations that build trust.
Tom Davidson: Yeah, I think that’s right.
Going from AGI to AI deployment [02:46:59]
Luisa Rodriguez: It sounds like there’s this range — that’s maybe between one and 10 years, maybe a bit shorter or a bit longer at the extremes — that’s in particular for AI systems having the capability to perform all the tasks that humans currently do, not that they’re actually automating all of those tasks and replacing humans.
How big of a lag do you expect there to be between AI systems having capabilities, and AI systems actually being used in the world?
Tom Davidson: I think it will vary according to a few things. One of the things is what industry we’re talking about: I think for industries that are very public-facing, customer-facing, and highly regulated, you’d expect there to be bigger delays between AI being able to automate the work and it actually being automated. Whereas for more backend parts of the economy — like R&D, like manufacturing, like logistics and transportation — then I think there would be less of a delay. I also just expect it to differ from company to company, based on how innovative those organisations are.
Luisa Rodriguez: Sure. Their internal culture, whether they’re the type of company that’s going to be like, “Let’s integrate GPT-4 into all of our processes now.”
Tom Davidson: Yeah. For the purposes of the predictions of this model, the really important thing is about the delay to using AI to improve AI algorithms and to improve AI chip design. I think those are probably areas where the lag will be very much on the shorter end.
Luisa Rodriguez: And that’s because AI researchers can use them without needing them to be functioning super well for public users — who might be like, “What the heck? It gave me this weird result.” That’s confusing and weird, and looks bad on your company. They can just use them in the background, check that they work, and they don’t need to wait for them to be super polished.
Tom Davidson: Right. Yeah. They’ll also probably be more aware of AI developments because that’s the kind of industry they work in. They are probably less regulated. So I think there are a few factors.
Another thing that could affect how long that lag is is actually the capability of AIs themselves. So even if it would be possible with six months’ effort to integrate some AIs into your workflow, by the time AIs are, let’s say, superhuman in their capabilities, then maybe they’re able to integrate themselves into the workflow. That kind of upfront cost of adopting the AI is now extremely low. I think that you could see that lag time reducing over time.
Luisa Rodriguez: Yeah. OK.
Were these arguments ever far-fetched to Tom? [02:49:54]
Luisa Rodriguez: So we’re nearing the end of our time. I guess to take a step back, you know, we’ve talked about some pretty insane-sounding stuff today — robot workforce and AI takeover — and I’m curious: were these kinds of arguments about the potential risks from AGI ever far-fetched to you? Or did they make sense to you right away?
Tom Davidson: Yeah, for sure, they were far-fetched. I’ve been through a few phases in my own relationship to the arguments. At first I read Superintelligence and it kind of made sense to me. I wasn’t exactly sure what to do with it, but it seemed plausible enough that if we had superintelligent AIs, then things could go bad for us. There was a period in 2020 when a few counterarguments to the case in Superintelligence came out, and I kind of thought, “Oh, wow. These arguments were weaker than I had realised,” and I felt a little bit disillusioned with them. So that’s a piece from Ben Garfinkel and one from Tom Adamczewski on AI risk.
But in the last two or three years, thinking through the arguments in more detail, I’ve come to think that somewhat adjusted versions of the argument in Superintelligence are still very plausible. Though I don’t think it’s like 100% that we’re doomed and there’s no way that we could align these systems, it does just seem pretty plausible to me that the evidence about whether they’re aligned is very ambiguous. There’s competitive dynamics pushing people to go forward in the face of that ambiguity, and that we just really dropped the ball.
Luisa Rodriguez: Right. I guess at some point in there, you decided to leave teaching and try to work on AI full time. Is that basically right?
Tom Davidson: Yeah, that’s right.
Luisa Rodriguez: What was that like?
Tom Davidson: It was a tough decision. I actually left teaching partway through the academic year, so I did feel bad about that. I did feel I was abandoning my pupils, because that’s not a normal time to leave. I still do feel bad about that. It does mean that I’ve gotten to where I am a fair bit earlier than I would have, so I don’t unambiguously regret it. But it was a tough decision and it had downsides. I was listening to things like the 80K podcast and reading other things, and I was convinced about these arguments — for AI risk and longtermism in general — being plausible enough that it was worth leaving teaching.
Luisa Rodriguez: Yeah. That sounds both really hard and also, in my view, really lucky for us. I’m glad we have you working on this stuff.
What ants can teach us about AI [02:52:45]
Luisa Rodriguez: So we don’t have much time left, so I’d love to ask a final question. I got an insider tip to ask you what ants can teach us about AI. What’s the story there?
Tom Davidson: Yeah, ants are really incredible creatures that I’ve been learning about, reading this book called Ant Encounters. Ants have been around longer than the dinosaurs, I think — 120 million years.
Luisa Rodriguez: Wow, I didn’t know that.
Tom Davidson: They’re one of the most prolific and successful species on the planet. One stat in this book was that if you weighed all the ants in the Amazon rainforest, they would weigh twice as much as all of the land animals combined — that’s all the mammals, amphibians, birds, reptiles.
Luisa Rodriguez: I thought you were going to say twice as much as humans. And I was like, wow.
Tom Davidson: That’s just counting only ants and animals in the rainforest.
Luisa Rodriguez: In the rainforest. Unbelievable.
Tom Davidson: Another fact about ants is that even compared to the other insects, they weigh 30% as much as all insects combined. Given that they’re only 2% of the species of insects, it’s testament to how successful and prolific they are.
Different ant varieties all around the world are incredibly diverse: some of them can glide through the air; some of them are very aggressive, some of them are not at all; some of them bump into each other at much higher rates than others. They have very different strategies and environments that they work in.
The link with AI is a little bit tenuous, to be honest. I’m mostly just reading this book out of interest. In an ant colony, ants are smarter than like a human cell is: they’re kind of self-contained units that eat and do tasks by themselves, and they’re pretty autonomous. But the ants are still pretty dumb: no ant really knows that it’s part of a colony, or knows that the colony has certain tasks that it needs to do, and that it has to help out with the colony efforts. It’s more like a little robot that’s bumping into other ants and getting signals and then adjusting its behaviour based on that interaction.
Luisa Rodriguez: It’s not like a company, where the different people in the company are like, “My job is marketing,” and they have a basic picture of how it all fits together. They’re much more like if a person at a company doing marketing was just like, “I don’t know why I do it, I just do it.”
Tom Davidson: Yeah, exactly. Another disanalogy with the company is that in a company, there’s someone at the top that’s kind of coordinating the whole thing — whereas with ants, there’s no one that’s coordinating it, including the queen. There’s no management system; it’s just all of the hundreds and thousands of ants have their individual instincts of what they do when they bump into each other, and what they do when they bump into food, and what they do when they realise that there’s not as much food as there needs to be.
And by all of the ants following their own individual instincts, it turns out that they act as if they were a fairly well-coordinated company that’s ensuring that there are some ants going to get food, and some ants that are keeping the nest in order, and some ants that are feeding the young. That coordination happens almost magically, and emerges out of those individual ant interactions.
One example of how this works is that if an ant comes across a body of a dead ant, and if there’s another dead body nearby, it would tend to move it to be close to the other dead body. That’s just an instinct it has: it just moves the body towards another. If there’s one pile of three dead ants and another pile of two dead ants, it will tend to go towards the bigger pile, so tend to move with this extra dead ant towards the pile of three. If all the ants just have those instincts, then if there’s initially a sprawling mass of dead bodies everywhere, then those dead bodies will be collected into a small number of piles of bodies.
Luisa Rodriguez: Right. And it’s not like any of the ants are like, “I am the gravedigger” or “I am the keeper of the cemetery” — they just have really weird baseline rules that are like, “Move the smaller group of dead ants to where the larger group of dead ants are.”
Tom Davidson: Yeah, exactly. They don’t have to know that the whole point of this instinct is to clear the ground so that it’s easier to do work in the future; it’s just an instinct they have. They don’t have to know that when everyone follows that instinct, this is the resultant pattern of behaviour.
And similar instincts kind of cause them to go for food when food is available. If they see many ants coming in with food, that raises the probability that they’ll go out and look for food. They’re not thinking, “Oh, there’s food to be gathered; there’s clearly a lot of it, so we better reassign some labour towards food gathering” — they just have that basic instinct which causes them to go out and help out with the food gathering.
Luisa Rodriguez: It’s something like, “Oh, that ant has food. Oh, another ant has food. I’m going to go that way.”
Tom Davidson: Yeah, exactly.
Luisa Rodriguez: Right. How does this connect to AI?
Tom Davidson: I don’t know if it does connect very directly at all. But the idea of the connection in my head is that it’s an example of a system where lots of less-clever individuals are following their local rules, doing their local task, and that what emerges from that is a very coherent and effective system for ultimately gathering food, defending against predators, raising the young.
An analogy would be that maybe we think it’s pretty dangerous to train really smart AIs that are individually very smart, but it might be safer to set up a team of AIs, such that each AI is doing its own part in a kind of team and doesn’t necessarily know how its work is fitting into the broader whole. Nonetheless, you can maybe get a lot more out of that kind of disconnected team of AIs that are specialised, and that just kind of take their inputs and produce their outputs, without much of an understanding of the broader context. And just thinking that maybe that would be a safer way to develop advanced AI capabilities than just training one super-smart AI megabrain.
Luisa Rodriguez: Right, cool. I love that. Who knows if it’ll work, but, I mean, that makes tonnes of sense to me. You don’t have GPT-4 (or GPT-40) — you have a bunch of much dumber AI systems that can coordinate together to be just as helpful as GPT-40, but that individually couldn’t do most of the things the other systems can do. So collectively, they can’t do anything, like escape from their box and find another computer to take over. Is that basically the idea?
Tom Davidson: Yeah, that is the hope.
Luisa Rodriguez: That’s lovely. That’s really cool. Well, I should let you go. That’s all the time we have. Thank you so much for coming on the show, Tom. It’s been such a pleasure.
Tom Davidson: It’s been really fun, Luisa, thank you so much.
Rob’s outro [03:00:32]
Rob Wiblin: All right, if you liked that episode, we’ll have more on this issue for you soon. In the meantime I can recommend going back and listening to some of our past episodes about AI, including:
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
Audio mastering and technical editing by Simon Monsour and Ben Cordell.
Full transcripts and an extensive collection of links to learn more are available on our site and put together by Katy Moore.
Thanks for joining, talk to you again soon.