Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite
By Luisa Rodriguez, Katy Moore, Robert Wiblin and Keiran Harris · Published August 23rd, 2023
On this page:
- Introduction
- 1 Highlights
- 2 Articles, books, and other media discussed in the show
- 3 Transcript
- 3.1 Cold open [00:00:00]
- 3.2 Rob's intro [00:01:06]
- 3.3 The interview begins [00:02:38]
- 3.4 How Michael became interested in AI [00:03:41]
- 3.5 Surprising developments in AI [00:11:01]
- 3.6 The jobs most and least exposed to AI [00:16:31]
- 3.7 Why AI won't necessarily cause massive unemployment in the short term [00:47:13]
- 3.8 How automation affects employment at the individual level [01:07:23]
- 3.9 How long it took other technologies to have economy-wide effects [01:18:45]
- 3.10 Ways LLMs might be different from previous technologies [01:27:02]
- 3.11 Ways LLMs might be similar to previous technologies [01:48:40]
- 3.12 How market structure affects the speed of AI adoption [02:03:26]
- 3.13 Couldn't AI progress just outpace regulation? [02:12:16]
- 3.14 How other people think AI will impact the economy in the short term [02:17:22]
- 3.15 Why Michael is sceptical of the explosive growth story [02:24:39]
- 3.16 Whether AI will cause mass unemployment in the long term [02:33:45]
- 3.17 Career advice for a world of LLMs [02:56:46]
- 3.18 Relieving talent bottlenecks [03:16:53]
- 3.19 A musician's take on AI music [03:24:17]
- 3.20 Rob's outro [03:28:47]
- 4 Learn more
- 5 Related episodes
Do you remember seeing these photographs of generally women sitting in front of these huge panels and connecting calls, plugging different calls between different numbers? The automated version of that was invented in 1892.
However, the number of human manual operators peaked in 1920 — 30 years after this. At which point, AT&T is the monopoly provider of this, and they are the largest single employer in America, 30 years after they’ve invented the complete automation of this thing that they’re employing people to do. And the last person who is a manual switcher does not lose their job, as it were: that job doesn’t stop existing until I think like 1980.
So it takes 90 years from the invention of full automation to the full adoption of it in a single company that’s a monopoly provider. It can do what it wants, basically. And so the question perhaps you might have is why?
Michael Webb
In today’s episode, host Luisa Rodriguez interviews economist Michael Webb of DeepMind, the British Government, and Stanford about how AI progress is going to affect people’s jobs and the labour market.
They cover:
- The jobs most and least exposed to AI
- Whether we’ll see mass unemployment in the short term
- How long it took other technologies like electricity and computers to have economy-wide effects
- Whether AI will increase or decrease inequality
- Whether AI will lead to explosive economic growth
- What we can learn from history, and reasons to think this time is different
- Career advice for a world of LLMs
- Why Michael is starting a new org to relieve talent bottlenecks through accelerated learning, and how you can get involved
- Michael’s take as a musician on AI-generated music
- And plenty more
If you’d like to work with Michael on his new org to radically accelerate how quickly people acquire expertise in critical cause areas, he’s now hiring! Check out Quantum Leap’s website.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Highlights
The jobs most exposed to robots, software, and AI
Michael Webb: So I did a lot of work in my paper looking at, if you’re just aggregating, how does exposure vary overall on average as a function of how much education you have, or how much your job is paid currently, whatever it is. And I found a really interesting pattern of results, comparing AI to these previous technologies. So think about a graph where, on the x-axis, you have income or salary for a job — so on the left-hand side it’s very low paid; right-hand side it’s very high paid — and then on the y-axis, you have how exposed jobs at that level are.
So for robots, you have a line that basically starts high on the left and then goes down a lot: so it’s very low-skilled jobs, low-paid jobs that are exposed to robots, and high-skilled jobs are not at all exposed.
With software, you have a very different pattern, which is that actually the lower-skilled jobs are not exposed and the higher-skilled jobs are not exposed; it’s the middle-skilled jobs that are most exposed. And what’s cool is that this reflects a pattern that lots of other very careful research in economics has found about the impact of software in particular: it’s really impacted middle-skilled jobs.
Like, really careful studies specifically for software, middle-skilled ones are most exposed. So it was cool that I kind of replicated that with this very different method.
But the really interesting thing is that for AI, it’s a completely different pattern again. So for AI, it’s actually the upper-middle-skill jobs that are most exposed. So the line starts on the bottom left, at a low level, and then goes up and up and up and it peaks, I think, at the 88th percentile of jobs as sorted by salary — so really upper, upper income, high-paid jobs — and then goes down at the very top. So the CEOs, who are paid the most, are not exposed so much, but the lawyers and the accountants and whatever, they actually are exposed.
The really interesting thing is that the OpenAI paper — using a different methodology and focusing very much on GPT-4 and these new large language models, as opposed to the slightly earlier vintage of AI I was focusing on — they replicate this figure with their measure, and it’s basically exactly the same. So the same pattern.
Now, it turns out that many of those jobs are the most regulated jobs. So the doctors and the lawyers and the accountants, they’re the ones who actually have the most power in the economy and society to put up barriers and stop the exposure that might otherwise cause them to be paid lower wages or whatever. They can pull up the drawbridge and stay happy as they are. But on the pure economics of this — before getting to the political economy; imagine a fancy pretend world where there are no actual humans and no politics — it’s those jobs that are most exposed.
How automation can actually *create* jobs
Michael Webb: Let’s just look at this one sector that’s getting automated, and think about whether it really is the case that when you have big automation in the sector, the number of humans goes down. That’s intuitive, right? Automation means fewer humans. Done. Turns out, it’s not that simple. So there’s a few examples I’ll start with, and we can talk about what the broader lesson is.
So here’s one example. I think this is due to Jim Bessen, who’s an economist who studied ATMs, cash machines, where you go to a bank branch and get cash out. So before ATMs, there were individual humans in the bank. You’d go up to them and show some ID and get your account details, and they would give you some cash. Bank tellers, I think they were called. And you would think, ATM comes along, that’s it for those people: no more bank tellers, huge declines in employment in the banking sector.
What in fact happened is something quite different. So the ATM did indeed reduce the number of people doing that specific task of handing out money. But there are other things people do in bank branches as well. The big thing that happened is that because a given bank branch no longer needed to have all these very expensive humans, doing the cash-handing-out, it became much cheaper to open bank branches. So whereas before, there were only bank branches perhaps in the larger towns, suddenly banks were competing to open branches everywhere — because the more you can go into the smaller and smaller towns and villages, you can have more customers and provide them a service and so on.
So what happened was the ATM meant there were fewer staff per bank branch, but enabled the opening of many more bank branches overall. And that actually offset the first impact. So fewer staff per bank branch, but so many more bank branches that the total number of people in bank branches actually went up.
What they were doing was quite different. The humans now are doing more higher-value-add activities. They’re not handing out cash. They are doing other kinds of services, but you know, similar people doing a similarish job, and there’s actually more of them now.
The fancy economist way of putting this is: you have a “demand elasticity in the presence of complementarity.” So those are crazy silly words, but I’ll tell you what they mean. “Demand elasticity” means when you reduce the price of something, people actually want more of it. So automation generally brings the cost of things down. But what normally happens is, people don’t say, “Great, I’ll have the same amount of stuff.” They say, “No, I want more of that stuff now. Give me more, more, more.”
Then “in the presence of complementarity”: “complementary” means that humans are complementary to the automation, the technology, whatever it is, in some way — there’s still some humans involved. Fewer than before, per unit of output, but still some. Then because people now want more and more of this stuff, each unit of the thing is more automated, but there’s still some humans involved. And therefore, you end up possibly having ever more humans in total in demand, doing slightly different things, but still roughly in the same ballpark. Does that make sense?
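To make the arithmetic concrete, here is a minimal Python sketch of the ATM story with invented numbers: per-branch employment falls, but if cheaper branches mean many more branches, total employment can rise.

```python
# Toy illustration of "demand elasticity in the presence of
# complementarity". All numbers are invented for illustration.

workers_per_branch_before = 10   # tellers per bank branch, pre-ATM
branches_before = 1_000

workers_per_branch_after = 4     # ATMs take over the cash handling
branches_after = 4_000           # cheaper branches -> banks open many more

total_before = workers_per_branch_before * branches_before
total_after = workers_per_branch_after * branches_after

print(f"Employment before ATMs: {total_before:,}")   # 10,000
print(f"Employment after ATMs:  {total_after:,}")    # 16,000
# Fewer humans per branch, but so many more branches that total
# branch employment goes *up* -- the shape of Bessen's ATM story.
```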
How automation affects employment at the individual level
Michael Webb: So the final thing I think it’s really interesting to think about, and it’s often not intuitive, is the impacts on individuals. So we’ve talked about, we’ve accepted that there definitely could be some individuals whose jobs existed, and then they don’t — they disappear because they’re being automated. Nothing I’ve said so far is saying that doesn’t happen; that certainly happens a tonne. And I’ve given you some examples of why perhaps we shouldn’t worry so much about it, because there’s more demand in other parts of the economy, whatever. But what does that look like for the actual person experiencing it? And is it good or bad? And when is it good or bad? So there’s a couple of really interesting facts about the way things kind of work in the economy that I think are worth touching on briefly.
The first one is that there is this not-very-nice term, but it has a benign consequence: the term is “natural wastage.” So if you are a company and you’re hiring people — let’s say you’re McDonald’s — people leave: the average tenure is six months, so they start working for you, and six months later, they leave and go and get a better job. So that’s half of people leaving within six months, whatever. That’s called natural wastage: people naturally leaving. And you would include people retiring and whatever as part of that — that natural churn. That means there’s a very natural attrition happening in all companies all the time.
Let’s take McDonald’s as an example. Suppose McDonald’s somehow automated everything — the burger flipping and the cashiers. They’ve been trying for a long time, right? That’s slowly happening, but there’s still some humans there right now. Suppose they did it. All they would have to do is stop hiring any new people, and within a year, they would have essentially no employees, because everyone naturally leaves and goes and gets a better job anyway. That generally is what happens; the average tenure is like six months at McDonald’s. So you just sit and wait and everyone goes off of their own accord — no firing required, no displacement required.
And it makes a tonne of sense, right? Because if you are the mastermind organising the economy, and allocating people to different jobs — obviously, that’s not what’s happening — but if you are the mastermind, it would naturally be the right thing to say that people who have got all the human capital, and they’ve worked in the industry, and they’re going to find it really hard to move: let them keep the jobs. And then the young people, they shouldn’t get into it because that’s a bad bet for the long run; they should do something else. And people make those decisions for themselves, and that’s what happens. So you have these really interesting effects of that kind.
So the big macro thing is that: older people will stay in, and younger people move into different things. And that’s by far the most important individual-level effect. Now, where does that go wrong? It generally goes wrong in a couple of circumstances. Namely, it’s very much shaped by geography. So here’s what we know about where you can go and see people who have actually really been hurt by an automation technology coming along — or another kind of shock; trade is a big example: you know, China comes along and suddenly makes things cheaper. If you are a young person in a big city and your job goes away, you can generally go and find something else.
If you are an older person who’s been at a particular firm for a very long time in a town where that firm is the only large employer, and there was no other industry in that town — and also you’ve got this amazing union job: your wages are really high because of decades of strong worker empowerment and so on — and then that company goes away from that town, that is not a good place to be. Because empirically, people turn out to be stuck in their towns, right? They just don’t like moving. And if you’re in your 40s or 50s, got a family… Your house is now worth nothing, because there are no jobs anymore. So you can’t sell it. You can’t sell up and move somewhere else. That’s really hard. You can’t sell your cheap house and move to a much more expensive house in a city somewhere else. Your kids are in school, as you’re saying, et cetera. So what you see is people kind of get stuck, and there is no job of any comparable quality that they can do.
So on average, when you have these big plant closures, people do tend to go and get other jobs, but they often experience big wage declines, like 25% enduring wage decline. That’s not nice. That’s really horrible. That’s a really horrible thing to happen to someone. And that happened to large numbers of people at the same time in these geographically concentrated ways. That’s where things get bad. So if you’re young in a city, you’re kind of fine. If you’re older, mid-career, older in a small town with a single employer, and that’s the thing that gets automated: that’s when things look much less rosy.
How long it took other game-changing technologies to have economy-wide effects
Michael Webb: We can start by very quickly talking about what’s the baseline — like, how long do these things take for other technologies that were as big as AI seems like it will be — and then we can talk about why might AI be different and what will be the same.
So the two big examples that are everyone’s favourite are IT — computers in general — and then electricity. These are probably the two biggest general-purpose technologies of certainly the last 150 years. So how long did they take? Well, there’s an astonishing regularity in how long these things took. You can date the arrival of electrification to 1894, which is the time economists who study this tend to use — I think it’s a couple of years after the first proper power station was built — and date IT to 1971. I’m not sure why economists use that date; maybe it was when some IBM mainframe properly came online or something. Anyway, those are the dates people seem to use in economics.
And if you plot the x-axis as years following the arrival of IT or electrification, and then the y-axis is percent of adoption that’s happened — so the 0% is no one has it; 100% is now everyone has it — it turns out those two lines sit exactly on top of each other. So IT diffused basically as fast as [electricity]. So surprising point number one is that these things that were 100 years apart almost took as long as each other, even though you might expect things to be moving faster later in history. And the second interesting fact is that it took a long time. So it took 30 years to get to 50% adoption.
One final quick interesting fact: think about all technology and capital in the economy — take the US, and think of every bit of factory equipment and every computer and everything you might think of broadly as technology, capital equipment type stuff. In 1970, basically close enough to 0% of the capital stock consisted of software and computer equipment — hardware and software. In 1990, it had only got to about 2%. And then by 2000, it had gotten to 8%. So the real inflection is about 1995, if you look at the graph. The point is there were two and a half decades of actually very slow [growth]. Everyone thought, “This is it. We’re here: IT era. Go!” And yeah, 25 years later, nothing to see. And only after 30 years do you see a real increase. And even then, even in 2000, only 8% of the capital stock consisted of computer software and equipment.
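For readers who want to see the shape Michael is describing, here is a small sketch of a logistic diffusion curve, hypothetically calibrated so adoption reaches 50% at year 30. The parameters are invented; the point is the shape, where the early decades look like almost nothing is happening and then the curve takes off.

```python
import math

def adoption(years_since_arrival, midpoint=30, steepness=0.15):
    """Logistic S-curve, the shape typically fitted to technology
    diffusion. Hypothetically calibrated so adoption hits 50% at
    year 30, matching the regularity Michael describes."""
    return 1 / (1 + math.exp(-steepness * (years_since_arrival - midpoint)))

for t in [0, 10, 20, 25, 30, 40, 50]:
    print(f"year {t:2d}: {adoption(t):5.1%} adoption")
# year  0:  1.1% ... year 25: 32.1% ... year 30: 50.0% ... year 50: 95.3%
# Decades of apparent stagnation, then a steep middle stretch.
```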
Luisa Rodriguez: Yeah. And was most of the thing happening in that early period the technology improving? Or was it just the technology being incorporated into the world, and the world catching up in various different ways took that long?
Michael Webb: Very much both. Very much both. Think about technology in the 1970s compared to the 1990s: IT was getting ever more user-friendly, ever cheaper. You know, Moore’s law was happening all through this time: so you wait a few years, it gets twice as fast and half as expensive. So that’s happening. And people always wait a long time to get to the point where it’s actually worth adopting. And it takes a long time for companies to adjust all their operations to make good use of this stuff, right? And we’ll say more about that in a second when we think about LLMs.
Another interesting example, actually, is the automation of the telephone system. So do you remember seeing these photographs of generally women sitting in front of these huge panels and connecting calls, plugging different calls between different numbers? The automated version of that was invented in 1892. However, the number of human manual operators peaked in 1920 — 30 years after this. At which point, AT&T is the monopoly provider of this, and they are the largest single employer in America, 30 years after they’ve invented the complete automation of this thing that they’re employing people to do. And the last person who is a manual switcher does not lose their job, as it were: that job doesn’t stop existing until I think like 1980.
So it takes 90 years from the invention of full automation to the full adoption of it in a single company that’s a monopoly provider. It can do what it wants, basically. And so the question perhaps you might have is why?
So the way it worked in the case of AT&T was: there’s a fixed cost of automating any particular telephone exchange. The exchanges are physically located in different places. The telephone exchange in a city is going to have thousands, hundreds of thousands of wires coming into it. So by switching that to automated, you save loads of humans. Whereas all these different exchanges in the middle of nowhere, in rural areas, might only have one human. You don’t save much by switching, but the cost of doing all that change in the equipment is actually still really high. There’s a huge fixed cost, and so you don’t bother doing it until you really, really have to. If you look at the history of AT&T, they started by automating the big cities, and the very last thing to be switched over from human to automated was, I think, on some island somewhere with a tiny population. It was just the last thing that was worth doing.
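A toy cost-benefit model with invented figures captures the ordering Michael describes: automate an exchange only when the operator wages saved beat the fixed conversion cost, so big-city exchanges convert first and one-operator outposts last.

```python
# Toy model of the AT&T automation ordering. All numbers invented.

FIXED_COST = 50_000      # one-off cost to automate any one exchange
ANNUAL_WAGE = 2_000      # cost of one manual operator per year
HORIZON_YEARS = 10       # payback window the firm insists on

exchanges = {
    "big city": 500,       # operators staffing each exchange
    "mid-size town": 40,
    "small town": 3,
    "rural island": 1,
}

for name, operators in exchanges.items():
    savings = operators * ANNUAL_WAGE * HORIZON_YEARS
    worth_it = savings > FIXED_COST
    print(f"{name:13s}: {operators:3d} operators -> automate? {worth_it}")
# The big exchanges clear the bar immediately; the one-operator island
# never does, until costs fall or wages rise -- so it converts last.
```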
Ways LLMs might be similar to previous technologies
Luisa Rodriguez: What will make AI similar to other technologies that have been kind of general-purpose, big game changers?
Michael Webb: So I think there’s two buckets: there’s a “humans are humans” bucket, and then there’s the government bucket. So let’s start with the government bucket. The government bucket is basically regulation. But I’ll put it as a broader bucket and just call it “collective action.” Government is one kind of society-wide collective action, but there are other things — like unions and professional bodies and all this kind of stuff.
So, here’s a question: Do you think that in 10 years’ time, you’ll be able to just talk to a language model, and it will prescribe you a prescription-only medication, which you can then go and collect from a pharmacy? Do you think that would be legal? Because, by the way, it’s possible today: it’s good enough; basically we’re there, right? Would it be legal?
Luisa Rodriguez: Yeah. As soon as I start thinking about it, I’m like, there are a whole bunch of interest groups that are going to want that not to happen. There are some interest groups that are going to feel worried that it’s going to make mistakes; there are interest groups that just want to be protecting the people in the jobs that are doing that now. So it seems at least plausible to me that people somewhere will decide that we shouldn’t make it legal. Though I don’t know. In 10 years, it also wouldn’t surprise me, to be honest.
Michael Webb: Right. You’re absolutely right in the sense that there are these very powerful interest groups. So some of the areas that will be most affected by AI — that we all agree, I think, AI seems very likely to be able to do — are things like what the doctors do and what the lawyers do. Doctors and lawyers, separately, have the most powerful lobby groups you can possibly imagine: the American Medical Association, the British Medical Association, and then for lawyers, it’s the Bar, the Bar Council, the various solicitors’ bodies. So here’s one thing that happens: they do all of the kind of professional standards for that profession. They decide who gets to be a doctor, and they decide how many doctors get to be accredited as doctors every year — or lawyers, whatever. Right? If you just open a newspaper basically any day of the week, you will see how powerful doctors are.
And so regulation has always been something that comes from government and collective interest groups together. So unions, whether they’re blue-collar unions or professional white-collar bodies — which sound like they’re not unions, but they really are unions; they don’t have the word union in the title, but they’re definitely unions — are very, very, very powerful. And so these really, really slow down all kinds of applications — possibly for good reasons a lot of the time. An open question in any given case is whether we should or shouldn’t slow down the application, given the harms involved. But they are always going to argue for “You need the human completely in the loop, and we shouldn’t change a thing, and we should keep our salaries the same” and so on and so forth. So I have no idea what’s going to happen in any particular case. But I think we can be extremely sure that there’s a tonne of interest groups that are going to be pretty successful for a pretty long time in stopping things from changing faster than it’s in their interests for them to change.
Then the other bucket of “humans are humans” in terms of the way they make decisions: So I talked about how LLMs could make it easier to retrain, but you still have to want to retrain, or do things differently in some way.
Think about teaching as an example: LLMs could completely change the way classrooms are run. The teacher would spend much less of their time marking, and maybe lecturing, and more time doing one-to-one support, whatever it is. Maybe teachers want to do that, maybe they don’t. I don’t know. I imagine most of them would want to do that, actually. But one thing I’m quite sure in saying is that there is no way the government will be able to force teachers to start adopting this software and using it in certain ways. The teacher is master of their classroom, right? There have been many examples of governments wanting to make teachers do things differently, and generally, it’s very hard. Occasionally things can get changed in certain places — I know that happened with phonics in the UK — but in general, teachers’ unions have a lot of power, and the government cannot control what happens in classrooms. And so that again applies in lots of different places. The stronger the union, the more it applies. But in general, humans don’t like change for the most part. They like things the way they are.
Whether AI will be rolled out faster than government can regulate it
Luisa Rodriguez: AI seems like it moves incredibly quickly. If, in the next year, we get improvements to GPT-4 that are basically double the ones we got last year, will there already just be really extreme impacts? And not just impacts, but adoption — such that some of these regulatory effects just don’t keep up, and so don’t slow things down the way you might expect they would, or the way they have in other cases?
Michael Webb: I think that the things we were talking about before — in terms of all the reasons that interest groups and lobby groups can slow things down — as I said, I think those very much apply here. And so even though the technology is moving really quickly, they will “keep up” in terms of stopping it being used, right? However fast it’s moving, you can always pass a bill to say no, right?
So the thing that I’d be more worried about is the sharp end of capabilities — the things that you’ve had many guests on this podcast talk about — as well as misuse and those kinds of things. That’s where I’d be more concerned about regulation keeping pace. Because there, it’s not like you have to persuade lots of people in the world economy to adopt your thing and change their systems. All you need is just one bad person to have a very clever thing and to do bad stuff with it, right?
It’s those kinds of things where you have to worry more about regulation moving fast enough. But even there — I’m not an expert on the history of nuclear regulation, but I believe something like the following is true. At some point, someone convinced the US government, the US president, that nuclear was a really big deal, and it was possibly very dangerous. And with a single stroke of the pen — I don’t know whether that was a presidential executive order or congressional legislation — almost overnight, all research on nuclear anything was classified. So you’re a researcher, you’re just doing your PhD, sitting at home, doing some physics or whatever. Suddenly, from tomorrow, you doing any more work on that is illegal. The government can just do that, right? The US government can do that.
And you can imagine that if people do enough to convince governments that this stuff is really, really scary — in terms of the existential risk level of this — the government can be like, “OK, you convinced me. As of now, we are classifying all research on AI.” That could just happen tomorrow, and then all these companies would just shut down overnight. And that would be the law, and they couldn’t do anything about it, end of story. That’s a completely possible scenario, in terms of the powers governments have.
Luisa Rodriguez: So it’s not that fast government action is impossible; it’s that it doesn’t happen that often. And sometimes when it does happen, it happens suboptimally — it’s too slow.
Michael Webb: It always happens suboptimally, right? It’s obviously slow. Or it’s too fast and it’s too blunt. As I say, I’m not an expert, but I imagine there’s stuff that was classified under the nuclear rules that it was completely reasonable not to classify, and people should still have been able to work on it, but they couldn’t. Maybe we’d have much better nuclear energy today if that hadn’t happened.
So there’s all kinds of ways in which any regulation is going to be very much not first best: second best, or more realistically third best. And I think we’re in a really scary place right now, because regulation, if it happens, could do a lot of good. It could do a lot of harm as well. And so we’re going to have to tread very, very carefully.
Whether AI will cause mass unemployment in the long term
Luisa Rodriguez: Yeah. It feels both low and high to me. But it could be really high. It could be 50%; it could be 90%. At some point, we’ll probably get to superhuman AI, and it can do all the tasks we can and more. But even 50% feels pretty different to what’s happening now. And I’m wondering if, at that point, any of these models will even apply? At that point, is the world just too different for this kind of conversation to be applicable?
Michael Webb: Yeah. So I think I’m going to stand up for economists here and say yes: the models do apply, all these considerations do apply. So let’s think about the question: Wouldn’t it be different if we’re talking about 90% of jobs being automated? Let’s go back to a place we started earlier in the conversation, thinking about agriculture in the US. In 1790, it was a true statement to say, “In the coming years, 90% of jobs will be fully automated.” That’s a true fact. That’s in fact what happened.
That happened over a 100-, 150-, 200-year timeframe, and so the speed of this change is really important. But then don’t forget — back to our talk about unions and the American Medical Association and politics and so on, not to mention all the rational decisions of company CEOs and so on — there’s all kinds of forces that mean these things take a long time, even if in theory one could do lots of stuff quickly. There’s also just these capital availability constraints and all kinds of things as well.
There’s just not enough spare cash flowing around in the world for everyone to do that at the same time. Or there’s not enough resources, because adopting technology requires all kinds of work to be done, and you can’t just stop the entire economy whilst you retool everything.
People still want to eat food, and they still want to fly in planes, and whatever it is. You can’t just down tools and say, “No, all we’re doing for the next five years is switching everything over to LLMs.” You can only take so many planks out of your boat and replace them while you’re sailing in the water at the same time.
And so all these kinds of constraints I think are not obvious until you think about them. So that’s point one: Even in a world with 90% of tasks automated, we have been there before. It happened. It happened lots of times. And we’re still here, and things are fine, right? Things look quite different from 1790, but many things are still the same. In that sense, things can get weird, but there’s still some sort of upper limit in how fast I think they will naturally get weird from an economic perspective.
That said, let’s think about what happens when it is 90%, whether that comes in 100 years’ time or whether it comes in 10 years’ time. I think there’s a few really important things here. So we generally are going around saying, “Gosh, what if it automated 90% of cognitive tasks?” Big emphasis around the word “cognitive.” Many, many tasks in the economy are not cognitive tasks. And back to the old thing we’ve been discussing all the way through: when you automate something, suddenly all the incentives go towards how do you make more value out of the stuff that is left that is not automated, or that humans can now do because they’ve been freed up and they can do something else now. And I think there are many, many things that are not cognitive, that there’ll be huge amounts of demand for humans to do.
Articles, books, and other media discussed in the show
Michael’s work:
- Quantum Leap
- The impact of artificial intelligence on the labour market
- Are ideas getting harder to find? — with Nicholas Bloom, Charles I. Jones, and John Van Reenen
- On the Rationally Speaking Podcast: Are ideas getting harder to find?
- See all of Michael’s research, and get in touch with him, on his website
Technology, innovation, and the economy:
- Learning by doing: The real connection between innovation, wages, and wealth by James Bessen (also discussed on the EconTalk podcast)
- The dynamo and the computer: An historical perspective on the modern productivity paradox by Paul David
- Organizational and economic obstacles to automation: A cautionary tale from AT&T in the twentieth century by James Feigenbaum and Daniel P. Gross
- Are we approaching an economic Singularity? Information technology and the future of economic growth by William D. Nordhaus
- The growth of low-skill service jobs and the polarization of the US labor market by David H. Autor and David Dorn
- Banks scramble to fix old systems as IT ‘cowboys’ ride into sunset by Anna Irrera
- Engels’ pause: Technical change, capital accumulation, and inequality in the British Industrial Revolution by Robert C. Allen
- The race between education and technology by Claudia Goldin and Lawrence Katz
Effects of current AI on jobs and the labour market:
- The Clark Center for Global Markets Economic Experts Panel’s polls on AI and productivity growth: Europe and US
- GPTs are GPTs: An early look at the labor market impact potential of large language models by OpenAI researchers Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock
- Generative AI at work by Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond
- Experimental evidence on the productivity effects of generative artificial intelligence by Shakked Noy and Whitney Zhang
- Your job is (probably) safe from artificial intelligence in The Economist
- The growing importance of social skills in the labor market by David Deming
- Myth or measurement: What does the new minimum wage research say about minimum wages and job loss in the United States? by David Neumark and Peter Shirley
- Labor market institutions and the distribution of wages, 1973-1992: A semiparametric approach by John DiNardo, Nicole M. Fortin, and Thomas Lemieux
- The effect of minimum wages on low-wage jobs by Doruk Cengiz et al.
Other 80,000 Hours podcast episodes:
- Tom Davidson on how quickly AI could transform the world
- Markus Anderljung on how to regulate cutting-edge AI models
- Jan Leike on OpenAI’s massive push to make superintelligence safe in 4 years or less
Everything else:
- Economic possibilities for our grandchildren by John Maynard Keynes
- The housing theory of everything by John Myers, Ben Southwood, and Sam Bowman
- The rise and fall of American growth: The U.S. standard of living since the Civil War by Robert Gordon
- Mind’s eye: Grounded language model reasoning through simulation by Ruibo Liu et al.
Transcript
Cold open [00:00:00]
Michael Webb: So think about a graph where, on the x-axis, you have salary for a job — so on the left-hand side it’s very low paid; right-hand side it’s very high paid — and then on the y-axis, you have how exposed jobs at that level are.
So for robots, you have a line that basically starts high on the left and then goes down a lot: so it’s very low-skilled jobs, low-paid jobs that are exposed to robots, and high-skilled jobs are not at all exposed.
With software, you have a very different pattern, which is that actually the lower-skilled jobs are not exposed and the higher-skilled jobs are not exposed; it’s the middle-skilled jobs that are most exposed.
But the really interesting thing is that for AI, it’s a completely different pattern again. So for AI, it’s actually the upper-middle-skill jobs that are most exposed. So the line starts on the bottom left, at a low level, and then goes up and up and up and it peaks, I think, in the 88th percentile of jobs as sorted by salary — so really upper, upper income, high-paid jobs — and then goes down at the very top. The CEOs are paid the most and not exposed so much, but the lawyers and the accountants and whatever, they actually are exposed.
Rob’s intro [00:01:06]
Rob Wiblin: This episode is just so good, you’re in for a treat today! Or at least I loved it, so you’re in for a treat if you have taste like mine — I learned a tonne in this conversation.
Luisa interviews economist Michael Webb of DeepMind, the British Government, and Stanford, about how AI progress is going to affect people’s jobs and the labour market, something he has been studying for many years.
Will we see mass unemployment anytime soon? Whose incomes will go up and whose will go down? How widely shared will the benefits likely be? Will regulation greatly slow down deployment? How likely is it that AI will lead to explosive economic growth? What can we learn from history, and what reasons are there to think that this time is different than what we’ve seen before?
Luisa and Michael are so thorough that it’s not until the second hour that they actually get to those arguments for expecting AI to be an exceptional case. So if you’re wondering why they’re not more seriously contemplating futures that get weird, just stick around!
That said, Michael’s expectations for the economic effects of AI in the short-to-medium term aren’t as radical as some previous guests’ — like Tom Davidson, who spoke about his modelling of a growth explosion earlier in the year — and I’m glad we have this interview so that you can hear every different angle on this question.
All right, without further ado, I bring you Luisa Rodriguez and Michael Webb.
The interview begins [00:02:38]
Luisa Rodriguez: Today, I’m speaking with Michael Webb. Michael most recently served as a senior aide in the British government.
Before serving in government, Michael was a research scientist at Google DeepMind and received his PhD from Stanford University. He’s published articles on a wide range of topics and was coauthor of the now-famous article “Are ideas getting harder to find?” He’s also done influential work on the topic we’ll be focusing on today: “The impact of artificial intelligence on the labour market.” Before Stanford, Michael studied at Balliol College, Oxford and MIT. He was also, at various points, an organ scholar and choral conductor, an epidemiology researcher at Harvard, and a war correspondent for The Economist, where he reported from Afghanistan.
Thanks for coming on the podcast, Michael.
Michael Webb: Thanks for having me. It’s great to be here.
How Michael became interested in AI [00:03:41]
Luisa Rodriguez: So I hope to talk about the impact AI will have on the labour market over the next few years, and in particular, how it’ll affect our jobs. But first: if we’re going to speak about AI, I often know a lot about guests’ positions on the broader topic before they come on. And I actually don’t know that much about yours. So I am curious how excited or worried you are about the development of AI and possibly AGI?
Michael Webb: Awesome. I have been somewhat engaged in this question for quite a long time. So more than 10 years now, I’ve been hanging out with people who think that the end is nigh, or at least big things are going to happen very soon. And actually, at the very beginning of my PhD in 2014, I decided that this seemed like the most important question that people could possibly be thinking about and studying, and so ended up pivoting all of my research to focus on that question.
And as part of that, I ended up working with DeepMind for several years as a research scientist, and did a bunch of different things. One of the things I did — which I can’t say much about, because for obvious reasons, it’s quite confidential — was some kind of exercise that you might call today a timelines or forecasting exercise, to think about what was happening with AI in the world and what it would be able to do at different points in the future.
So I can’t say much about that. However, I think one thing I’m comfortable sharing is that one thing that everyone in all of DeepMind pretty much agreed with was the following claim. We were talking about different capabilities that might become possible with AI at different points in the future. One capability we talked about quite a lot was whether an algorithm would be able to write an undergraduate-level essay on Foucault — Michel Foucault being the famously abstruse French philosopher. And the reason I used that example was because writing an essay on Foucault was my hardest week in all of my undergrad. And indeed, I ended up spending much more than a week doing it in the end, and had a great time, but it was extremely hard. So I thought of it as a nice benchmark of a hard task.
And so I asked this question to all these people at DeepMind, all these researchers, and everyone universally agreed that an algorithm would not be able to write an undergraduate level essay on Foucault for a very, very, very long time from now — say the 2040s.
Luisa Rodriguez: And when was this again?
Michael Webb: This is in 2017, I should say. Six years ago now.
Luisa Rodriguez: OK, so at least a few decades.
Michael Webb: Right. And so as you might guess, when I got access to GPT — I guess, GPT-3, the text-davinci-003, whatever that was in 2020, 2021 — for the first time, the first thing that I put into it was this essay title that I had to answer myself as an undergraduate a decade ago. And it did a pretty passable job: it clearly passed my internal mental benchmark of what a decent undergraduate essay on this would look like.
And so that, for me, was this very visceral feeling of, here’s this thing that the smartest people — with all the knowledge and on the cutting edge in the world — thought, a small number of years ago, was completely decades away. And these people were themselves on the leading edge of being optimistic about this. Then here we are, something that was going to be 10, 20, 30 years away, and suddenly, we just rushed past it in three, four, five years. So that was a feeling of, whatever I think, in terms of how fast things will be or how long they could take, it’s possible for me to be very wrong on the side of things moving much faster than you expect.
And so I guess the question for me is: Do I say, let’s take how fast things actually were between 2017 and 2022, say, and just assume that will continue? I was too low before; it’s going to be faster. Or do I think, actually, you know what? What we did was we took the linear forecast from 2017, and it turned out it was the second derivative — the acceleration — that was what was happening, and so we actually need to continue forecasting the acceleration. I think we can have a long discussion about this. And I think probably the answer is it’s somewhere in the middle, and maybe there’s been acceleration and maybe it’s going to slow a little bit for various reasons.
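As a stylised illustration of the two forecasting rules Michael is weighing (all numbers invented), compare extrapolating a made-up capability score linearly with extrapolating its acceleration:

```python
# Two hypothetical extrapolation rules over a made-up "capability score"
# observed at two points in time. Everything here is invented.

t0, y0 = 2014, 10.0
t1, y1 = 2017, 16.0

rate = (y1 - y0) / (t1 - t0)      # first derivative: 2 points/year
dt = 2022 - t1

# Rule 1: linear -- assume the observed rate of progress just continues.
linear_2022 = y1 + rate * dt                              # 26.0

# Rule 2: accelerating -- assume the rate itself keeps growing
# (constant second derivative; 1 point/year^2 chosen arbitrarily).
accel = 1.0
accel_2022 = y1 + rate * dt + 0.5 * accel * dt**2         # 38.5

print(f"linear forecast for 2022:       {linear_2022}")
print(f"accelerating forecast for 2022: {accel_2022}")
```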
But broadly, I think we are already at an incredible level of capabilities, which we’ll spend a lot of time talking about for the rest of this conversation, and there’s a lot of very obvious low-hanging fruit with what we already have to make it ever more effective. And I strongly suspect that we’re going to see at least as big a jump as we saw from GPT-3 to GPT-4 again in the next year or two. And that’s going to be huge, because the first jump was enormous. I’m curious whether you think this is sort of obvious, or a controversial statement, but it seems to me that it is better than the average human at pretty much everything where you can make a fair comparison — where it’s a text-in/text-out type of thing. That’s a massive deal, and I think people haven’t gotten their heads around how big a deal that is.
Luisa Rodriguez: Totally. I think there’s a thing where people are like, “Well, it can’t do this super well. And so it’s still clearly limited.” But if you take most of the things it can do well, it is just both much better at that thing, and also has a bunch of things it’s insanely better at. Including just productivity. Like, even if the one essay on Foucault is only somewhat better than the average student’s, it could write thousands of them in a day. And I feel like people miss some of the ways it’s not just roughly parity, or somewhat better, but also just worlds better in a few other important ways. But go on.
Michael Webb: I guess these elements we’ll discuss in no small detail over the course of today, vis-à-vis economic impacts. I imagine some listeners at least will be very bored by now of hearing people geek out about exactly what it can do and how great it is. So I don’t want to spend lots of time sharing my personal experiences of all of that, other than to say I think it’s completely amazing. Depending on the day — like, if I’m in meetings all day, then no, but if I’m on my laptop all day doing whatever it is — I would easily spend three, four, five, six hours a day interacting with GPT-4 or Claude for the most part. And it’s just been transformative for many, many areas. In pretty much anything you can think of, there’s a way it can really transform things.
So the answer is: yes, I’m extremely excited, and also concerned because this is going to have huge impacts. And there’s all kinds of safety issues as well — which you have covered well on this podcast with other guests — that I think are incredibly serious and important.
Surprising developments in AI [00:11:01]
Luisa Rodriguez: Cool, well let’s dive into your area of expertise. So as you’ve said already, there’s been a lot of public discussion about how AI systems like large language models might affect the economy and the labour market since the release of GPT-3 and GPT-4.
You’ve already hinted at some of the capabilities, but I was particularly blown away by some of the test scores it got. So it scored in the 88th percentile on the LSAT, the 99th percentile for the verbal GRE, the 80th on the math GRE. It got fives on the AP exams in biology, macroeconomics, and microeconomics. It did much worse on writing-oriented exams, but still, the overall picture seemed to me to be that these LLMs are just getting really good at some types of things — and as you said, good enough that they might be able to contribute more and more to the economy. And again, you’ve alluded to this as well, that there seem to be loads of anecdotal reports of people using LLMs in their day-to-day lives.
I’m wondering if there have been any particular developments that you found especially surprising or impressive?
Michael Webb: Yeah. So many. But let me just mention a couple of them. The first one is everything that it can do with code. So in particular, the fact I can just say, “Make me an interactive visualisation that does this” or “Create something for me that has these properties” or whatever it is. It can just create that, something that would have taken a software engineer a full day’s work or something.
You know, I’ve got lots of engineering experience, but I haven’t made front-end apps or whatever. And in this case, recently I was making a front-end app that I wanted to use for something. Just did it in like 20 minutes. Just done, from end to end.
Luisa Rodriguez: Wow.
Michael Webb: And so this is amazing in terms of what it makes possible: the reduction in cost of highly customised software or widgets or who knows what in so many different areas. And this also means that you can interact much more easily with existing systems. Often you have this friction where some new technology comes along and there’s tonnes of work that has to be done to integrate it into your existing system so that it kind of plays nicely with everything else. But because the way that one “plays nicely” with other things generally involves code of some kind, well, GPT can write that code. It can do it for you. So it makes the interaction much more seamless and quick to achieve. That’s kind of one bucket of things.
And a second one, very quickly, is the fact that you can hook it up to other things — not just to integrate it into a wider system, but to the point where the thing itself that you care about is what’s coming in and going out of the algorithm. So say we’re answering physics problems. It’s super easy to hook up an LLM to other tools — a calculator is the obvious simple example. So it turns out that, just like humans, LLMs are not that great at mental arithmetic coming out of the blocks. However, you can give it sort of a prosthesis of a calculator, and it can learn to query the calculator, and it can know when it needs to do so and then get much better answers.
And you can take this to a much broader area. So one paper I saw, I guess, last year now, that I thought was amazing and also very scary — I think it was a DeepMind paper — where they were training an algorithm on answering physics questions. They said, here’s how well it does if you just use the language model, with chain-of-thought prompting and various tricks that people have figured out are important. But then, separately, let’s take a physics simulator. So over the last decades, people have made all kinds of fancy software that is carefully designed to do particular tasks really well. And one such software is a physics simulator, where you encode all the rules of Newtonian mechanics and whatever into the simulation.
Luisa Rodriguez: And it’s doing something like, if you do all this stuff to all these particles, where will the particles end up?
Michael Webb: Precisely. Yeah. And so what the algorithm learns to do is: you give it some hard natural language physics question, and it learns to turn that physics question into code that it can use to query the fancy physics engine. And then on the other end, the engine produces some code as output, and the algorithm reads that output code, turns it back into natural language, and produces the answer to the original natural language physics question.
And so that’s just massive, right? And you’re seeing this all the time today, in perhaps more prosaic but really important areas like querying databases, or all kinds of things. And it’s similar with humans: our brains can do certain things, but they’re way better when they have paper to write on and books to look things up in, or the internet, or friends to ask, or whatever it is. And these models can do all those same things! So it means that it’s just incredible — they’re almost unlimited in what they can do, because they can access all these other things; they can be extended so much more easily.
Luisa Rodriguez: Right. Yeah. I agree that is huge.
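For the curious, here is a minimal, self-contained sketch of the “prosthesis” pattern Michael describes: the model requests a tool, a harness runs it, and the result is fed back into the context. The model call and the tool protocol below are stubbed stand-ins, not any real API.

```python
# Sketch of an LLM tool-use loop. `query_model` is a fake stand-in for
# whatever LLM API you use, and the CALL/RESULT protocol is invented.
import re

def calculator(expression: str) -> str:
    # A real harness would use a safe expression parser, not eval().
    return str(eval(expression, {"__builtins__": {}}))

def query_model(prompt: str) -> str:
    """Hypothetical LLM call: fakes one turn that requests the
    calculator, then a turn that reads off the returned result."""
    if "CALC_RESULT" not in prompt:
        return "CALL calculator: 1234 * 5678"
    return "The answer is 7,006,652."

def run(question: str) -> str:
    prompt = question
    reply = query_model(prompt)
    match = re.match(r"CALL calculator: (.+)", reply)
    if match:  # the model asked for the tool
        result = calculator(match.group(1))
        prompt += f"\nCALC_RESULT: {result}"   # feed result back in
        reply = query_model(prompt)
    return reply

print(run("What is 1234 * 5678?"))
```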
The jobs most and least exposed to AI [00:16:31]
Luisa Rodriguez: Moving on a bit to the thing I was especially interested to talk about today. There seems to be a huge range of views held by experts in these fields — in economic papers, but also corporate research studies and government white papers — about how big of an impact these systems will actually have on the economy. So: will all of those uses have real impacts not just on the economy broadly, but also on the labour market — whether it’ll cause more unemployment, or reduce people’s wages, or increase or decrease income inequality?
You wrote this great paper on which jobs are most and least exposed to AI substitution — so this thing where AI automates away certain tasks — and how that might affect people in different industries, and income inequality in particular.
And I want to find out how your views have changed since you wrote the paper. But before we do that, I want to dive into the paper in some depth first. Let’s take things step by step. For context, in the paper, you looked at the jobs that were most exposed to AI based on just the AI capabilities that existed and had been written about in 2020, which is when you published the paper. Is that right?
Michael Webb: That’s right. And in fact, a tiny bit before 2020, because these papers take a long time to write.
Luisa Rodriguez: Sure, because you wrote the paper beforehand.
Michael Webb: And it’s worth saying a little bit about the methodology very quickly. So there’s a whole line of papers that try to think about the impacts of the exposure of particular jobs to particular technologies.
What they generally do is use a bunch of data on what’s involved in particular tasks in particular jobs. In particular, the US Bureau of Labor Statistics has sponsored for a long time this wonderful project that basically looks at every job in the economy and writes down exactly what’s involved in that job in a standardised way. So all these papers use that, in various iterations. And so you hand me a description of what a doctor does, say, “interprets tests to diagnose patient’s condition” or whatever.
So what I do with that task description is, I say it’s very hard for me to sit around and cleverly decide what’s going to be automated because of some technology. Let’s use the hive mind, and much smarter people to figure out what’s going to be automated. And a really good place for that is the text of patents. So a patent: you have an invention, you’ve got a patent saying, “This is my invention. I might use it.” And there’s many patents for every application of AI you can possibly think of.
And so that means that all these inventors have done the hard job for us of thinking what’s actually worth automating with AI. And because patents go back literally hundreds of years, the same thing has been done for everything — from steam and electricity to more modern IT and robots, and so on and so forth. And so you can kind of backtest the method and see if it works in some sense.
And a benefit of this is that patents are kind of forward looking. So at any point in time, you file a patent today, basically before your invention is adopted. So think about the 20-year adoption curve or whatever for some technology: generally, the patent is year 0 of that, or year minus three or five or something, right? So it is actually very forward looking. And so it means you’re not engaging in wild speculation, nor are you asking people who don’t really know much about automation or anything like that. You’re going to the actual experts who have incurred a cost: it costs like $10,000 or $20,000 to file a patent, so you would only do it if you’re quite confident it’s actually going to happen. So you can think of it as aggregating the forecasts of a huge decentralised set of people in the actual economy who are really incentivised to make the right decisions here about their forecasts.
Luisa Rodriguez: Who are putting money on this bet.
Michael Webb: Exactly. They’re putting money on the bet. So that was my idea. And I did this, but to be clear, there’s other ways of doing it. I guess the original way of doing it relied on the BLS-sponsored government people who go and collect the data on tasks. They don’t just collect the task word or the sentence describing it in language; they also have all kinds of other dimensions, like: Does this task involve upper body strength? Does it involve writing at the level of a college graduate? Whatever it is, right? And there’s loads of these things. And so the very first kinds of papers that did this kind of exercise, you know, 20 years ago, picked some of those dimensions and said, let’s take this dimension as capturing what it is that software can do.
And then more recent papers have done something else. So there’s a paper that came out very recently by some folks at OpenAI and some of their coauthors. They basically wrote down a prompt describing what they think GPT-4 can do, got GPT-4 to read all the text descriptions of the tasks, and got it to label whether GPT-4 thought, itself, that GPT-4 could substitute for these things. They also had some human labellers, and they compared the human labellers with the GPT-4 labellers. You know, asking people you recruit. I don’t know whether they used Mechanical Turk or Upwork or whatever to find the human labellers, but you’re generally getting people who are smart, but not domain experts in all the different domains, or in automation, right?
So the advantage of my thing is that you’re actually relying on the skin-in-the-game forecasts of actual experts in these areas, and you can backtest it. But there’s also plenty of bad things about patents. We can have a long discussion about why patents are a bad measure for this kind of thing. So there’s no one right or wrong method, and you hope that different methods produce similar results for the things you care about. Or that would be nice: converging evidence on certain kinds of impacts. And indeed, we see that in some things, which we’ll talk about in a second.
Luisa Rodriguez: So a thing that had been done before was looking at not just jobs, but the kinds of skills required for those jobs, and then making educated guesses about whether a technology has the skills to automate away those jobs or those tasks. You mentioned upper body strength: I guess the idea is that if some manufacturing jobs require upper body strength, and then you get robots that are strong, then the technology meets the criteria for automating away that particular job, or at least part of the job. Then you can make predictions about how many of those tasks are going to be automated away by that technology.
And then there’s this other version, where you’re asking GPT-4 to actually make predictions about which tasks it’s going to be able to automate away.
But your methodology, which I think is really cool, looks at patents, which basically require domain experts, or at least kind of experts, to make predictions, at some monetary cost, about what their technology is going to be able to do. Then I guess you do some sort of matching from the text in the patents to the types of skills reported as being required for these jobs. And because they’re putting money on that technology being able to eventually automate away those tasks, you’re reasonably confident that jobs with those kinds of tasks are going to be exposed — where “exposed” just means at risk of being either automated away or otherwise impacted by that new technology. Am I basically getting that right?
Michael Webb: Exactly right.
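To make the matching step concrete, here is a minimal, purely illustrative sketch of the general idea: score each occupation’s task descriptions against a corpus of patent texts by text similarity, and call the best match the task’s exposure. This is not the paper’s actual pipeline, which uses the full O*NET task database and far more careful processing of millions of patents; all the data and the TF-IDF matching below are invented for illustration.

```python
# Illustrative sketch of patent-to-task matching (hypothetical data;
# not the actual pipeline from the paper).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical O*NET-style task descriptions, keyed by occupation.
tasks = {
    "clinical lab technician": "interpret microscope images to diagnose a patient's condition",
    "parking lot attendant": "record number plates and issue parking tickets",
    "barber": "cut and style customers' hair",
}

# Hypothetical AI patent titles/abstracts.
patents = [
    "method for automated interpretation of medical images",
    "neural network system for diagnosing disease from eye scans",
    "apparatus for optical character recognition of licence plates",
]

# Put tasks and patents in one TF-IDF space so similarities are comparable.
vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform(list(tasks.values()) + patents)
task_vecs, patent_vecs = matrix[: len(tasks)], matrix[len(tasks):]

# A task's "exposure" here is its best match against the patent corpus;
# occupation-level exposure would average over all the occupation's tasks.
scores = cosine_similarity(task_vecs, patent_vecs).max(axis=1)
for occupation, score in sorted(zip(tasks, scores), key=lambda x: -x[1]):
    print(f"{occupation:28s} exposure ~ {score:.2f}")
```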
Luisa Rodriguez: OK, great. That makes lots of intuitive sense to me. So you said that you came up with this idea, and then you actually applied it to other technologies besides AI, to figure out whether it actually captures the thing and makes predictions that line up with what actually happened. What technologies did you do that with, and how well did the methodology seem to perform?
Michael Webb: So I did it with robots and software as well as AI, and that’s what’s in the paper. I also spent a bunch of time doing it for steam and electricity; that’s not in the paper because there’s a bunch more work to do on that, and it’s pretty hard because you’re dealing with very ancient archival documents that have to be digitised and stuff. But anyway, it’s definitely doable for those as well.
And so for robots and software, it was, I’d say, reasonably compelling in terms of what came out, of what is exposed and what is not exposed. So one example for software: Who’s exposed? Parking lot attendants. What do they do? They go around and record number plates and issue tickets saying “I’ve agreed to park here” or whatever. That’s exactly what software embedded in a parking machine can do. And so, great, software takes over that. You don’t really have parking lot attendants anymore — or many, many fewer. And jobs that weren’t exposed included barbers and podiatrists, which makes perfect sense, because there’s nothing software can do there.
And so in terms of AI, this is where it shows up that I’m looking at patents pre-2020. So people were thinking very hard about what AI could do based on the then-current state of the art and what might be possible. They hadn’t seen GPT-4, so they generally hadn’t patented loads of text-based stuff. They did patent a bit that seemed plausible, but they hadn’t spent loads of money saying, “Yes, this is definitely going to be possible, because we’ve seen it.”
And so, what I found was most exposed to AI were clinical lab technicians: people who stare into microscopes and do visual interpretation of images. AI can do that. Chemical engineers: I’ve got friends running startups that are using AI to automate chemical engineering. Optometrists: optometry is one of the areas of AI in medicine which is performing really well; it can just do stuff with eye scans. Power plant operators were a top one. When I was at DeepMind, one thing we did was use AI to automate the cooling system of a data centre, which is a very similar job, I think, ultimately, to power plant operation in some respects. So that was just happening. And the final one was dispatchers, taxi dispatchers: of course, that’s exactly what the Uber algorithm does. So I was most pleased with these exposed occupations, because it felt like these are exactly the ones that make sense.
And then the least exposed were animal caretakers, food preparation workers, postal service mail carriers, college professors, and arts and entertainment performers. Which again, all seemed like things that AI is not going to do much with, because they all heavily involve human interaction, and often all kinds of other stuff, like physical labour, as well.
Luisa Rodriguez: Yeah. Interesting. Those make lots of sense. But I guess there’s been loads of progress since you wrote that paper. So I’m very curious if you’ve had any big updates on specific jobs since the paper was written? For example, you might have said fashion modelling isn’t very exposed, and then maybe Midjourney makes it incredibly exposed. Maybe musicians soon, for example. Yeah, what have been your updates?
Michael Webb: Yeah. So I guess there’s kind of my intuitive updates, and then there’s people who have taken my method or done similar things to my method, and what have they found with more up-to-date data.
And so the OpenAI paper did something very much in the same spirit as what I was doing — which is itself in the spirit of what others had done before — but using these much more up-to-date exposure descriptions, as it were, and not using patents; they’re doing something else. And they found that, for them, the occupations most exposed to GPT-4 were interpreters, translators, journalists, poets, writers, mathematicians, and court reporters. So very much things that make sense that GPT-4 could do.
Luisa Rodriguez: Yeah. It’s interesting though, because it does make sense now, but it doesn’t sound like any of those were in the paper from 2020, which I guess highlights how quickly things changed — how much wider the scope for task automation has gotten since then. And it’s been three years.
Michael Webb: Yes. GPT-4 and these large language models just feel to me like a completely different thing to the AI that was being studied in 2018.
Luisa Rodriguez: Right. So those are some specific types of jobs that might be automated away, or that might just be exposed and impacted by GPT-4. Were there broader patterns?
Michael Webb: Yeah. So for me, the most interesting thing in my paper and also in the OpenAI paper — which they sort of talked about, but haven’t commented on that much in the paper itself — is this: I think the measures we’re both using are very noisy. There’s all kinds of errors that are going to show up for particular jobs: in their case because of the way the prompt was written, and in my case because of idiosyncrasies of patents and whatever. And you’re on slightly safer ground by taking a step back and doing some averaging. You know, there could be some noise, but on average, you can say things about categories of job or people or whatever.
So I did a lot of work in my paper looking at, if you’re just aggregating, how does exposure vary overall on average as a function of how much education you have, or how much your job is paid currently, whatever it is. And I found a really interesting pattern of results, comparing AI to these previous technologies. So think about a graph where, on the x-axis, you have income or salary for a job — so on the left-hand side it’s very low paid; right-hand side it’s very high paid — and then on the y-axis, you have how exposed jobs at that level are.
So for robots, you have a line that basically starts high on the left and then goes down a lot: so it’s very low-skilled jobs, low-paid jobs that are exposed to robots, and high-skilled jobs are not at all exposed.
With software, you have a very different pattern, which is that actually the lower-skilled jobs are not exposed and the higher-skilled jobs are not exposed; it’s the middle-skilled jobs that are most exposed. And what’s cool is that this reflects a pattern that lots of other very careful research in economics has found about the impact of software in particular: it’s really impacted middle-skilled jobs.
Luisa Rodriguez: Empirically.
Michael Webb: Empirically. Exactly. Like, really careful studies specifically for software, middle-skilled ones are most exposed. So it was cool that I kind of replicated that with this very different method.
Luisa Rodriguez: That is really cool.
Michael Webb: But the really interesting thing is that for AI, it’s a completely different pattern again. So for AI, it’s actually the upper-middle-skill jobs that are most exposed. The line starts on the bottom left, at a low level, and then goes up and up and up, and it peaks, I think, around the 88th percentile of jobs as sorted by salary — so really upper-income, high-paid jobs — and then goes down at the very top. So the CEOs are paid the most and not so exposed, but the lawyers and the accountants and whatever, they actually are exposed.
Luisa Rodriguez: Fascinating.
Michael Webb: And just quickly, before we go to the story, the really interesting thing is that the OpenAI paper — using a different methodology and focusing very much on GPT-4 and these new large language models, as opposed to the slightly earlier vintage of AI I was focusing on — they replicate this figure with their measure, and it’s basically exactly the same. So the same pattern.
Luisa Rodriguez: Wow. That’s cool. Really validating.
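As a rough, purely hypothetical sketch of what constructing a figure like that involves (all numbers below are invented to mimic the AI pattern just described; the real papers merge estimated occupation-level exposure scores with BLS wage data):

```python
# Hypothetical exposure-by-wage-percentile figure (invented data).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 500  # pretend "occupations"
wage_pct = rng.uniform(0, 100, n)

# Invented shape loosely mimicking the AI pattern described above:
# exposure rises with pay, peaks near the ~88th percentile, then dips.
exposure = np.exp(-((wage_pct - 88) ** 2) / (2 * 25**2)) + rng.normal(0, 0.1, n)

# Average exposure within 5-percentile wage bins, as in the papers' figures.
edges = np.linspace(0, 100, 21)
bin_index = np.digitize(wage_pct, edges)
bin_means = [exposure[bin_index == i].mean() for i in range(1, len(edges))]

plt.plot(edges[:-1] + 2.5, bin_means)
plt.xlabel("occupation wage percentile")
plt.ylabel("mean AI exposure (invented)")
plt.show()
```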
Michael Webb: Now, it turns out that many of those jobs are the most regulated jobs. So the doctors and the lawyers and the accountants: they’re the ones who actually have the most power in the economy and society to put up barriers and stop the exposure that might otherwise cause them to be paid lower wages or whatever. They can pull up the drawbridge and stay happy as they are. But on the pure economics of this — before getting to the political economy, in a fancy pretend world where there are no actual humans and no politics — it’s those jobs that are most exposed.
Luisa Rodriguez: Were there any other findings from that paper that are worth pointing out?
Michael Webb: Actually, there’s some different papers, some follow-on papers, on differential impacts across different kinds of people. So what my paper and this more recent one were talking about is that there are different jobs, and different jobs get paid different amounts. On average, it looks like the higher-paid ones are going to be more exposed. Though a given kind of job will have different kinds of people doing it, right? So there are brilliant genius doctors and there are very novice ones who are just starting and don’t know very much yet, for example. Are these different kinds of people going to be impacted differently by, for example, having access to GPT-4 or something similar?
So there’s some really interesting papers that have started to look at this experimentally. They actually recruit real people, give them access to GPT-4 or whatever it is, and see what happens. My favourite one of these is very well done, a really cool experiment. It’s by Erik Brynjolfsson, Danielle Li, and Lindsey Raymond, and it’s called “Generative AI at work,” from earlier this year. They did a really big study in a real company. They got basically technical support customer service chat agents. So real humans typing over chat.
I’m guessing there’s some sort of help button customers press, and they get a live chat with the customer support person. And they did an experiment where the treatment group gets access to a model and the control group doesn’t. And in this case, it wasn’t like they were given an extra window open on the computer with GPT-4 or ChatGPT. They actually embedded it: they built it into the software that’s helping you inside the chat system. So it suggests things for you, and you click on the suggestion instead of having to write it out yourself.
And what they found is really interesting. So comparing the treatment group, with these extra suggestions from GPT or whatever it was, versus the control group that just had business as usual, there was a huge impact on the things they cared about — in terms of, you know, problems resolved per hour and how happy customers were, whatever — huge impact for the lowest-skill customer service agents, but basically no impact for the highest skill agents.
Luisa Rodriguez: Right. So the people who are already crushing it didn’t benefit that much. Because they’re just good at their jobs.
Michael Webb: Because it’s suggesting things you were going to do anyway. So it hasn’t helped very much. But if you weren’t very good, it’s actually improving what you’re doing.
Luisa Rodriguez: That is a really cool result.
Michael Webb: Yeah. So my interpretation of that is that in a world where… You know, this is not low-skilled work, but in the grand scheme of the economy, it’s relatively low-skilled. And there’s a kind of performance ceiling: there’s only so good you can be at it. The best you can possibly do is resolve the customer’s problem. So there’s a real performance ceiling here. And the high-skilled are at that ceiling already.
Luisa Rodriguez: Yeah. Right.
Michael Webb: And so what GPT-4 was doing — or ChatGPT or whatever they were using, I’m not sure — is it brings the lower-skilled people up to that performance ceiling. That’s why there’s an impact for the low-skilled but not for the high-skilled.
There’s another really cool paper that was very different actually, but the same theme. It’s by two grad students at MIT, Shakked Noy and Whitney Zhang, who are really awesome.
So in this other one, they got college-educated professional people — they recruited them on Upwork or somewhere — to complete tasks like writing press releases and writing delicate emails and that kind of thing. They unfortunately don’t have really good measures of prior ability. For the other paper, there was a huge track record of these customer conversations and how good they were, like resolved customer problems per hour or something. Whereas for this experiment, they basically just have people self-report “How good do you think you are at writing?”: not very good, quite good, or very good.
And what they found is interesting. It’s a fairly small sample size, but there was basically no clear trend in the benefit of ChatGPT as a function of your prior (at least self-reported) ability. So the best people benefited just as much as people who thought themselves to be the least good writers.
So you might think, in this case, writing a press release or writing a delicate email, there’s not so much of an upper bound on performance.
Luisa Rodriguez: There isn’t the performance ceiling. Yeah. Right.
Michael Webb: You can just keep making that press release better and better. There probably is some ceiling, but it’s, like —
Luisa Rodriguez: But it’s much higher and fewer people are already right at it. Yeah.
Michael Webb: Exactly.
Luisa Rodriguez: That makes sense.
Michael Webb: And for the context they were doing it in, they weren’t recruiting, you know, people who were literally the world’s best press release writers; they were recruiting random people who happened to be on Upwork. So that’s an interesting different context.
And then the summary of this stuff, I think, is that right now, we don’t really know. There’s a bit of interesting evidence that where there’s a performance ceiling, you can really help the low-skilled. But I think there’s a lot of anecdotal evidence as well that people who are really brilliant, particularly in sort of creative tasks, can have huge impacts with access to these models.
So take software developers, for example. You might think that it takes someone who can’t code and suddenly they can code. That’s a massive deal. That’s like 0 to 1, right? So that’s a huge improvement. But also, think of someone who before was designing the architecture, doing some very high-level senior developer type work, and then had to have 10 humans actually implementing it. Maybe now, I do that work as a senior person designing the architecture, and GPT-4 or whatever does all of the work that was done by humans. And so I, as that senior person, am now 100x more productive, because I’ve got the equivalent of many people’s labour working for me. And then you just need me; you don’t need the other people anymore.
And a separate point worth noting is that a 10% increase for a so-called 10x developer is a lot more in absolute terms than a 10% increase for a more normal developer. So there’s some mechanical sense in which if you’re getting a percentage increase in what you’re doing, then the absolute size of that increase is [bigger]. This is a pretty obvious maths point, but I think it’s worth making here.
Luisa Rodriguez: No, I don’t think it’s totally obvious. Yeah.
Michael Webb: OK. The absolute size of that gain is going to be bigger if you’re already more highly skilled, if you’re getting about the same percentage increase.
Luisa Rodriguez: Yeah, 10 times 10 is bigger than 1 times 10.
So just to kind of summarise some of these points, one thing that’s standing out to me is just that there aren’t uniform effects on different types of skills or different types of jobs: different kinds of skills and different jobs will actually be hugely differently impacted, depending both on what the skill is and also on how skilled you are at it. And you’re just going to get impacts going in all sorts of different directions. I guess I did have some simplistic picture: some kinds of people will be unemployed, and some types of people will have their productivity go up. But now I’m picturing a billion arrows going in different ways.
Michael Webb: Yeah. So we can average out all these effects at the economy level and get something. But absolutely, at the individual task level — we’re just talking in terms of, “Are you a low performer or a high performer?” and “Are you a novice or an expert?” — I’m very sceptical there will be a general answer. You know, “For a given task, LLMs help the low-skilled more than the high-skilled”: that will not be something one can say about all tasks. It will be very specific to different tasks, and the economy will be evolving a tonne — in terms of the demand impacts: when you can deskill a job, do you want even more of it? Or whatever. There’s all kinds of things going on there, different incentives for people to use different labour in different ways.
A more general point here actually is about wherever there is the most displacement. So suppose it’s the case that middle-high-skilled people with college degrees are the ones whose jobs are most exposed and possibly automated away. That then means that there are now huge numbers of people in the economy with these skills who need something else to do. So if you are an entrepreneur, you have a massive incentive to figure out how to make those people productive, right? To invent jobs: basically, invent ways of using those people’s labour to create value, for the specific kinds of skills they have.
Luisa Rodriguez: That they can do, but that GPT-4 can’t do yet or something.
Michael Webb: Exactly. And that’s always a constant, dynamic process. As soon as there are people out there whose skills are now being done by GPT-4, but who are still very skilled people, someone’s going to come along and figure out what we can get them to do that is very valuable and will create value. That’s true whether we’re talking about very low-skilled people who can’t read and write, or very high-skilled people who’ve got PhDs, yet the research is all being done for them or whatever it is. Right? That process is always going to be happening.
I think right now, there’s a really interesting point, which is that, in a sense, whatever GPT-4 can do is now the baseline. So as a human, what are you adding on top?
Think about software engineering. You can imagine this kind of hollowing out of the middle of software engineering: suddenly it’s possible for much lower-skilled people, with less training and less experience, to create the kind of software that used to take a person earning £150,000 a year. And suddenly, the thing that’s bottlenecked — the thing the humans are actually doing — is that they have the context: they’ve done the user interviews to figure out what we should build next. But the skill of actually writing the code is done by the algorithm for the most part, and they’re just making sure the tests are the right tests and things are working as they’re supposed to and whatever. So you’ve really deskilled that, and now you have many more people who need much less training to do that kind of software engineering.
But then, at the other end, you might say there’s also a different kind of person who adds a tonne of value on top of what GPT-4 can do. Because maybe right now GPT-4 can only reason about a codebase of a certain size, and it doesn’t know everything about exactly what sort of cluster setup you’ve got or whatever. And so that kind of architectural thinking, and all that managing of all these lower-skilled humans, needs lots of people skills, and interaction with the business side of your company and what their objectives are, and translating that, with the traditional product-manager-type skills.
You can imagine all those kinds of things being more high end now; you’re adding a tonne on top of GPT-4 there. So that suggests there’s more low-skill demand and more high-skill demand of a slightly different kind. I think you’ll see something like that everywhere. And one of the key questions is: If you’re good, and you can use GPT-4, what can you do with 1,000 assistants who are better than you at the thing you currently do? How would you orchestrate them? If you can do that, you’re now very productive: orchestration at that scale is the hard thing, and you’re doing it. And that’s going to be really valuable.
Luisa Rodriguez: Very valuable. These feel like ways that the economy is going to smooth things out in the labour market. Are there any other kinds of effects that you expect to see?
Michael Webb: I think there’s one kind of more macro thing, which is more historical, to do with the overall pattern of inequality over time. You might think a technology comes along, and it can do one of two things: it can increase inequality or it can decrease inequality. And what the historical record suggests is that it’s not as simple as that: the same technology can increase inequality at one point in time and then decrease it at another.
Luisa Rodriguez: Do you have an example of that?
Michael Webb: Yeah. Well, it’s not so much one technology; it’s more the Industrial Revolution in Britain in the 19th century. So there’s an economist called Robert Allen, an economic historian, who wrote a very famous paper called “Engels’ pause” in 2009. He makes the point, through lots of very careful measurement, that inequality increased a lot, and workers kind of lost out. The share of national income that was captured by people doing wage-paying work, labourers, really went down over time, and the share of the output that went to the owners of capital went up. But then at some point, that reversed, and inequality decreased a lot. So the workers caught up.
And one story for what’s going on there is that at the beginning of a period of fast technological change and automation, there’s lots of adoption going on. What, concretely, is adoption? It is companies investing money in order to change their production processes and figure out how to use this new technology, and then actually paying the money to get it: buying the new equipment or the software subscriptions or whatever it is. And so at that moment, capital — which is to say, you know, people who’ve got spare money sloshing around that can then be invested; companies will borrow money or issue stock to raise money or whatever it is — suddenly becomes in really high demand.
You invest capital and get these wonderful returns on it, because you’re now doing this huge amount of automation or whatever it is. So the gains flow a bit more towards the people who have the capital — because the thing that is relatively more scarce and more important is the capital, because of this really expensive-to-do automation. There are these huge gains to be had from automating, and therefore the people who have the scarce resource — which at this point is money floating around, spare cash to invest — get the benefit of that.
But then over time, you finish doing all of that. Capital becomes less scarce; the automation, all that stuff, has been done. And at the same time, all these entrepreneurs have come along and figured out, “We’ve got all these machines doing this other stuff. I’ve figured out some ways of now making workers more productive in this newer world.” And if workers are more productive, they get paid more, under standard conditions. So workers then start getting paid more, and the economy rebalances and readjusts, and you’re back to where you started, almost, in terms of inequality.
And this, by the way, is over, I don’t know, a 50- or 100-year period. So this is very long-term historical stuff. I would not be surprised if we saw something similar today with AI, and maybe it’ll all happen faster. But I think it’s quite possible you’ll see something similar.
Why AI won’t necessarily cause massive unemployment in the short term [00:47:13]
Luisa Rodriguez: So I want to move on a bit, and get into some of the specifics about the short-ish term: so what might happen while AI systems are increasingly capable, but before we have AGI? We don’t know how long that will be. Maybe it’s just the next few years; maybe it’s five to 10 years; maybe it’s longer. But I’m interested in this partly because, one, it’s easier to make predictions in the near term and especially when the state of the world will more closely mirror things that we’ve seen before, but also just because even if the world might look really crazy once we get to AGI, I’m still personally very curious about what the transition is going to look like.
There’s a thing I’m often tempted to do, which is just imagine what the world’s going to look like when we have AGI. Like, that could be insane. But if you took that away, and assumed that AI’s abilities would be capped before AGI, things are still going to be really remarkable and different. And I feel like it’s easy to ignore that just very soon, those things will start to impact our daily lives. I mean, they clearly already have, but probably more and more. And so I want to spend some time focusing on that.
Michael Webb: First, I completely agree. I think if we stopped all development of bigger language models today — so GPT-4 and Claude and whatever are the last things of that size that we train, and we allow lots more iteration on things of that size and all kinds of fine-tuning, but nothing bigger than that, no bigger advancements — just what we have today is enough to power 20 or 30 years of incredible economic growth. It certainly, you know, feels as big as the internet already, quite easily.
Luisa Rodriguez: Cool. OK, that actually is a really helpful comparison.
Michael Webb: So we’re already in an exciting place.
Luisa Rodriguez: Great. I’m glad we’re on the same page about that. And actually, I wasn’t totally sure we would be, and I trust your judgement on it more than mine, since you’re the labour economist.
I guess to get really concrete about what that’s going to look like, it seems intuitively obvious to lots of people that if a technology like this suddenly comes along and it can do lots of jobs better than humans can, then the people who had those jobs are going to find themselves unemployed very quickly. I think actually that story is maybe wrong, despite the fact that it’s kind of a natural thing to think. I wanted you to tell me why that’s wrong, or at least why it might be.
Michael Webb: So I think there are lots of different ways that it can be wrong. And those ways are all really interesting, so we should spend some time talking about them in a bit of detail.
So there’s a few buckets of explanation or conceptual points to make here. The first one is around this idea that if some jobs get replaced by technology, then the people who did those jobs are just thrown on the human slag heap, and there’s nothing for them to do. And we’re all immiserated, and isn’t that terrible, and we need to urgently move to a UBI system or whatever it is.
But here is the story as to why that is not necessarily true — and why, in pretty much the entire economic history of humanity so far, it hasn’t happened. Why is this? Let’s take the following scenario. Right now, we all spend a decent chunk of our income on food. Think about food that you go and buy in the grocery store. Maybe every week you go and do your weekly shop, and maybe you spend £50, maybe you spend about £100, who knows. And so over the course of the month, maybe you’re spending £300 or £400 on groceries. Let’s say it’s £400.
Now suddenly suppose that some amazing automation technology comes along. And all those people who were driving tractors and managing farms and driving lorries that move things around the country and around the world, and all the people working in food processing plants, and all those things, they’re all fully automated just tomorrow. It’s amazing. And those jobs have all gone; it’s all done by these incredibly much cheaper robots and algorithms and who knows what. So there’s all kinds of clever vertical farming going on. You name it. Right?
And suddenly, that means that the actual cost of producing this stuff collapses — you know, before, you were paying all these salaries to humans, and now the LLMs are doing it for free, or virtually for free — so your £400 monthly shop has maybe now gone down to £50. This is obviously a silly accelerated example, but suppose that’s the case. You do a huge automation, and now everyone in the country who was spending £400 is spending £50.
So what does this mean? On the one side, we’ve got a bunch of people who now don’t have jobs, apparently: all the farmers and the tractor drivers and the food processing workers and so on. And also, though, every single person in the economy has now got an extra £350 per month in their pocket. Their jobs haven’t changed, right? Everyone else in the economy is doing the same as before; the only thing that’s changed is now food is cheaper.
So you’ve all now got an extra £350 per month in your pocket. So what do you do? You tell me: What would you do?
Luisa Rodriguez: I want more other things. I probably don’t want more food, because probably I’m eating enough food, but I probably want to spend that £350 on, I don’t know, a nicer flat. Or actually, maybe I want to donate. That’d probably be good. But still, I probably want things that other people could create for me with that additional money.
Michael Webb: Exactly. So let’s just go through those examples. You said probably not food. I actually suspect that you might actually want more food, but you’d go for fancier food. So you would now go to restaurants much more.
Luisa Rodriguez: Sure. Yeah. That sounds right.
Michael Webb: And at the restaurant, what you’re paying for is the fact that there’s a wonderful chef, who is from a certain town in Italy, who hand rolls the pasta every morning. So you’re paying for a much more human-intensive form of food than what you’re getting now in Tesco or whatever, right?
Or maybe you are saying, “You know what? I’m going to work on my mental health. I was just paying £10 a month for a Headspace subscription, but now, with an extra £350 a month, I can afford a therapist.”
Luisa Rodriguez: Two therapists!
Michael Webb: Two therapists. And so, again, that’s now extra demand for new jobs that didn’t exist before.
Now, a couple of other things are a bit different. So you mentioned, funny enough, you said, “I might get a better flat.” So that’s really interesting, because flats are these inanimate objects, right? There’s no labour involved in a flat continuing to exist; they just sort of exist, just sort of sit there. And we won’t get into the housing theory of everything on this podcast, but there’s a whole bunch of important things to say about the way that land and housing is this really important weird thing in the economy that explains lots of other things.
And the way that’s coming up right now is: if everyone suddenly starts saying, “I’ve got my £350, and I’m going to spend it on a better flat” — if everyone thinks that, then all that happens is the price of flats goes up. You’re in the same flat; you’re just now paying more for it. So what’s happened? Where does that money actually go? Well, it goes to your landlord or your landlady. What’s happened is that there’s basically been a wealth transfer, a resource transfer, to people who own these assets. So there’s been what economists would call “asset price inflation.” And these landlords and landladies are now the ones who are spending the extra money; they want to go to fancy restaurants or whatever it is. But nevertheless, it’s still the case that someone is spending the money somewhere.
The final thing you might do, by the way, if you have an extra £350 per month, depending on how old you are and your preferences over leisure and labour and so on, is that you might say, “You know what? I’m going to work fewer hours.”
Luisa Rodriguez: Yeah. Sounds great.
Michael Webb: I know. I’d rather have the extra day off a week or whatever it is. And so if you’re now saying you’re going to work fewer hours, and if everyone does that, then suddenly, that extra day that you were working, someone has to work that day now. And so, again, you’ve created extra demand for labour.
And so this is happening everywhere in the economy in this automation scenario. So what’s happened is this particular bit of the economy has been automated — wonderfully more productive, amazing — but also these jobs have disappeared. But what cannot help but happen at the same time, like an iron law of economics, is that the money everyone’s making has to go somewhere. And basically, empirically, historically, where it ends up going is towards goods and services that are more human-labour intensive. So you’re actually creating more jobs directly. It’s this kind of feedback loop in the economy that is a really important first-order thing that happens when you have these kinds of productivity increases.
And because humans (and indeed houses, to your point about flats) are the things that are scarce — there are unlimited quantities of the LLMs; we’re the things that are scarce — the law of supply and demand says the price of the thing in limited supply goes up. And so that’s how suddenly all of our wages go up a bit and everyone’s got more stuff.
So that’s like the good story, right? And that’s broadly what has happened over the last 200 years. That’s why we are all in these weird jobs that did not exist 200 years ago, and we’re all much healthier and hopefully happier.
Luisa Rodriguez: Right. I don’t think I fully internalised the extent to which this does just happen constantly, and has happened over the past few centuries. And obviously, we’re not doing the same jobs as people were doing 200 years ago. For example, a fact that I read was something like 60% of the jobs that exist on the US labour census today didn’t exist something like 60 or 80 years ago — the exact numbers are probably wrong, but it’s something like that. And that’s pointing at this general thing happening loads, in a way that is just really not obvious to me unless I think about it.
I guess this AI case seems different, in that it might happen even faster, and the amount of automation that’s possible might be more extreme. But at least for now — when we’re talking about how it can automate some tasks, but not the vast majority of them — it does seem like if you look back historically, the kind of thing that happens is this shift to an economy where people are doing different sorts of jobs over time, and not one where there’s mass unemployment. Am I getting that picture kind of right?
Michael Webb: Exactly. Yeah. And the reason that happens is because, as things get automated, the amount of goods being produced in the economy is going up almost by definition: if it weren’t going up, why would you automate? And so you’ve got goods going up, and demand for human labour going up too, because people can afford to spend. And so that labour ends up getting allocated to ever more productive or useful uses that serve human needs.
And to your point around this happening for a long time, my favourite example is, I guess I started with agriculture and food, so let’s think about agriculture, a US example. In 1790, in America, 90% of jobs in the economy were on farms. Today, it’s 1.7%. And it’s amazing if you look at the graph of this: on the x-axis you’ve got time, and the y-axis, you’ve got percent of jobs that are on farms. It’s surprisingly linear: it’s a straight line from 90% in 1790 to close to, I don’t know, 2% or 3% in about 1960. And then it can’t go much below that from that point to today. But it’s very, very linear. And very steady and very slow.
Luisa Rodriguez: Yeah, I think in my head it was something like: it decreased a lot at once, as a couple of important new technologies were introduced. That automated a bunch of tasks away, and then those jobs stopped really existing, and those people kind of moved on to other things slowly. But if it’s linear, I don’t know. It makes me wonder if the whole process is actually just smoother than I’d imagined.
Luisa Rodriguez: So you said there were a couple of things that can happen that mean we don’t necessarily get mass unemployment. And it sounds like this is one: the economy shifting in response to these changes in the availability of certain kinds of labour. What are other kinds of things that can mean that you don’t get this extreme unemployment effect?
Michael Webb: Great. So that first thing was very macro: this sector of the economy has no humans anymore, and those humans all end up doing other things in other sectors of the economy, and possibly new ones as well.
A second thing is: Let’s just look at this one sector that’s getting automated, and think about whether it really is the case that when you have big automation in the sector, the number of humans goes down. That’s intuitive, right? Automation means fewer humans. Done. Turns out, it’s not that simple. So there’s a few examples I’ll start with, and we can talk about what the broader lesson is.
So here’s one example. I think this is due to Jim Bessen, an economist who studied ATMs: cash machines, where you go to a bank branch and get cash out. So before ATMs, there were individual humans in the bank. You’d go up to them, show some ID and give your account details, and they would give you some cash. Bank tellers, I think they were called. And you would think: ATM comes along, that’s it for those people: no more bank tellers, huge declines in employment in the banking sector.
What in fact happened is something quite different. The ATM did indeed reduce the number of people doing that specific task of handing out money. But there are other things people do in bank branches as well. The big thing that happened is that because a given bank branch no longer needed all these very expensive humans doing the cash-handing-out, it became much cheaper to open bank branches. So whereas before there were only bank branches in perhaps the larger towns, suddenly banks were competing to open branches everywhere — because the more you could go into smaller and smaller towns and villages, the more customers you could reach and provide a service to, and so on.
So what happened was the ATM meant there were fewer staff per bank branch, but enabled the opening of many more bank branches overall. And that actually offset the first impact. So fewer staff per bank branch, but so many more bank branches that the total number of people in bank branches actually went up.
What they were doing was quite different. The humans now are doing more higher-value-add activities. They’re not handing out cash. They are doing other kinds of services, but you know, similar people doing a similarish job, and there’s actually more of them now.
The fancy economist way of putting this is: you have demand elasticity in the presence of complementarity. Those are crazy silly words, but I’ll tell you what they mean. “Demand elasticity” means that when you reduce the price of something, people actually want more of it. So automation generally brings the cost of things down. But what normally happens is, one doesn’t say, “Great, I’ll have the same amount of stuff.” They say, “No, I want more of that stuff now. Give me more, more, more.”
Then “in the presence of complementarity”: “complementary” means that humans are complementary to the automation, the technology, whatever it is, in some way — there’s still some humans involved, fewer than before per unit of output, but still some. Then, because people now want more and more of this stuff, and each unit of the thing is more automated but still involves some humans, you can end up with ever more humans in total in demand, doing slightly different things, but still roughly in the same ballpark. Does that make sense?
Luisa Rodriguez: That makes perfect sense.
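As a toy numerical version of the ATM story (all figures are invented to make the arithmetic concrete; they are not Bessen’s actual data):

```python
# Toy numbers for demand elasticity in the presence of complementarity.
branches_before = 100
staff_per_branch_before = 20
total_before = branches_before * staff_per_branch_before  # 2,000 staff

# After automation: fewer humans per branch (complementarity means some
# remain), but branches are now cheaper to run, so many more get opened
# (elastic demand for branch services).
branches_after = 300
staff_per_branch_after = 10
total_after = branches_after * staff_per_branch_after  # 3,000 staff

print(f"before: {total_before}, after: {total_after}")
# Total employment rises even though each branch employs half as many people.
```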
Michael Webb: Here’s another historical, but somewhat more recent, example, which I think illustrates this point really interestingly.
So let’s think about computers. Think about computers in the 1980s. So in the 1980s, as in many points in history, Britain was really worried about its productivity. And it was particularly worried about its productivity compared to other countries. At some point in the 1980s, it was very worried about it compared to Germany, because Germany has way better manufacturing and so on. And we were like, “Why are we so rubbish at this?”
And so a series of studies was commissioned from NIESR, the National Institute of Economic and Social Research. They did a bunch of really fun studies where they sent a team to go and look at exactly what was happening inside a set of matched plants. So you’d find, say, a furniture manufacturing company in Germany, and one making the exact same or a similar product in the UK, and you’d go and visit both of them and ask: What is the same? What is different about these two plants? And they did that for a bunch of different areas.
Basically, it turned out that in every industry they looked into in manufacturing, Germany was more efficient. And they were like, well, let’s try hotels; maybe in the service industry, Britain is better. But in hotels, the same thing was true: Germany had much better hotels.
And there’s an almost throwaway anecdote from this study, about hotels in Germany in the 1980s, that I think is really interesting. They were looking at the use of technology in lots of different areas throughout these studies, and in particular at the use of computers in hotels. And they had a very striking conclusion, which was that in the UK, hotel managers saw computers as a way to keep the quality of service the same, but be able to employ a less skilled person to provide that service. So before, you had to be able to read and write really carefully and manage records and whatever; now all you have to do is press a couple of buttons on a computer. So you’re basically reducing the cost of the labour. In Germany, however, they were like: “No, no, no. We can use computers to have the same humans, but now providing a much higher quality of service.” So you generally always have these options as a person running a company, and which you choose will depend on all kinds of interesting strategic factors.
So thinking ahead now about automation, I’ll give you two contrasting examples. Suppose that GPT-4 or whatever makes salespeople much more productive. So one salesperson would bring in £10,000 of sales, and now they bring in £50,000 of sales. If you’re the CEO of this company, are you like, “Great, I can keep everything the same and cut my salesforce by five times”? Or do you say, “I’m going to 5x my revenue”? I’m pretty sure most people would choose to 5x their revenue. So that’s the case where the elasticity works such that you’re like, “Great. Give me more. It’s cheaper, so give me more.”
However, take another example. Suppose you are one of those private-equity-owned care homes. They buy a care home and try to cut costs and make it more efficient from a capital point of view — perhaps with not entirely full regard for some of the elderly people living in these homes. OK, suppose that as of now, GPT-4 can in some way make nurses twice as productive. Maybe it does all of the prescribing and all the admin-type tasks that nurses have to do. What does the manager of the care home do? Do they say, “Great, I can now keep all the nurses, and they can spend much more time with patients”? Or do they say, “Great, I can fire half the nurses and provide the same quality of care”? Probably at least some care homes are going to do the second thing.
And so I guess the broader point here is it can go either way. And generally, you’ll have some companies that choose to prioritise quality and compete on quality, and others that choose to prioritise cost and compete on lowering costs. And both of those things can happen at the same time, in the same industry, with different companies.
Luisa Rodriguez: Yeah. That makes tonnes of sense. Really good examples.
How automation affects employment at the individual level [01:07:23]
Luisa Rodriguez: Was there another kind of effect that’s not just unemployment that you wanted to talk about?
Michael Webb: So the final thing I think it’s really interesting to think about, and it’s often not intuitive, is the impacts on individuals. We’ve accepted that there definitely could be some individuals whose jobs existed, and then they don’t — they disappear because they’re automated. Nothing I’ve said so far says that doesn’t happen; it certainly happens a tonne. And I’ve given you some examples of why perhaps we shouldn’t worry so much about it, because there’s more demand in other parts of the economy, whatever.
But what does that look like for the actual person experiencing it? And is it good or bad? And when is it good or bad? So there’s a couple of really interesting facts about the way things kind of work in the economy that I think are worth touching on briefly.
The first one is that there is this not-very-nice term, but it has a benign consequence: the term is “natural wastage.” Say you are a company, and you’re hiring people. Let’s say you’re McDonald’s. People start working for you, and six months later, they leave and go and get a better job. So half of people leave within six months, whatever it is: that’s called natural wastage, people naturally leaving. And you would include people retiring and so on as part of that natural churn. It means there’s a very natural attrition happening in all companies all the time.
Let’s take McDonald’s as the example. Suppose McDonald’s somehow automated everything, like the burger flipping and the cashiers. They’ve been trying for a long time, right? That’s slowly happening, but there are still some humans there right now. Suppose they did it. All they would have to do is stop hiring any new people, and within a year, they would just have no employees, because everyone naturally leaves and goes and gets a better job anyway. That generally is what happens; the average tenure at McDonald’s is like six months. So you just sit and wait, and everyone goes off of their own accord — no firing required, no displacement required.
So you see this happening at a much more macro level in a really interesting way. One study I did a while ago looked at manufacturing workers in the UK between about 1919 and 2010, and at what happened when there was a huge reduction in manufacturing employment in the UK, as a result of a few different factors. The biggest ones are trade — so like, China producing the same stuff that we were producing, but cheaper — and then also automation; there was a lot of automation of different kinds happening as well.
And if you look at the age profiles of who was working in manufacturing — and ideally, you trace individuals over time, which I was doing a bit of in some other areas — what happened was really interesting. Here are some astonishing facts. These numbers could be slightly off; I’m remembering them, as it was a while ago, but they’re directionally correct. Compare the number of 55-year-olds working in manufacturing in 1990 with the number of 55-year-olds working in manufacturing in 2010. While the total number of people in manufacturing is going down by some enormous percentage — I forget exactly what, but it’s huge, huge declines — the number of 55-year-olds actually goes up over this time period. It increases. The number of 25-year-olds goes down 90%.
Luisa Rodriguez: Wild.
Michael Webb: Right? So you see what’s happening there?
Luisa Rodriguez: Can I actually just guess what’s happening?
Michael Webb: Yes, tell me.
Luisa Rodriguez: It’s something like: people that are kind of young and deciding what field to enter are like, “I’m not going to enter that one. That’s a dying industry. There’s a bunch of manufacturing, China’s doing it. That seems dumb. I’m going to go into something else.” And then older people that were already in the industry have got loads of career capital there. They’re good at their jobs, and they keep doing them. And they end up being a larger share of the remaining workers in that field as they get older. So fewer people enter, but the ones already in it just stay, and do a bigger proportion of the jobs available. Is it something like that?
Michael Webb: That is exactly it. Exactly that. And it makes a tonne of sense, right? Because if you are the mastermind organising the economy, and allocating people to different jobs — obviously, that’s not what’s happening — but if you are the mastermind, it would naturally be the right thing to say that people who have got all the human capital, and they’ve worked in the industry, and they’re going to find it really hard to move: let them keep the jobs. And then the young people, they shouldn’t get into it because that’s a bad bet for the long run; they should do something else. And people make those decisions for themselves, and that’s what happens. So you have these really interesting effects of that kind.
The same thing, by the way, happened with farm jobs in the US, which we talked about a little earlier. One thing I’ve done is plot it. It’s amazing: you can link people over time in the census. Every 10 years there’s a census, where they count every person in the entire economy. This goes back hundreds of years, and you can link people over time. And you can also see their occupation, so you can see who is staying a farmer and who’s leaving, right? So over the more than 200 years where the percent of people working on farms goes from 90% to, whatever, 1%, for every single cohort of people — you know, people who are born and start working — the percent of that cohort working in farming only ever goes up over this entire time period. People start working in farming, and they live out their careers there, and they’re happy. So fewer people start every single decade, but the people who are in it stay in it — and this plays out over this huge, long macro timescale.
And so to bring it into the future, or think about the future, who today is thinking, “I think being a taxi driver is going to be a great plan in a world of self-driving cars”? Probably fewer people today are thinking that than 20 years ago. Or it was all over the press five years ago about radiologists being automated. Who’s thinking, “For my medical specialty, I want to be a radiologist”?
Now, the really cool, ironic, and interesting thing about this is that it can have a really funny short-term effect: the technology is promised, and everyone updates to say these jobs aren’t going to be there anymore — but it takes longer than expected. And so people stop moving into the field, but it hasn’t been automated yet, which means that actually wages go up for that field.
Luisa Rodriguez: There ends up being a shortage.
Michael Webb: And right now, I believe, last time I looked into it, there’s a shortage of radiologists. I’m sure there are many reasons for it, but one very interesting story you could tell is that people are forward looking: Why should I go into this? But it hasn’t actually been automated yet, so there’s a shortage of humans.
So that’s the big macro thing: older people will stay in, and younger people will move into different things. And that’s by far the most important individual-level effect.
So those are two very rosy pictures I just painted for you. So the final thing on this point, the third point, is a less rosy one.
Luisa Rodriguez: That was sounding a bit too good to be true.
Michael Webb: Yeah. So most of what I just said is true. This is what happens. This is the majority of what happens in the economy. It’s why people generally have jobs and they get paid more over time in their work.
Now, where does that go wrong? It generally goes wrong in a couple of circumstances. Namely, it’s heavily shaped by geography. So here’s what we know, in terms of where you can go and see people who have actually really been hurt by an automation technology coming along, or another kind of shock. Trade is a big example: China comes along and suddenly makes things cheaper. If you are a young person in a big city and you were doing some job, and that job goes away, whatever.
Luisa Rodriguez: Loads of other jobs.
Michael Webb: Fine. Loads of other options. Go and do something else. If you are an older person who’s been at a particular firm for a very long time in a town where that firm is the only large employer, and there was no other industry in that town — and also you’ve got this amazing union job: your wages are really high because of decades of strong worker empowerment and so on — and then that company goes away from that town, that is not a good place to be. Because empirically, people turn out to be stuck in their towns, right? They just don’t like moving. And if you’re in your 40s, 50s, got a family…
Luisa Rodriguez: Kids are in school, own your house…
Michael Webb: Yeah. Your house is now worth nothing, because there are no jobs anymore. So you can’t sell it; you can’t sell up and move somewhere else. You can’t sell your cheap house and move to a much more expensive house in a city somewhere else. Your kids are in school, as you’re saying, et cetera. So people get stuck, and there is no job of any comparable quality that they can do.

So on average, when you have these big plant closures, people do tend to go and get other jobs, but they often experience big wage declines — like a 25% enduring wage decline. That’s not nice. That’s a really horrible thing to happen to someone. And it happens to large numbers of people at the same time, in these geographically concentrated ways. That’s where things get bad. So if you’re young in a city, you’re kind of fine. If you’re older, mid-career, in a small town with a single employer, and that’s the thing that gets automated: that’s when things look much less rosy.
Luisa Rodriguez: That can be terrible. Yeah. And I do feel like those stories come to mind for me more than the other ones — I guess just because they’re the ones that make the news.
Michael Webb: “Person leaves job, gets other job.” No headline there, right?
Luisa Rodriguez: Yeah. Exactly.
Michael Webb: Yeah. By the way, here’s my natural optimism coming through quickly on this point: thinking about large language models in particular, what they automate is generally cognitive tasks. And cognitive tasks — I’ve not actually looked into this carefully, but my strong hunch is — tend to be clustered in cities. You do find examples like a call centre placed in some random middle-of-nowhere town, because there’s lots of cheap labour and not many other options there. But that feels a bit less common than the previous cases, where it was a particular factory making something in this one village, or a coal mine that was there because that’s where the coal was. I think today there’s ever more urban agglomeration and so on.
Luisa Rodriguez: That makes sense. So that’s some foreshadowing that AI’s effects might look different from those of technologies that were worse for people who couldn’t move.
How long it took other technologies to have economy-wide effects [01:18:45]
Luisa Rodriguez: Yeah. How quickly do these adjustments typically happen? Because I’m wondering if one of the reasons it’s so smooth in many of these cases is that the technology gets rolled out slowly. Whereas in the case of AI, the rate at which it might automate different tasks seems pretty quick. Plus AI seems to me like the kind of technology that doesn’t require a bunch of physical infrastructure to be built, the way banks and ATMs did. Does that mean these adjustments, where people go find other jobs, just won’t happen quickly enough to keep pace?
Michael Webb: Awesome. I think maybe the right way to approach this question is to start by very quickly talking about the baseline — how long these things took for other technologies that were as big as AI seems like it will be — and then talk about why AI might be different, and what will be the same.
So the two big examples that are everyone’s favourites are IT — computers in general — and electricity. These are probably the two biggest general-purpose technologies of certainly the last 150 years.
So how long did they take? Well, there’s an astonishing regularity in how long these things took. You can date the arrival of electrification to 1894, which is the date economists who study this tend to use — I think it’s a couple of years after the first proper power station was built — and date IT to 1971. I’m not sure why economists use that date [1971 is the year Intel released the 4004, the first commercial microprocessor]. Anyway, those are the dates people seem to use in economics.
And if you plot years since the technology’s arrival on the x-axis, and percent of adoption on the y-axis — 0% is no one has it; 100% is everyone has it — it turns out those two lines sit exactly on top of each other. So IT diffused basically as fast as [electricity].
So surprising point number one is that these two things, arriving almost 100 years apart, took just about as long as each other to diffuse — even though you might expect things to move faster later in history. And the second interesting fact is that it took a long time: 30 years to get to 50% adoption.
Luisa Rodriguez: Yeah, that is much slower than I would have guessed.
Michael Webb: Yeah. These things just move really, really slowly. And this is true both for households adopting — getting access to — these technologies, and for industry.
And so we could tell you all kinds of interesting things about how long it took. One final quick fact: think about all the technology and capital in the economy — take the US, and count every bit of factory equipment, every computer, everything you might broadly think of as capital equipment. In 1970, close enough to 0% of that capital stock consisted of computer hardware and software. By 1990, it had only got to about 2%. And by 2000, it had reached 8%. So the real inflection is around 1995, if you look at the graph.
The point is, there were two and a half decades of actually very slow [growth]. Everyone thought, “This is it. We’re here: IT era. Go!” And 25 years later, nothing to see. Only after 30 years do you see a real increase — and even then, in 2000, only 8% of the capital stock consisted of computer software and equipment.
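For intuition, here’s a minimal sketch of a diffusion curve with the shape Michael describes: a logistic calibrated so that the halfway point lands 30 years after the technology’s arrival. The growth rate is an illustrative guess, not an estimate from the data.

```python
import math

def adoption(t, midpoint=30, rate=0.15):
    """Fraction of eventual adopters who have adopted by year t (logistic)."""
    return 1 / (1 + math.exp(-rate * (t - midpoint)))

for t in (0, 10, 20, 30, 40, 50):
    print(f"year {t:2d} after arrival: {adoption(t):5.1%} adopted")
# 50% adoption arrives only at year 30, matching the IT/electricity pattern.
```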
Luisa Rodriguez: Yeah. And was most of what was happening in that early period the technology improving? Or was it the technology being incorporated into the world — the world catching up in various different ways — that took that long?
Michael Webb: Very much both. Compare the technology of the ’70s with that of the 1990s: IT was getting ever more user friendly and ever cheaper. Moore’s law was operating all through this time: you wait a few years, and it gets twice as fast and half as expensive. So people would often wait a long time for the point where it was actually worth adopting.
And it takes a long time for companies to adjust all their operations to make good use of this stuff, right? And we’ll say more about that in a second when we think about LLMs.
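A quick back-of-the-envelope on why waiting was rational, assuming (illustratively) that price-performance doubles every two years:

```python
# Cost of a fixed amount of computing if it halves every two years.
for years_waited in (0, 2, 4, 10, 20):
    relative_cost = 0.5 ** (years_waited / 2)
    print(f"wait {years_waited:2d} years: pay {relative_cost:.3f}x today's price")
# Waiting a decade cuts the price to ~3%, so early adoption is expensive,
# and much of the "slow" diffusion is just sensible delay.
```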
Another example, actually, that’s interesting is the automation of the telephone system. Do you remember seeing those photographs of (generally) women sitting in front of huge panels, plugging calls through between different numbers? The automated version of that was invented in 1892. However, the number of human manual operators peaked in 1920 — 30 years later. At which point AT&T is the monopoly provider, and the largest single employer in America — 30 years after they’d invented the complete automation of the very thing they were employing all these people to do. And the last manual switcher doesn’t lose their job, as it were — that job doesn’t stop existing — until, I think, around 1980.
So it takes 90 years from the invention of full automation to its full adoption — and that’s within a single company that’s a monopoly provider and can do what it wants, basically. So the question you might ask is: why?
It’s to do with a few things. One is that when you start using humans in a system, you build everything else around the humans. The humans are generally doing a bundle of different tasks — the switching is the most important one, but it’s one among many. So to automate, you end up having to redo a tonne of other corporate processes too, and it takes a long time to unwind yourself from a world where everything routes through this human, because there’s much more happening than just a switch being flipped.
And then the second thing is that — both of these are very generally applicable — it costs money to switch from a manual exchange to an automatic one. And you know, money isn’t free. If you’re a company, you are going to make investments that make economic sense and not do the ones that don’t make economic sense.
So the way it worked in the case of AT&T was: there’s a fixed cost to automating any particular telephone exchange, and the exchanges are physically located in different places. The exchange in a city will have hundreds of thousands of wires coming into it, so by switching that one to automated, you save loads of humans. Whereas the exchanges out in rural areas might each only have one human: you don’t save much by switching, but the cost of changing all the equipment is still really high. There’s a huge fixed cost, so you don’t bother doing it until you really, really have to.
If you look at the history of AT&T, they started by automating the big cities, and the very last exchange to be switched over from human to automated was, I think, on some island somewhere with a tiny population. It was just the last one that was worth doing.
Luisa Rodriguez: That makes sense.
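As a minimal sketch of that exchange-by-exchange decision, with every number invented purely for illustration:

```python
def worth_automating(operators_replaced, annual_wage=1_500,
                     horizon_years=10, fixed_cost=100_000):
    """Automate only if wages saved over the horizon beat the fixed cost."""
    return operators_replaced * annual_wage * horizon_years > fixed_cost

print(worth_automating(200))  # big-city exchange: True, automate early
print(worth_automating(1))    # one-operator rural exchange: False, wait
```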
Michael Webb: This is a good, interesting segue into why large language models and AI in general might be the same or might be different.
Ways LLMs might be different from previous technologies [01:27:02]
Luisa Rodriguez: Great. Yeah, let’s start with the ways they might be different, and we’ll come back to the ways in which they’re actually similar.
Michael Webb: So how might this new breed of language models, and AI in general, be different in terms of how long it takes the economy to adopt them, and what that means for the economy and people’s jobs? I think the first really interesting thing is that these large language models are general purpose from day one. You train GPT-3, GPT-4, Claude, whatever — and out of the box, it can do marketing, it can write emails, it can do science. It can do everything, right?
Luisa Rodriguez: Coding.
Michael Webb: Coding. 100%. And generally in history, like when the PC first came into existence, it could do one killer app. Do you know what the killer app was for the PC? Because it’s a fun bit of trivia.
Luisa Rodriguez: I don’t. And I’m actually surprised you used this example, because I would have thought they were more general purpose. But what was the killer app?
Michael Webb: So they are general purpose in the sense that, in theory, they can do many things. But when they’re first created by their “creators,” there are only going to be so many apps, right? Think about the iPhone when it first shipped in 2007: there was no App Store. It had a calculator, it had visual voicemail or whatever, and it had the internet — that was a big deal, I guess — but it couldn’t do much else. It was only a few years later, when there were thousands, millions of apps on the App Store and it could do literally anything you could think of, that it became a very general-purpose thing. The iPhone itself hadn’t changed very much: what had changed was the number of applications built on top of it.
And the same thing was true of the PC. When it was first “invented,” the PC was this huge box, and it was very expensive. Why would you want one? What would you do with it? There was no internet back then, in the ’70s. And so the first killer app, actually, was VisiCalc, which was the first spreadsheet app, as far as I know.
Remember, the typewriter already existed, and a word processor isn’t that much of a step up from a typewriter — so people needed a reason to spend lots of money on this much bigger, fancier, cooler, newer thing. In 1979, VisiCalc was launched for the Apple II, one of the very early Apple computers, and that was the reason a bunch of people first bought computers: to use VisiCalc. And it took an absolute tonne of time before everyone wanted one — before you could watch movies on it, have email, whatever. It takes a really long time to figure out how to get the most out of the general-purposeness of these things.
Luisa Rodriguez: And I guess that’s taking years in these cases?
Michael Webb: Decades, in these cases. And before LLMs came along, AI was pretty similar: to make it useful, you had to go and collect all your training data for one very specific application — recommending YouTube videos, or text to speech.
Luisa Rodriguez: Right. Or AlphaFold protein folding. It’s just that. And it’s exceptionally good at just that, but just that.
Michael Webb: Exactly. Yes. And it takes a tonne of time, effort, risk, and money to build each single application, even though in principle the technology can do many things. What’s different about LLMs is that, because they trained them on everything, as it were — they sort of put the entire internet and who knows what else in the training data — out of the box they can do different things, and it’s much, much quicker and less expensive to get them to do new things. The limit is only your imagination of what you might ask.
Luisa Rodriguez: Right. Creativity.
Michael Webb: So that’s the first reason that this time is different, I think. It’s so much more capable and useful out of the box at so many different things.
Luisa Rodriguez: Yeah. It does seem like people are taking some time to figure out that you can plug it into all of these other apps like calculators. But it’s happening over the course of months, not decades.
Michael Webb: Yes. And the way you figure out whether it can do something is: open up GPT-4, type in the prompt, and you’ve spent 0.04 cents.
Luisa Rodriguez: Right. It’s like, I kind of wonder if you can do this thing, and it’ll just figure it out for you. And maybe the answer is no, but often it’s yes. And then it does it.
Michael Webb: And in particular, you don’t have to raise tonnes of money from investors and recruit a team and whatever — which you would have had to do to build a software application for a computer.
Luisa Rodriguez: So it’s nearly free.
Michael Webb: Yes. So it’s nearly free, and very general purpose from day one. So that’s point one.
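To illustrate how cheap that experiment is, here’s a minimal sketch using the OpenAI Python client; the model name and the task are just examples:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Trying a brand-new use case is one request and a fraction of a cent:
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Draft a polite email declining a meeting invite."}],
)
print(response.choices[0].message.content)
```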
The second thing that occurs to me: we talked just now about how, when AT&T was automating their telephone exchanges, they started with the big ones. They only got to the smaller ones later, because you pay an upfront fixed cost to make the switch, which means you really need big scale for the switch to be worth it. So in general, automation requires scale.
I think generally for LLMs, you can get real value from them with basically no scale at all. So you can be a one-person whatever, and you can start using it in your day-to-day life. You can adopt it yourself as a consumer almost, as a producer-consumer, and use it, and it’s useful straight away.
Luisa Rodriguez: And that’s basically because the fixed cost of it is already super low. You can get a subscription to GPT-4 to use as much as you want, basically, at very little cost. So you can do that and have most of the benefits. And you’re not building, I don’t know, server farms to run the thing. That’s basically done, and the only cost to you is tiny.
Michael Webb: Exactly. And indeed, one way of thinking about this is that there’s a massive fixed cost of training it to do everything — but OpenAI already paid that. They paid it once, for everyone. It cost them hundreds of millions of dollars, whatever, but it’s paid now. So the marginal cost for us is tiny, which means it can be adopted throughout the economy. And because of the first point — it being general purpose — loads of different people can adopt it basically instantly. It’s also just much cheaper for any company putting it into a production system: not trivial, but generally much easier than doing a big software upgrade or whatever.
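The arithmetic behind that point, with loudly invented numbers:

```python
training_cost = 100_000_000      # say, $100M paid once by the model's developer
queries_served = 10_000_000_000  # 10B queries over the model's lifetime

print(f"training cost per query: ${training_cost / queries_served:.4f}")
# => $0.0100 per query: the one-off fixed cost all but vanishes per use, so
# what adopters actually face is just the small marginal cost of inference.
```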
So this next point is that, normally, most kinds of automation or new technologies require huge complementary changes to be useful. The classic example is electricity: there’s a great article called “The dynamo and the computer” by Paul David which makes this point. In order for electricity to have its impact on the economy, a tonne of stuff had to happen. Obviously, you had to actually lay out a power grid — so everyone had to either get their own local generator or, ideally, connect at a much bigger scale.
Luisa Rodriguez: So that’s that big fixed cost thing?
Michael Webb: That’s a huge infrastructure fixed cost. But beyond that, think about all these factories — making, at this point, I don’t know, bicycles; who knows what they’re making. They’re all set up to run on steam power, which generally means the factory is stacked vertically, for reasons that are not worth going into, and often located in a city centre. All kinds of things about their design follow from the fact that they’re running on steam.
And once electricity comes along — and the dynamo, and the things which convert electricity into motor power — it means a few things. One, your choice of location gets much easier: you don’t have to be in the middle of the city, or near a river, or the various places steam power tied you to. Two, you no longer need this funny vertical design. And in fact —
Luisa Rodriguez: Do you mean that literally?
Michael Webb: Literally. Think about vertical farms: very thin and tall. That’s how factories were designed. In the UK, if you go to Yorkshire, you see all these very tall factory buildings, for this reason.
And so with the electric motor, suddenly you can bring the energy to the machine and put the machine wherever you want. And when you can do that, it turns out that the best design is actually a large, flat horizontal design — because you know, moving stuff up and down this vertical thing is loads of energy and really annoying. And so you want a much bigger factory that’s on a single floor, and you’re going to have a completely different layout for your machines. Your machines will need upgrading, obviously, as well, in various ways.
And this all means it just takes a tonne of time to make all these changes. In particular, when this new electricity thing comes along, you’ve got all these factories running on steam, and they still work fine, right? They’re profitable, they’re making the products, you’re selling them. So if I’ve just built a fancy factory based around steam power, and you’re telling me I could build a new factory somewhere completely different — a huge capital outlay: a brand new building in a different place, with a different design and different machines — sure, maybe I’ll do that in 20 years’ time. But I just built my steam factory, and it’s working. First I’ll get the investment payoff from that. And then, maybe, if I’m lucky, I can —
Luisa Rodriguez: Get all the value I can from it.
Michael Webb: Exactly. Then I’ll turn my attention to this other thing.
And so this is another reason you’d wait: there’s the time for power to spread around the country, insofar as you’re waiting for the grid; there’s the time for your existing steam investments to pay off — which might be 20 years, who knows; and then you generally wait for everyone else to try it first, so you know it works before you try it yourself.
Luisa Rodriguez: And this doesn’t seem like the case for AI. If GPT-5 comes out, there’s no reason for me to keep using GPT-4. I’ll just switch. There’s no cost.
Michael Webb: Yes. So within the vintages of GPT, or of AI, the costs of switching are definitely relatively limited. They’re not zero: your prompts will have been carefully designed and tuned, so if you’ve got a whole system running on a set of prompts, you can’t just upgrade everything in one go — you have to test each piece and make sure it’s still working the [right] way.
But I think there’s a more interesting type of upgrade, which is from not using any GPT to using it for the first time. So you’re going from, you don’t have LLMs in your company or startup or whatever it is, to suddenly you’re now using them a lot. We’ve talked about how for the other technologies, it’s a big deal to switch from steam to electricity.
So a quick story: I once talked to someone who was some kind of fancy senior person at Deloitte, or KPMG, or one of those Big Four accounting firms. And he was talking about one thing these companies — or the IT consulting ones — do, which is help companies with big IT transformations: moving to the cloud, upgrading software, whatever it is. Many companies, particularly the big enterprises of this world, are deeply embedded in software that goes under the name “ERP” — enterprise resource planning. Oracle is one of the big providers: all your finances, the whole company, runs on this single big piece of software.
And the story this person told me was that this company itself had recently tried to — I’m going to make up the details — upgrade from Oracle version 13.1 to Oracle version 13.3. And the thing I remember him saying is that this upgrade — which failed, or was botched — cost every partner in the firm a Ferrari. That puts it in stark terms, right? Which is to say: it’s really, really expensive to change software in companies and enterprises — partly because the people designing the software deliberately build it so it’s hard to switch away. “Switching costs” is the term people use in the SaaS industry: you make it harder for people to switch away from your software by embedding yourself.
Luisa Rodriguez: And that’s switching from one company’s software to another’s?
Michael Webb: Yeah. And my favourite example is an Australian bank. Banks are often very old companies, and they adopted IT fairly early. So they have these mainframe computers that everything still runs on, from the ’70s or who knows when — often involving programming languages for bank business logic that were big in the ’70s but that hardly anyone knows anymore. COBOL is the classic example: millions of lines of business-critical code at many enterprises are written in COBOL, and almost no one knows COBOL anymore. People can learn it, but you can’t easily hire a COBOL engineer. So switching from the big bank mainframe to the whizzy fancy new thing in the cloud, or whatever, is a massive nightmare.
So there’s an Australian bank — I think the story I read was that the IT upgrade cost them more than a billion dollars. Which sounds insane, but that is how much these things cost, and that is why it takes 50 years for IT to be adopted. You have to retrain your entire workforce; you have to rewrite millions of lines of legacy code. And if it goes wrong, it’s a complete catastrophe for your company: your customers’ money suddenly disappears from their bank accounts, or they can’t access it, whatever it is. The stakes are really high. So the costs go up and up and up and up in making these kinds of changes.
So what’s different now? Well, LLMs speak every coding language, right? Imagine you’re that Australian bank, thinking, “Oh my god, I’ve got to pay like a million dollars a day to hire these fancy COBOL coders to translate all of my old business logic from COBOL to whatever the new language is.” That cost has now largely disappeared: GPT-4 speaks these languages out of the box, so it can just get rid of all this friction.
It’s amazing. I don’t think people have had enough experience with this yet to start seeing it at a meaningful scale. But it represents a huge opportunity for any startup building SaaS software which before would have found it impossible to persuade anyone to switch, because switching would have been too expensive. Man, it’s going to be way cheaper now.
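A hedged sketch of what that migration loop might look like in practice; the model name, prompt, and COBOL fragment are all illustrative, and any real output would still need human review and tests:

```python
from openai import OpenAI

client = OpenAI()

cobol_routine = """\
       IDENTIFICATION DIVISION.
       PROGRAM-ID. ADD-INTEREST.
      * (an illustrative fragment of legacy business logic)
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Translate this COBOL routine into idiomatic Python, "
                          "preserving its behaviour exactly:\n\n" + cobol_routine}],
)
print(response.choices[0].message.content)  # a draft translation to review
```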
And in terms of retraining, it’s also now much easier and faster to use LLMs to help you build, for example, interfaces that look just like the old one, even though your new thing is under the hood. It’s very cheap to build all kinds of new bits and bobs and customisations, because LLMs make that basically free. That wasn’t possible before, because programmers’ time was too expensive; now everyone can make their own interface or whatever — it becomes so much easier. So that, I think, is a huge reason why LLMs will be adopted faster themselves, and will also cause faster adoption of everything else.
Luisa Rodriguez: Of other technologies. Right. Cool. So part of it is just that they speak the languages we speak, and so learning how to incorporate Claude or whatever into your workflow is much easier than learning Excel for the first time or something. But also, it can help you engage with Excel better, so you don’t have to learn Excel. Excel is maybe the wrong example.
Michael Webb: Well, no — seriously, take Excel. Suppose some fancy new version of Excel comes along, and you’re a big company — a massive bank with all these analysts working 12-hour days in their fancy Excel spreadsheets, with custom VBA macros written to do certain calculations. Suddenly, all those custom VBA macros can be converted to whatever the new thing is instantly, as opposed to taking months and months and years and years. So Excel is actually a very good example.
Luisa Rodriguez: Great. OK, so there’s two kinds of effects, and it sounds like both of those create, again, disanalogies between AI and previous technologies.
Michael Webb: Exactly.
Luisa Rodriguez: Are there other disanalogies?
Michael Webb: There’s one quick point I want to mention. The pattern is that you often start by automating very tiny things — a very specific process that has some inputs and an output. You think, “Great: I can keep everything else the same and just change that process.” But the real benefits of almost any new technology generally come when you take a step back: “You know what? Now that this thing exists, I can rearrange everything.”
So back to the example of steam to electricity: once you’ve completely rearranged the factory floor, not only do you no longer have to carry things upstairs and downstairs all the time, but you can do loads more. You can suddenly have assembly lines — they become possible. That’s a huge productivity improvement.
An example that Erik Brynjolfsson — an economist who thinks a lot about these things — likes to use is the analogue watch versus the digital one. When we “automate” the analogue watch, what we do not do is take each individual cog and replace it with a digital cog. No: you completely reimagine the whole thing from the ground up, and it looks completely different. So the question is: what does a business or economic activity look like once it has been reimagined from the ground up to take account of what LLMs can do?
One big way I feel LLMs are different from what came before is that they make it possible to take humans out of the loop in very interesting ways — or at least to stop humans being a certain kind of informational bottleneck. Think about the CEO of a company. Right now, they get information in one of two ways. Bucket 1: they go onto the factory floor or the retail floor, or sit with a customer — whatever the company does — and get some anecdata by randomly sampling a bit of the real thing.
Bucket 2 is they can get these reports written for them by endless hierarchies from the bottom level to the top level. And every time the report goes up from the sub-sub-sub-manager to the sub-sub-manager, information is lost and things are summarised and they’re made to look better than they really are, and things are hidden, and you can’t spot patterns, or whatever it is.
So now, once a business is running entirely, as it were, with LLMs involved, and all information is in the cloud and so on, just imagine what if the CEO had the personal experience of all the information from every customer service agent, every salesperson, every machine on the factory floor. Suddenly, all kinds of innovations and more sophisticated strategic thinking, and better operational management, all these things become possible.
When I was in government, I really, really felt this. Because you’re sitting, as I said, at the top — in Number 10 or whatever; it sounds fancier than it is — and you want to know what’s actually happening on the ground. Here’s a question: how much AI research is the UK government currently funding? The UK government does not know the answer, because no one has bothered to go and label every single grant application as “AI” or not, and the application systems sit in completely different places — different research councils, different government funding agencies, whatever it is.
So I asked this question, and it took three months to get an answer — one that was a good attempt given the available data, but still nowhere near the sophisticated answer you’d want in order to make good decisions on various things. And that’s in a world where everything is already digitised and sitting in databases: the constraint is the human labour required to go and read every grant and label it as AI or not.
Whereas now, a CEO or a senior official who cares about the allocation of resources can just ask that question, and the equivalent of hundreds of thousands of hours of human labour — carefully reading every possible application for funding, or whatever it is — can just happen, almost instantly. That really changes the nature of decision making, and it really changes the power of people at different [levels].
There’s really interesting organisational sociology stuff here, because middle managers have lots of power: they can hide things from the higher-ups, and they can present things as they want them to be seen. If the more senior people can suddenly ask the LLM a question, and it will read everything from every customer service chat or whatever it is, then — assuming it’s set up right, which is another question — in theory it’s not going to lie. And suddenly you can do all kinds of things you couldn’t do before.
So I think that’s the way in which this is going to be really different from other technologies. Again, there’s analogies and disanalogies. IT certainly made centralisation much easier, because suddenly headquarters can keep in touch with all these distant places much more easily than before with information communication technology. I think this is kind of a whole further level of that, which is going to be absolutely fascinating to see what happens there.
Ways LLMs might be similar to previous technologies [01:48:40]
Luisa Rodriguez: Yeah, that seems like a lot of differences. I’m just really having this feeling of, at this point, does it even make sense to try to draw analogies from previous technologies? I guess it sounds like you think that at least some things are similar, and that maybe… Well, I don’t know. You’ll tell me how much those things matter. What will make AI similar to other technologies that have been kind of general-purpose, big game changers?
Michael Webb: So I think there’s two buckets: there’s a “humans are humans” bucket, and then there’s the government bucket.
So let’s start with the government bucket. The government bucket is basically regulation — though I’d frame it more broadly and call it “collective action.” Government is one kind of society-wide collective action, but there are others — like unions and professional bodies and all this kind of stuff.
So, here’s a question: Do you think that in 10 years’ time, you’ll be able to just talk to a language model, have it prescribe you a prescription-only medication, and then go and collect it from a pharmacy? Because capability-wise, we’re basically there already — it’s good enough today, or very close to it. The question is: would it be legal?
Luisa Rodriguez: Yeah. As soon as I start thinking about it, I’m like, there are a whole bunch of interest groups that are going to want that not to happen. There are some interest groups that are going to feel worried that it’s going to make mistakes; there are interest groups that just want to be protecting the people in the jobs that are doing that now. So it seems at least plausible to me that people somewhere will decide that we shouldn’t make it legal. Though I don’t know. In 10 years, it also wouldn’t surprise me, to be honest.
Michael Webb: Right. You’re absolutely right that there are these very powerful interest groups. Some of the areas that we all agree seem most likely to be affected by AI are things like what doctors do and what lawyers do. And doctors and lawyers each have the most powerful lobby groups you can possibly imagine: the American Medical Association and the British Medical Association for doctors; and for lawyers, the Bar Council and the various solicitors’ bodies.
So here’s one thing that happens: they set all the professional standards for the profession. They decide who gets to be a doctor, and they decide how many doctors — or lawyers — get accredited every year. If you open a newspaper basically any day of the week, you will see how powerful doctors are.
Another example: you might think it’s obvious that video consultations are just more efficient and better for patients, for most things, most of the time, and that we should at least have them as an option. It literally took COVID to go from “It’s completely unsafe to allow remote consultations — it’s not OK, we shouldn’t allow it, it’s banned” to “Oh, shit. It’s the only option; I guess we have to do it.” And now it’s fine: everyone does these things; it’s much more common. But doctors were able to block it for years — the government couldn’t make them offer e-consultations, and I believe it tried and failed for a long time. I’m not an expert on that exactly.
And similarly, lawyers are very good at working the law, as you might imagine. OK, that’s mean; that’s not fair — they don’t literally write the legislation. But they’re very, very influential, and they’re the gatekeepers to this kind of work. Why hasn’t a startup come along and just offered legal services online for free? It turns out you can’t do that: it’s literally illegal, and in practice, they’re the ones who keep it illegal.
So regulation has always really been regulation by the government plus collective interest groups. Unions — whether blue-collar unions or bodies for professional white-collar workers, which don’t have the word “union” in the title but are definitely unions — are very, very powerful. And they really slow down all kinds of applications — possibly for good reasons a lot of the time; for any given application, it’s an open question whether we should slow it down, given the harms involved. But they are always going to argue for “You need the human completely in the loop, and we shouldn’t change a thing, and we should keep our salaries the same,” and so on and so forth.
So you might think that doctors’ work is going to be hugely changed by AI. Maybe the job bifurcates: on one side, a role requiring much less medical knowledge and much more empathy — closer to a nurse, where the doctor uses GPT and interacts with it a lot more — and on the other, a smaller number of very, very specialised people who are somehow even better than GPT-4 at medicine (if that’s possible; I’m not sure). Is that going to happen? If so, that feels like a good reason not to assume you’re going to have a nice, steady, well-paid life as a doctor. It might suddenly feel like you’re training to be a nurse rather than a doctor — and those are different kinds of career paths for certain people, right?
Luisa Rodriguez: And that’s the dynamic we talked about earlier, where people that don’t have the skills yet might have fewer incentives to be like, “I definitely want to be a doctor, and then I definitely want to lobby hard for AI systems not to replace doctors.” They’re just going to be like, “Seems possible that AI might, at some point, replace a lot of the tasks that doctors do. So I’m going to go into this adjacent profession that does seem like there’s going to be loads of demand for it.”
Michael Webb: Exactly. And so, one, that might happen. And in addition, it might also be the case that the doctors’ union is successful at preventing any change. And so this change doesn’t actually happen, and so you have fewer doctors going into the profession and no AI-enabled diagnosis and primary health care or whatever. So everything actually gets slowed down and gets worse, because of the interaction of the automation and the regulation.
So I have no idea what’s going to happen in any particular case. But I think we can be extremely sure that there’s a tonne of interest groups that are going to be pretty successful for a pretty long time in stopping things from changing faster than it’s in their interests for them to change.
So the point here is AI is not immune from this. In fact, if anything, it’s so salient that it’s going to be extra subject to this. And as we’re hearing, there’s ever more talk of regulation of these models.
Luisa Rodriguez: Yeah. I guess there have been things like… Italy saying no to ChatGPT —
Michael Webb: They banned it, and then they’ve unbanned it.
Luisa Rodriguez: Oh, they’ve unbanned it now, OK. But that’s an example of the kind of thing that will keep happening.
Michael Webb: Exactly. And to distinguish: a lot of the chatter right now is about what might slow down progress on the frontier models themselves. I’m mostly talking about something different: forget what the frontier model can do — is it actually going to be used in the economy? That’s a very different question, because the doctors’ union is going to get in the way of it. So that’s one huge bucket, and we could talk about it for a long time. Suffice to say it’s really, really, really important. Nothing has changed there — if anything, the interest groups are stronger than before. So, that’s that.
Then the other bucket is “humans are humans” — the way people actually make decisions. I talked about how LLMs could make it easier to retrain, but you still have to want to retrain, or to do things differently in some way.
Think about teaching as an example. LLMs could completely change the way classrooms are run: the teacher spends much less time marking, and maybe lecturing, and more time doing one-to-one support, whatever it is. Maybe teachers want that, maybe they don’t — I imagine most of them would, actually. But one thing I’m quite sure of is that there is no way the government will be able to force teachers to adopt particular software and use it in certain ways. The teacher is master of their classroom, right? There have been many examples of governments wanting to make teachers do things differently, and generally it’s very hard. Occasionally it works — phonics in the UK did get embedded in certain places — but in general, teachers’ unions have a lot of power, and the government cannot control what happens in classrooms.
And so that again applies in lots of different places. The stronger the union, the more it applies. But in general, humans don’t like change for the most part. They like things the way they are.
Luisa Rodriguez: And learning new things is kind of scary. It sounds like you’re actually using Claude quite a bit. I’m using it some, but I could totally be using it more — except it’s kind of aversive. I’m not exactly sure how to do it, so there are some things I’m just not bothering to learn, like a particularly good prompt to figure out what my interview questions should be. And I’m pretty curious and open to change; loads of people are, I think, even less open to change than I am. So yeah, it makes sense that at the individual level, humans are less excited to learn new things — and especially to have those changes imposed on them — than we might imagine.
Michael Webb: And it’s often more than just a change to what you’re doing — it has bigger psychological consequences. Suppose you’re a teacher, and you’ve spent years creating these amazing ways of teaching things, and mark schemes, and you always do your marking from 6 to 7pm after dinner or whatever. Suddenly your entire routine has to be completely reorganised, because the LLM is doing the marking — but it doesn’t always get it right, and whatever it is. And all the human capital you’ve invested — your own time, your loving tender care in creating these particular ways of doing things — you’re being told it’s not worth anything anymore, and that you should stop using it and use something else, which on the face of it looks worse to you. No way, right? That happens everywhere, for most people, all the time, with new things — and this is certainly a new thing. So that’s another category.
Another subcategory of humans being humans — this is one of the more obvious ones — is we are generally liable for things. And if you make mistakes, generally it’s not an acceptable excuse to say, “Oh, I’m sorry that person died. The AI did it,” right? Or, “I’m sorry that the money left your bank account that wasn’t supposed to. It was the AI’s problem.” Like, that’s not an excuse. You have to put it right.
And companies are liable too. Most of us spend most of our time interacting with very big companies, and most people work for one. Big companies have brands to protect, and they don’t want to take risks. So they’re going to be really slow and conservative about introducing LLMs — which could be amazing, but which maybe have a 0.1% chance of an error. If you’re really big, that 0.1% is going to happen at some point, and it’s going to be a big news story. So you’re not going to adopt until you’re 99.999% sure it’s going to work, without causing a PR disaster.
Luisa Rodriguez: Yeah. It reminds me: I’ve tried to use Claude to prep for podcasts, but Claude just makes things up sometimes. And if I don’t know for sure which of the things it tells me are true, I’ll repeat falsehoods on the podcast, and that seems really bad. So I’m kind of just waiting to hear that the rate of that has gone down — I’m not racing to figure out a way of using it that gets around the not-truth-telling. At some point I’ll get the sense that the risk is low enough, and who knows how long that’ll take?
Michael Webb: Exactly. And different people will have different thresholds, right?
Luisa Rodriguez: Right. For risk.
Michael Webb: If you’re a creative writer, who cares if it’s making something up? Because you’re not trying to be fact-based anyway. I hope that’s not insulting to creative writers. Like, that’s not the point: you’re not supposed to track events in the world or whatever. But if you’re a lawyer, if you quote a case that doesn’t exist — as in that case earlier this year — that’s really, really bad.
And so the threshold for you and me using it might be 90% accuracy. If you’re a lawyer, it has to be like 99.99999%. And it’ll vary by different tasks.
But then another point, based on what you just said — “I’m going to wait for it to get better” — that is a huge, huge deal. If you’re in a world where it’s very obvious to you that this technology is getting better all the time; and you know that you will incur a fixed cost to adopt it; and when you have adopted it, it’ll cost you more to upgrade it again later: when do you make the decision to adopt it in the first place?
So there’s actually a whole body of work on this in operations research — the discipline of how you run companies’ operations effectively and efficiently — which looks at optimal technology adoption from exactly this perspective.
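Here’s a toy version of that adopt-now-or-wait calculation: a falling price, a fixed per-year benefit, and a one-shot adoption decision. All numbers are invented.

```python
def best_adoption_year(horizon=20, initial_price=100.0,
                       annual_price_decline=0.3, annual_benefit=12.0):
    """Pick the adoption year that maximises total benefit minus purchase price."""
    return max(
        range(horizon),
        key=lambda year: annual_benefit * (horizon - year)
                         - initial_price * (1 - annual_price_decline) ** year,
    )

print(best_adoption_year())  # => 3: waiting a few years beats adopting at once
```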
Luisa Rodriguez: I guess those Australian banks that bought whatever computer system you were talking about to do their banking calculations kind of got it wrong: they invested in a thing that meant that for decades afterwards they couldn’t get new software, because theirs didn’t use the right language. It strikes me that there’s a real optimisation problem here: you want to get onto a technology quickly, because it’ll make you more efficient — but adopt too quickly and you get stuck at some suboptimal early stage.
Michael Webb: Like, “Oh, man. We’re stuck.” Yeah. Or maybe they were right to adopt it in the ’60s or ’70s, whenever they did — but they then knew, “OK, we’re not going to be able to upgrade this for literally 50 years, and it will cost a billion dollars. So when do we make that billion-dollar investment? In 1990? In 2000?”
Luisa Rodriguez: Yeah. It’s just a hard decision.
Michael Webb: Yeah. So that’s a decision facing businesses, and it’s a harder decision for bigger businesses, which are more complex — and most of the economy is bigger businesses. So many of them will optimally just wait, often for a long time, before making the really big decisions about adopting this stuff. They can see how fast it’s moving, so why not wait a bit?
And then a related point of that is: whilst waiting, you get to watch other people. And you can learn from other people’s mistakes, rather than making them yourself, and let them incur the costs: “Hahaha, let the early adopters make the mistakes,” right? And then we sort of sit back and then the late majority or whatever, they get to adopt it at a time when it’s much less of a risk and more of a guaranteed thing.
Luisa Rodriguez: Yeah. A bunch of lawyers just learned that they can’t use GPT-4 to write legal documents without getting these hallucinations. So they’re going to wait until it doesn’t do that.
Michael Webb: Exactly.
How market structure affects the speed of AI adoption [02:03:26]
Luisa Rodriguez: Is there anything else we should be thinking about with regard to how the AI version of this technology rollout is going to look relative to other technologies?
Michael Webb: I think one piece we haven’t touched on at all is what economists call “market structure” or “industrial organisation.” Namely: is this technology controlled by a single monopoly supplier, or is it completely distributed, so anyone can use it for free? Those are really important questions for the speed of adoption. The short answer is that if you’re a monopoly supplier — i.e. the only company that can possibly provide this product — you’re generally going to set prices higher than you would if you were competing with others providing the same thing. Setting them higher means fewer people can benefit from using it. It also means you capture more of the profits: people pay you more than they otherwise would, and you’re concentrating wealth in a few companies rather than having it more equally and broadly distributed around the world.
So there’s one kind of meta point I want to make first, which is: I think of GPT-4 tokens as being like bandwidth in the telecoms system — telephone lines, fibre, whatever. If you talk to people who’ve been in telecoms for a long time, running big telecoms companies, they’ll tell you the iron law is that people never stop wanting more bandwidth.
That’s the analogy: in telecoms, you might think that at some point enough telephone wire has been laid. And it turns out, no: 3G, then 4G, then 5G — people want more and more and more. I think access to these language models will be very similar. Right now, with GPT-4 — I don’t know the exact cost per token; it’s less than a cent — you might think, “Gosh, that’s basically free.” No, it’s not. It’s actually really expensive, because many of the applications you can imagine for these things will depend on having ever more, effectively unlimited, access to tokens.
And so think about the example I gave you earlier: you’re the government, and you want to know how much AI research the government has funded so far. That’s one question that involves reading hundreds of thousands or millions of documents and pages, and right now that would cost you quite a lot of money — for a single query. Today I spend probably $5 a day on my queries to Claude and GPT-4. I can afford that, but if I’m a student using it to tutor me, $5 a day is a lot of money — on a global level, way more than many people have available for anything.
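Order-of-magnitude arithmetic for that one big query; the prices and document sizes are assumptions, not current rate-card numbers:

```python
documents = 1_000_000          # grant applications to read
tokens_per_document = 2_000    # rough length of each
price_per_1k_tokens = 0.03     # dollars; assumed GPT-4-class input pricing

cost = documents * tokens_per_document / 1_000 * price_per_1k_tokens
print(f"${cost:,.0f}")  # => $60,000 for a single economy-wide question
```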
So how cheap this stuff gets will have a huge, huge impact on who benefits from it. Right now there’s basically OpenAI and Anthropic’s Claude, and then a few others — maybe Google will have something as good as GPT-4 at some point. One lesson from economics is that three providers is basically not much better than a pure monopoly. It’s a bit better, but you have to get to more like 10 providers before you have something that looks like the economist’s version of perfect competition — i.e. offering it at cost, without a big profit margin on top.
Luisa Rodriguez: Because you can. Yeah.
Michael Webb: Because you can — you can get away with it. And even with just three providers, it already feels to me like some differentiation is happening: right now, in my experience, Claude is just better at writing tasks in general, and GPT-4 is better at coding tasks in general. That might change over time, but right now that’s my experience of it. And so if I’m a coder, I don’t really have much choice: I have to use GPT-4.
Luisa Rodriguez: Right. It’s even more of a monopoly.
Michael Webb: And then they’re a monopolist in that little segment — and that’s a really, really big deal. So if regulation gets brought in that doesn’t just stop potentially very dangerous, much larger models from being built, but mostly functions to stop new players doing anything at existing scales, that’s going to have huge impacts on how much the world benefits from these technologies. Because the benefits really come when they’re close to being free, right? Not when someone’s making a bit of margin.
Luisa Rodriguez: Otherwise only the richest people can afford them.
Michael Webb: Exactly. And so I think we should really get in the habit of thinking of this stuff as if it’s like bandwidth on your phone, and we should be wanting ever more of it ever more cheaply. And we should do things to make sure that it is cheap.
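A standard way to put numbers on the “three versus 10 providers” intuition is a Cournot model: with linear demand P = a − b·Q and n identical firms at marginal cost c, the equilibrium markup over cost is (a − c)/(n + 1). A minimal sketch with illustrative parameters:

```python
a, c = 10.0, 2.0  # demand intercept and marginal cost (illustrative)

monopoly_markup = (a - c) / 2
for n in (1, 3, 10):
    markup = (a - c) / (n + 1)
    print(f"{n:2d} firm(s): markup {markup:.2f} "
          f"({markup / monopoly_markup:.0%} of the monopoly markup)")
# Three firms still sustain half the monopoly markup; around 10 firms gets
# prices close to the competitive, near-cost benchmark.
```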
Another reason why AI is like telecoms: to offer phone service, you have to build all these cellphone towers, or lay all this cable, and that’s really expensive — so not many companies can afford to do it, and you naturally end up with a very small number of providers. Historically, what’s happened is that the government ends up regulating those providers and capping their profits. Thames Water is the only water supplier in London, right? You have no choice but to use Thames Water, I believe: they’ve got a complete monopoly. However, the government also —
Luisa Rodriguez: They can’t charge whatever they want. They have to charge some amount that’s —
Michael Webb: The government says, “Sorry, you can charge exactly this much, and you can make this much profit” — and yeah, “bad luck,” as it were.
Luisa Rodriguez: And they’ll do it because it’s worth it, but the prices won’t be exorbitant because that’s just not allowed. The government thinks that would be unfair. And rightly so.
Michael Webb: Exactly. And these are generally listed companies — you and I can invest in them if we want to; they trade on stock markets. They’re completely independent corporations, not part of the government — not nationalised at all — but the government imposes price controls saying how much they can charge.
And so one thing I would expect over the next few decades, potentially — depending on what happens with the market — is that if regulation ends up, for safety reasons, stopping all these smaller players from coming along, then the government will end up saying: “OpenAI, you’re basically AT&T, and we’re going to cap the amount of profit you can make and what you can charge per token,” or whatever it is. And these become very dull, boring companies — kind of like a water utility, or like AT&T: maybe there’s a Bell Labs attached, but the main thing is the utility business.
So I think there’s a bunch of really interesting political economy here, because the speed of adoption will be very much affected by the price of this stuff. The cheaper it is, the faster it’ll be adopted; the more expensive, the slower — and even what we already have could sustain decades of really exciting economic growth and progress. So that seems like a really important fact and a big deal.
Luisa Rodriguez: Right. Yeah. It seems like that could have a couple of effects. One is that if the price is capped for consumers, the technology could be much more widely adopted, and what we have now would have benefits across a wider range of people and industries. But it also seems like it might slow down further progress, if it becomes less profitable for OpenAI, for example, to push the cutting edge. Is that something you’d expect to see in a world where the government did impose a cap?
Michael Webb: Yeah. Well, water companies are not famously innovative, right? Although there’s an interesting wrinkle with Bell Labs. I’m not an expert on the history of AT&T and its R&D, but my understanding from having read a bit about it is that they staved off regulation for much longer than you might expect by investing in things like Bell Labs, and saying: “Look, we’re doing all this really important R&D with our profits. This is good. This should exist.”
But you could also run it the other way around. Here’s a clever thing the government might do: say, “We’re going to cap the price at this amount, and the cap is going to go down by 50% every year. If you can figure out a way of doing R&D so that your costs fall by more than 50%, you keep all the profits you make.” So a smart government could nevertheless incentivise the sorts of things it wanted to incentivise, if it was thoughtful and clever about it.
There’s lots of other ways of doing this, but I think there’s more to it than just, “it’ll reduce innovation”: there’s all kinds of ways you can set things up so that it might actually encourage innovation of the kind you want to see, as opposed to the kind you want to avoid.
Couldn’t AI progress just outpace regulation? [02:12:16]
Luisa Rodriguez: Yeah. Could AI progress just outpace the government’s ability to create these regulations? I think of regulation — of government policy in general — as slow moving. Maybe it moves a bit faster when extreme things are happening in the world. But COVID was extreme, and while some governments were able to move somewhat more quickly than others, it seems like, all things considered, it would have been better if they’d moved faster still — and they found that hard.
AI seems like it moves incredibly quickly. If, in the next year, we get improvements to GPT-4 that basically double the ones we got last year, will there already just be really extreme impacts? And not just impacts, but adoption — such that some of these regulatory effects just don’t keep up, and so don’t slow things down the way you might expect, or the way they have in other cases?
Michael Webb: I think that the things we were talking about before — in terms of all the reasons that interest groups and lobby groups can slow things down — as I said, I think those very much apply here. And so even though the technology is moving really quickly —
Luisa Rodriguez: They will keep up.
Michael Webb: They will “keep up” in terms of stopping it being used, right? However fast it’s moving, you can always pass a bill to say no, right?
So the thing that I’d be more worried about is the sharp end of capabilities — the things that you’ve had many guests on this podcast talk about — as well as misuse and those kinds of things. That’s where I’d be more concerned about regulation keeping pace. Because there, it’s not like you have to persuade lots of people in the world economy to adopt your thing and change their systems. All you need is just one bad person to have a very clever thing and to do bad stuff with it, right?
It’s those kinds of things where you have to worry more about regulation moving fast enough. But even there — I’m not an expert on the history of nuclear regulation, but I believe something like the following is true. At some point, someone convinced the US government, the US president, that nuclear was a really big deal, and possibly very dangerous. And with a single stroke of the pen — I don’t know whether it was a presidential executive order or congressional legislation — almost overnight, all research on anything nuclear was classified. So you’re a researcher, just doing your PhD, sitting at home, doing some physics or whatever. Suddenly, from tomorrow, you doing any more work on that is illegal. The government can just do that, right? The US government can do that.
And you can imagine that if people do enough to convince governments that this stuff is really, really scary — in terms of the existential risk level of this — the government can be like, “OK, you convinced me. As of now, we are classifying all research on AI.” That could just happen tomorrow, and then all these companies would just shut down overnight. And that would be the law, and they couldn’t do anything about it, end of story. That’s a completely possible scenario, in terms of the powers governments have.
Luisa Rodriguez: So it’s not that fast government action is impossible; it’s that it doesn’t happen that often. And sometimes when it does happen, it happens suboptimally — it’s too slow.
Michael Webb: It always happens suboptimally, right? Either it’s obviously too slow, or it’s too fast and too blunt. As I say, I’m not an expert, but I imagine there was stuff classified under the nuclear rules that it would have been completely reasonable not to classify, and people should still have been able to work on it, but they couldn’t. Maybe we’d have much better nuclear energy today if that hadn’t happened.
So there’s all kinds of ways in which any regulation is going to be very much not first best: second best, or maybe only third best. And I think we’re in a really scary place right now, because regulation, if it happens, could do a lot of good. It could do a lot of harm as well. And so we’re going to have to tread very, very carefully.
Luisa Rodriguez: Yeah. OK, the key thing I’m taking away from this is: I had some intuition that fast government action is unlikely, because you just don’t see it that often. But it actually is totally possible, and we just have to do a good enough job convincing government that the safety stuff especially is worth taking seriously. And then, were we to do that, things could change very quickly.
Michael Webb: The thing I would take issue with is “do a good enough job convincing.” What you’re kind of doing is you’re waving a red rag in front of a bull. And you want the bull to jump in that direction, and it might do, but it might jump in the complete opposite direction. And you don’t know in advance which way it’s going to jump. So anything where you’re trying to get the government to do a really, really big thing, by default, it will do nothing. But there’s a small chance it would do a really big thing, and there’s a big chance that if it does, it would be really bad. But it could also be really good. So the only thing I’d say is it’s very high-stakes stuff, and it needs to be thought through very, very carefully.
Luisa Rodriguez: Makes sense. Sounds right.
How other people think AI will impact the economy in the short term [02:17:22]
Luisa Rodriguez: So we’ve gone through lots of reasons that the deployment of these AI systems, as they get more and more capable, might be similar to other technologies historically — which might mean something like it takes decades, with 90 years sounding like a surprisingly common default. But there are some disanalogies that make it sound like it might be much quicker. Those seem pretty important to me, so it seems worth putting a fair bit of weight on that path.
To put some parameters or upper and lower bounds on what we’re talking about here: in the “pessimistic” case where it’s deployed more slowly for lots of these reasons — government intervention and humans being humans and other things — what do people think the impacts on the labour market in the next few years will be? So not in decades when maybe AI is much more capable than it is today, but in the next two to five to 10 years: What do people think might happen on the low end of impact and change?
Michael Webb: I think some of the most pessimistic people are professional academic economists. Because, you know, they’ve seen this all before — or they think they have, at least — and with all the things I was just telling you about, they see all the analogies, and maybe haven’t spent that much time thinking about the disanalogies.
There’s a great thing called the IGM Economic Experts Panel, which is a bunch of seriously top-of-the-field, absolutely brilliant people — you know, professors of economics at all the best places. They ask them these questions every week or whatever it is, and they all give their answer in terms of what they think will happen and how confident they are. And they recently asked: do you think all this fancy AI stuff is going to have a really big impact on GDP in the near future? They basically all said, “Nope.” Like, maybe it’ll be as big as the internet if you’re lucky — and that didn’t show up in GDP statistics much for a very long time.
Luisa Rodriguez: Interesting.
Michael Webb: And one point is that GDP growth has been slowing since the 1950s. That post-’50s period involved amazing innovations: the computer and who knows what else. So their reaction is: “You’re telling me that AI is going to be better than all those things? I don’t think we’ll see anything showing up in the GDP statistics. We’re not going to be in a world where suddenly we’ve got, you know, 5% growth in the US.” Economists would all fall off their chairs if that happened next year, for sure.
Luisa Rodriguez: So out of distribution.
Michael Webb: Yeah. So that’s kind of what people say who are more pessimistic.
Luisa Rodriguez: Got it. And what’s the story they’re telling? It sounds like just an outside-view kind of prediction: there have been loads of impressive new technologies developed over the last few decades, and they didn’t cause perceptible changes to GDP, and this is another one of those — without doing much updating on the specific facts about AI. Is the story they have in mind that yes, there will be some adoption, but it’s not going to double or triple anyone’s productivity, and it’s not going to automate away so many jobs that there’s a 150% increase in the labour available to do more work? Do they just think the predictions about what AI could plausibly do, in terms of boosting these kinds of labour inputs, are overblown?
Michael Webb: A charitable version is to think back to 1960. The first computers are being invented, and the inventors of the computer say, “Look, this can do anything. We have all these humans doing loads of jobs involving tabulating data and storing, retrieving, and manipulating numbers and addresses and databases, and all that kind of stuff. And this computer can do all of that today.” And you had the US Secretary of Labor, Willard Wirtz, giving a big speech in 1963 saying we’re about to throw all these people onto the human slag heap because of automation from computers.
And they were correct, I think, in that a huge fraction of the economy was in fact work that could be done by computers, and computers were coming along right then and there. The fact is, though, that it took 50 years for that to happen, for all the reasons that we talked about. So I think economists just expect the same sort of thing to happen this time. You know: big impacts, but spread over a long time. Ten years is not a very long time.
Luisa Rodriguez: Right. Have economists tried to quantify the impact of computers on GDP, basically?
Michael Webb: They have done a lot to try and quantify the impact of TFP [total factor productivity] growth — which is a fancy way of saying productivity improvements of all kinds (very much including innovation and technology) — versus other influences on GDP: you know, how much education people have, or how many people are working in the formal labour market versus just working at home, or whatever. So lots of that kind of thing.
There have been relatively limited attempts to quantify how much GDP is because of computers, because it’s just really hard to do. On an individual company level, you can do before and after: before computers, they have this revenue, this profit; afterwards, that revenue, that productivity, whatever. But because the entire economy is changing at the same time — all these demand impacts we discussed before — it’s really, really hard to come up with a single measurement of what the economy would look like today if there were no computers. That’s impossible to measure properly. But we know an upper bound is how much the economy has in fact grown, right? Like, it can’t be more than 100% of growth.
Luisa Rodriguez: Right. And I guess in that case, overall growth has been slowing down relative to before. So if that’s your analogy, then AI isn’t obviously going to change things at the level of GDP.
Michael Webb: Yeah. Don’t forget computers have had a huge effect, and we’ve had huge GDP growth over the last 50 years. Things are very different in your life and my life today than they would have been if we were sitting here in the ’50s. We wouldn’t be on this podcast, for a start. But that stuff has happened quite slowly, over several decades, and we’ve never had more than a small number of single-digit percentage points of GDP growth at any given time.
Luisa Rodriguez: Cool. OK, so that’s the kind of charitable read of where they’re coming from. My guess is that they’ll be wrong, but I see the logic, and it doesn’t seem crazy. And maybe I’ll be wrong, and maybe they’ll just be like, “Ha! We told you it was just another technology.”
Why Michael is sceptical of the explosive growth story [02:24:39]
Luisa Rodriguez: So the people on the other side of the spectrum, who are more optimistic about AI having a bigger impact, what are they expecting to happen?
Michael Webb: So I think you’ve had some of them on the podcast, so I’ll be very brief on this, but broadly the claim is that AI could lead to explosive growth. And where that comes from is thinking not so much about automating day-to-day activities in the economy, but about automating the process of innovation itself. The people who work on this think that the most important thing for economic growth going forwards — certainly in rich, advanced economies — is going to be having new ideas, innovations. And I’ve written on this; I have papers on ideas.
However, if you think that one thing AI may be able to do is speed up the process of research itself — the thing that has the biggest, most important impact on cutting-edge economic growth — then you could imagine a different regime, where innovation is way faster and the cutting-edge stuff is progressing very quickly in every different area.
Luisa Rodriguez: Right. Ideas get much easier to find, which creates this feedback loop of a bunch of growth.
Michael Webb: Exactly. Yes. And I certainly think that ideas are much easier to find with GPT-4, and its successors and fine-tunings and implementations and so on, than before. So I am completely in the camp that thinks that these large language models will have a huge impact on R&D and the speed with which you can do R&D.
I think the interesting question is what bottlenecks are still there. And we could have a long discussion about this, and I imagine you covered lots of it with other guests. Briefly though: for an innovation to actually have an impact on the economy, it has to be adopted, right? All these economic growth models elide this — they assume scientists do R&D, and it immediately shows up in the actual economic output of the goods and services that you and I are consuming. In fact, someone can have an idea, but the doctors have to agree to use it, and all that kind of stuff.
I think all that stuff still applies. So I think you still have these huge issues of people getting in the way, basically, and things being much slower than they could be if you were like, “Yes, let’s just do everything the AIs told us to do.” I don’t think that democracies, or indeed any states, really will pursue that strong of a path — and humans will get in the way.
Luisa Rodriguez: OK. I guess my inner Tom Davidson — who, as you said, we had on the podcast and who has this idea about AI causing explosive growth — I wonder if he’d say something like those will get in the way for a time, but they won’t be bottlenecks forever. Humans being humans, I guess whoever’s sitting there doing a job and is like, “I’m not sure I want to use GPT for my job,” will eventually, over five to 10 years, consider adopting. Or they’ll just age out of their profession, and the new people will be more likely to adopt the tech.
Michael Webb: That’s true for any particular technology, right? So GPT-4 today can do all this stuff; in 20 years’ time, all the people have aged out and it’s finally being adopted. But for all the R&D that’s been done by GPT-4 and its successors over that 20-year period, there’s a whole other set of humans who now have to age out before those innovations can be adopted, in this world where it’s humans getting in the way. So it’s always going to be the case that there are humans in the way of any particular new adoption.
Luisa Rodriguez: Yeah. I guess I’m on board with the idea that that’s a bottleneck and it’ll slow things down, but not that it’s a bottleneck that will rule out the more extreme outcomes, where growth is really on the explosive end. Why might I be wrong?
Michael Webb: No. Look, forecasting is a tricky business, and no one claims to know what’s going to happen. I would not rule out anything. I’m not going to sit here and say there is 0% probability of any particular thing. Compared to most economists, I’m sure I would be way on the side of thinking this is going to be a really, really big deal. But compared to Tom, I think I would say, if you spend enough time studying economic history, you see all these things that slow stuff down — and those things that slow stuff down look like they’re not going to go away. And so I would want to put all that stuff back into his model. His model doesn’t have that stuff in it; his model kind of assumes there’s none of this humans getting in the way, in the ways that we spent a lot of time earlier in the conversation talking about.
You know, Tom and I have had these kinds of discussions, talking through, “Tell me your bottleneck, and I’ll tell you why it’s not a bottleneck” or whatever. So we can have those discussions, and they’re very fun to have.
Luisa Rodriguez: Oh my gosh. I want to have you both on the podcast now.
Michael Webb: That would be fun. But it’s a kind of thing where, you know, I can keep coming up with new bottlenecks, and he can keep dismissing them, and we can keep going on forever. And so there’s not like a nice definitive thing, where we both agree that if X was true, then here’s the answer. It’s not a thing where in the next five minutes, you and I can talk more about this and reach a nice, clear conclusion whether there are bottlenecks or not in R&D or whatever.
Luisa Rodriguez: Be like, “All of the bottlenecks can be ruled out.” Yeah. Broadly speaking, it sounds like you’re in the camp of AI could have pretty big effects on the economy and on growth — maybe it’ll be on the faster side, maybe somewhat on the slower side. But overall, you are not in the camp of it’s just any other technology, like the internet — which had some impacts on growth, probably, but not world changing. I mean, they were world changing, but…
Michael Webb: Yeah. Going from nothing to the Industrial Revolution was a massive deal; given the economic regime we’re in, we’ve already done that zero-to-something step. So Bob Gordon has this nice thought: once you go from only having outdoor toilets to having a flushing toilet — a proper modern toilet — that’s a huge improvement in quality of life. Almost nothing compares to that. And you can only do that once.
Luisa Rodriguez: Right. You can add a bidet, but that’s only so much better.
Michael Webb: Yeah. And so I have some intuitions in that direction. Compared to 1800, we’ve had so much amazing change already, and I believe that AI is going to bring another completely incredible, unbelievable set of changes that will have huge impacts on GDP in the coming years and decades. I think the world is likely to be, if not totally unrecognisable, then pretty close to it, in the coming decades.
But I think this is perhaps more an argument about measurement than about impact on the world. The way that shows up in GDP might be a bit of a phase shift — a regime change compared to the postwar period. But I’m not quite as optimistic as Tom is, I think, about how big that will be: I don’t think we’ll have 30% a year growth happening indefinitely. I think that’s unlikely.
Luisa Rodriguez: Cool. That is really helpful. OK, so there’s still some space between Tom’s view and yours, and then probably some more space between your views and some professional academic economists. Do we have any early evidence about what trajectory we’re following? Are there any indicators that are pointing to more growth or less, or bigger labour impacts or smaller ones?
Michael Webb: So there’s a bunch of things you look at in terms of GDP growth and productivity growth. And right now, there’s nothing at all to see in those that suggests anything is happening. It’s kind of historically always the case that, you know, technology comes along, does amazing things, and it takes a long time to show up in those statistics.
There’s a fun paper from a few years ago by Bill Nordhaus, an economist at Yale, called “Are we approaching an economic Singularity?” He comes up with these very thoughtful tests around what we would expect to see in the economy if we are approaching this point. Well, the final line of the abstract of the paper is, “The tests suggest that the Singularity is not near.” So that’s his sort of glib answer.
But broadly, economic statistics of the kind that we currently collect are more of a rear-view mirror than looking forwards. And so if something really big is happening, I think economists are not the people who will tell you, “Oh, yes, it’s all showing up in GDP.” No. It’ll be happening in many other places before economic indicators start suggesting it’s happening.
Luisa Rodriguez: Got it.
Whether AI will cause mass unemployment in the long term [02:33:45]
Luisa Rodriguez: I want to turn to the longer time horizon, or the point at which things are getting even more extreme because of AI becoming more and more capable. But I guess we should also just flag there’s all this complexity in the short time horizon, and so everything we say about the world where AI can automate even more tasks is going to be even more speculative. Is there anything more you want to say on that, caveat-wise, before we dive in?
Michael Webb: Just to emphasise it, yeah, and to say that, you know, we’re sitting here kind of prognosticating — and we’re just talking about the economic impacts. We’ve touched a tiny bit on politics insofar as it affects labour unions, and that very small slice of things. But there is so much else going on in the world that’s going to be really important and relevant to this stuff that we haven’t touched on at all.
So caveat emptor, I think. Hopefully, what we’ve talked about so far is giving people a really good sense of the kinds of forces at play and the things you have to be paying attention to.
The final thing I’ll just add onto all this is that you might say that one day, we’ll know what happened — but actually, we won’t. Because today, looking back, it’s not at all obvious what the impact of computers has been over the last 30 years. And many brilliant economists have had stellar careers answering a very specific question in a clever way, but no one’s really answered the macro question of like, all in, what was the impact? And what would the world be like if it hadn’t happened? We have no idea.
And so I think the same will be true, whether it’s in three years’ time, five years’ time, 10 years’ time: What would the world be like compared to if GPT-4 had never been invented? We’ll just never know, because we can’t observe the counterfactual world where it didn’t happen. So I think that the most we can do is be really thoughtful about the forces at play, and we can use those to reason, I think reasonably well, about the kind of things that are likely to happen and the kind of timeframes it might happen on.
Luisa Rodriguez: Yeah, that makes tonnes of sense. OK, so with all of those caveats in mind — ignore everything we’re about to say, because it’s going to be even more speculative, and we’ll never even know if it’s true — we’ve been talking about a case where AI can automate some small percentage of tasks. And do we know what percentage of tasks AI can automate right now?
Michael Webb: “No” is the answer. But to give you a ballpark estimate: in the OpenAI paper, for example, they say something like 80% of the US workforce could have at least 10% of their tasks affected by LLMs, and 19% of workers could see at least half of their tasks affected. Again, “affected” doesn’t mean automated or replaced; it just means exposed in some way. So these are reasonably large numbers.
Luisa Rodriguez: Yeah. It feels both low and high to me. But it could be really high. It could be 50%; it could be 90%. At some point, we’ll probably get to superhuman AI, and it can do all the tasks we can and more. But even 50% feels pretty different to what’s happening now. And I’m wondering if, at that point, any of these models will even apply? At that point, is the world just too different for this kind of conversation to be applicable?
Michael Webb: Yeah. So I think I’m going to stand up for economists here and say yes: the models do apply, all these considerations do apply. So let’s think about the question: Wouldn’t it be different if we’re talking about 90% of jobs being automated? Let’s go back to a place we started earlier in the conversation, thinking about agriculture in the US. In 1790, it was a true statement to say, “In the coming years, 90% of jobs will be fully automated.” That’s a true fact. That’s in fact what happened.
Luisa Rodriguez: That’s insane. Yeah.
Michael Webb: That happened over a 100-, 150-, 200-year timeframe, and so the speed of this change is really important. But then don’t forget — back to our talk about unions and the American Medical Association and politics and so on, not to mention all the rational decisions of company CEOs and so on — there’s all kinds of forces that mean these things take a long time, even if in theory one could do lots of stuff quickly. There’s also just these capital availability constraints and all kinds of things as well.
There’s just not enough spare cash flowing around in the world for everyone to do that at the same time. Or there’s not enough resources, because adopting technology requires all kinds of work to be done, and you can’t just stop the entire economy whilst you retool everything.
People still want to eat food, and they still want to fly in planes, and whatever it is. You can’t just down tools and say, “No, all we’re doing for the next five years is switching everything over to LLMs.” You can only take so many planks out of your boat and replace them while you’re sailing in the water at the same time.
And so all these kinds of constraints I think are not obvious until you think about them. So that’s point one: Even in a world with 90% of tasks automated, we have been there before. It happened. It happened lots of times. And we’re still here, and things are fine, right? Things look quite different from 1790, but many things are still the same.
In that sense, things can get weird, but there’s still some sort of upper limit in how fast I think they will naturally get weird from an economic perspective.
That said, let’s think about what happens when it is 90%, whether that comes in 100 years’ time or whether it comes in 10 years’ time. I think there’s a few really important things here. So we generally are going around saying, “Gosh, what if it automated 90% of cognitive tasks?” Big emphasis around the word “cognitive.” Many, many tasks in the economy are not cognitive tasks. And back to the old thing we’ve been discussing all the way through: when you automate something, suddenly all the incentives go towards how do you make more value out of the stuff that is left that is not automated, or that humans can now do because they’ve been freed up and they can do something else now. And I think there are many, many things that are not cognitive, that there’ll be huge amounts of demand for humans to do.
Luisa Rodriguez: Does that mean most people are going to be doing a bunch of physical tasks? I guess like moving boxes? Because that sounds pretty strange to me, even though your conclusion might mean that it’s not that we’ll have loads of unemployment, but it might be that most people are just doing a bunch of physical labour. Is that what it points at?
Michael Webb: So on physical labour: it would be weird if we all ended up lifting boxes in warehouses, and I don’t think that’s where we’re going to go. And there are things beyond physical labour, which we’ll get on to. But within physical labour, caregiving is a physical task, right? If you are looking after an old person who needs care, you have to be physically present and you have to help them do all kinds of stuff. That’s what you’re there for. That’s a physical job, and there is going to be so much demand for that kind of work going forward.
And don’t forget, most of us spend 18 or 25 or whatever years in education before we even start working, and then people seem to be living longer and longer and retiring earlier and earlier. So you’re actually only working for 30 or 40 years out of an 80-year lifespan, right?
And then half of those people maybe are not engaged in the formal labour force, because they’re looking after young children, or they’re just staying at home or whatever it is, or they’re rich and retired, they retired really early. Or they’re working not many hours a week. It seems like everyone these days is switching to four-day weeks. And it seems great. And in the long history of this, it’s amazing what’s happened in terms of hours.
Minor digression, quickly: In 1870, the average working person in America worked 70 hours per week. It was a gruelling life: you were working in the factories or whatever, and you were made to work really, really hard. Today, it’s 35 hours a week. So among full-time employees, we’ve literally seen a halving of hours per week. And more recently, just in the last 30 years, the UK has seen a 20% decline in hours worked per worker among those who are working. So there are ever further declines in hours worked, because people choose to spend some of their extra wealth on working fewer hours. That’s a long-run trend.
So per capita, there’s only like half of a person, because half your life you’re spending in education or retired. And then it’s like half again, because we’re working way fewer hours than we ever used to. And you keep doing that. Let’s suppose that across the entire economy there’s actually only 20% of a worker per capita. That means that even if everyone were doing caregiving, there wouldn’t be enough humans to go around — if we think it’s important to have human carers looking after people who need care.
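To make that back-of-envelope arithmetic concrete, here is a minimal sketch. The specific values are illustrative stand-ins (the participation figure in particular is chosen so the result lands near Michael’s ~20% ballpark), not real statistics:

```python
# Back-of-envelope: effective "workers per capita".
# All numbers are illustrative approximations, not real statistics.

lifespan = 80             # years
working_years = 40        # years in work (rest is education/retirement)
participation = 0.8       # assumed fraction of working-age people in work
hours_now = 35            # weekly hours today
hours_then = 70           # weekly hours in 1870, the historical benchmark

fraction_of_life_working = working_years / lifespan   # ~0.5
hours_ratio = hours_now / hours_then                  # ~0.5, the halving

workers_per_capita = fraction_of_life_working * participation * hours_ratio
print(f"Effective workers per capita: {workers_per_capita:.2f}")  # ~0.20
```

On those rough numbers, each person supplies only about a fifth of a full-time 1870-style worker, which is why even universal demand for human care could outstrip the supply of human labour.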
And so if we think we definitely want humans involved in looking after young children and educating them — we’re not going to hand our kids to the robots and say, “Great, see you when you’re 25” — and we’re probably not going to hand all of the elderly over to robots either — maybe some we will, if we don’t have the choice, but if possible, you’re going to prefer human care — then that already is a huge, huge unsatisfied demand that you can see persisting.
And the demographic direction of those forces is ever more towards a shortage of human labour, because we’re spending more time in education and more time retired, right? And therefore, there are fewer people able to do this work of teaching and caring, and ever more people demanding it.
Luisa Rodriguez: So it sounds like the story you’re telling is one where AI can automate loads of tasks. But because it can do that, there’ll be a couple of effects. One is people might just work less because they can. That makes a lot of intuitive sense to me. I will probably do that.
Michael Webb: And has been happening for a long time. Yeah.
Luisa Rodriguez: There’s historical evidence that’s been happening, and it could just keep happening. But then at the same time, we might just have values that mean that there are some types of jobs that we want humans to keep doing. And that there could be an equilibrium where the amount of work done is basically like the amount of work required to do the tasks that we want humans to do, and then everything else will be done by AI.
And maybe that does mean that there’s a much smaller overall workforce. Maybe it’s because there are fewer people working, and other people don’t need to be working. Or maybe it’s like everybody-ish is working, but they’re working for five to 10 years of their lives, and then retiring or working a few days a week. And so there’s like a huge decrease overall in the number of hours worked by humans. But it doesn’t necessarily go to zero, unless we at some point decide we just really don’t value being served by humans in any cases.
I guess maybe the piece that was missing for me is that the reason that’s possible is that automating a bunch of tasks super cheaply creates growth and a surplus of goods and wealth, which means people can just work way less. I had some story before that was like, maybe we’ll need universal basic income. But maybe the actual thing is that either we’ll have something like that, or natural economic restructuring and equilibria will mean you need much less wealth to have an as-good-or-better standard of living on far fewer working hours. And I mean, that’s just pretty wild to me.
Michael Webb: Well, again, it’s a continuation of current trends. There’s nothing new here, right? John Maynard Keynes wrote this famous essay — in 1930, I believe — called “Economic possibilities for our grandchildren.” And he predicted back then that in 100 years we would all be working 15-hour weeks. And in fact, we got halfway there: we got down to the 30-hour weeks where we are now.
Luisa Rodriguez: He was a bit ambitious.
Michael Webb: But he was broadly correct. And to your point, the equilibrium here is: if you can afford to retire earlier, then you do — but that means you have to have enough resources to pay other people to look after you, or make food for you, or whatever it is, even though you’re not working. So there’s labour demand being created: the more people retire early, the more work the people who are left have to do. And the higher their wages, the more it’s potentially worth working a bit longer — depending on which effect dominates: wanting to work more because you earn more, versus taking advantage of being able to retire early because you’ve made more money. These effects, again, pull in different directions, but there’s definitely a long-run equilibrium there, and it’s moving in one direction: we’re working less — and we are, I think, happier for it.
Luisa Rodriguez: Yeah. I guess part of me is, “That’s a pretty nice story.” That’s a story where people get to have as much nice stuff or more stuff, and work less. And that just sounds pretty good. So, nice. Yay!
Michael Webb: That’s the story of the last 200 years, right? That’s exactly what’s been happening.
Luisa Rodriguez: Right. Yeah. So, yeah, well done humanity.
Michael Webb: We’ve also done lots of terrible things, obviously. Obligatory pointing out that we’ve, you know, destroyed the planet and so on. But being purely self-interested for right now — and not caring about our grandchildren and not caring about the planet and other stuff — we have more stuff, and we work less, and it’s nice for those people.
Luisa Rodriguez: Well hopefully, all of this surplus that we’re getting is also going to help us solve some of these problems.
Michael Webb: Exactly. 100% yes. You know, the hope is as you ascend Maslow’s hierarchy of needs, you suddenly start being like, “You know what? We can afford to do this now. We do care about the planet, and we do care about…” et cetera. Yeah.
Luisa Rodriguez: Yeah. Is there a dark side of the story? One thing I’d worry about is that, as seems bad now, some people work much harder than others. Some people have much less than others. Is there a story where this exacerbates inequality? Because many people who are wealthy now work less and less, but some people work loads still, and that’s unfair. Or is the overall increasing wealth and productivity going to lift everyone up, such that, at least relative to now, basically everyone’s better off?
Michael Webb: I think the politics really comes in here, and things like the minimum wage become really, really important. You know, there are interesting arguments about whether the minimum wage increases or decreases the total amount of labour demand. I think there’s not a clear consensus on that.
But I think it is pretty clear, at least among labour economists, that if you look at income inequality over the last few decades, particularly at the lower end, a really, really important factor is where the minimum wage is set. So if you’re worried that in a completely free market, the price of human labour would end up at a level where you’d have to work many more hours than you’d like, the government can just do the equivalent of increasing workers’ bargaining power: literally set minimum prices higher. And those people are much happier, right? They’re getting paid more. There are whole debates we could have about the impacts of the minimum wage, and whether it’s good or bad and so on.
But broadly, think about an employer employing workers: the worker produces some output, some amount of stuff, and they get paid. And they certainly have to be paid less than the value of their output — because if they were paid more, the company would literally lose money by employing them, so it wouldn’t. But generally, there’s some amount of surplus that the worker is producing. In some cases, the worker captures most of that; in other cases, they capture very little — depending on how much bargaining power they have, whether there are loads of people just like them, or whether they’re the only person who can do what they do. So this surplus gets divided between the worker and the employer.
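One standard way economists formalise that split (not something Michael spells out here, so treat the numbers as purely hypothetical) is a simple bargaining rule, where the wage sits between the worker’s outside option and the value they produce:

```python
# Simple surplus-split sketch: the wage lands between the worker's outside
# option and the value they produce, set by bargaining power beta in [0, 1].
# All numbers are hypothetical.

output_value = 100_000    # value the worker produces per year
outside_option = 40_000   # what they could earn elsewhere

for beta in (0.1, 0.5, 0.9):  # worker's bargaining power
    wage = outside_option + beta * (output_value - outside_option)
    employer_share = output_value - wage
    print(f"beta={beta}: wage={wage:,.0f}, employer keeps {employer_share:,.0f}")
```

The wage never exceeds the value produced, and the worker’s share rises with bargaining power, which is exactly the lever a minimum wage or a union pulls.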
But broadly, I think there are lots of reasons to think that the wages you can earn as someone with the least skills in society keep ratcheting up over time — whether because of an increasing minimum wage, or because technology increases your productivity, or because of increased education. That last one is the biggest thing, right? People get so much more education these days than they did 20 or 40 or 100 years ago.
There’s a quick side note on that — it’s amazing. A hundred years ago, most people were illiterate. If you go on the Tube in London, the Underground, there are these distinctive coloured patterns on the walls of different stations. The reason they’re there is that most people on the Underground, when it was built, could not read, and therefore needed some other indicator of what station they were at, right? I think we forget how uneducated everyone was 100 years ago.
And in the story of the last 100 years, the US was fastest here: they got their act together by far the quickest of the major economies, in terms of educating people much more at the high school level and beyond. But Europe eventually caught up, and then the rest of the world — we went through this amazing amount of what you might call “educational upgrading.” Not only did people become literate; we’re now in a world where half of people are getting degree-level qualifications.
And I don’t think that will suddenly stop. I think that can keep going and going. Maybe the institutions will change — I certainly personally hope they will — but in terms of whether people can keep getting more education and learning more? Absolutely. And that’s as much about people who currently only get to an age-18 level as it is about those with master’s degrees now getting PhDs, or getting more vocational training. There’s so much more room for every human to learn more stuff.
To avoid going on another huge tangent: obviously GPT-4 and large language models will have a huge impact — very quickly, I think — on education, in terms of how easy it is to learn new stuff, how personalised it can be, how well explained it can be, and all that kind of thing. So I think we’re also, hopefully, about to see a sort of golden age for education. And people who might have been quite low-skilled today or 50 years ago will now be able to get closer to what we currently regard as, you know, a college-educated person.
Everyone’s going to get to that point, as it suits them and the kinds of things they want to do. The kinds of things one might want to cover in a college education might look very different — much less about sitting still and studying a book, and much more about other things. And those things, I think, will be broadly available to everyone. Or certainly that should be one of the first tasks of societies: to make sure that’s the case. And I think we will get there.
Luisa Rodriguez: Yeah. I guess one reason to be pessimistic — which, again, we’ve covered in other episodes, just to add the caveat — is that AI also poses huge risks, from misuse and from unsafe development. But if we were to both align AI and sufficiently prevent misuse, are there things besides those that worry you? Or are we just going to keep trending toward this pretty great-sounding world, where we work less and have more?
Michael Webb: Again, I would emphasise that I’ve just been focusing on the economic picture here. So I think about COVID: we could have sat here five years ago and talked about how great the economy will be in 2020, and yet it was way worse than we expected because of this thing that’s kind of extra-economic. And yet, it caused a huge, huge negative impact on the economy, right?
So what I’m talking about here is: set aside an America–China World War III; set aside the alien invasion, and COVID mark 2, and the AIs taking over, and the asteroid and all that stuff. Assume away all of that. What happens in the economy? I think then there’s a broadly positive story. And I should emphasise it’s positive because I believe pretty fundamentally in the power of democracies — and I guess maybe other forms of government as well; I’m less sure about those — to ensure that humans, being humans, slow things down. The humans actually do successfully slow things down, and that generally benefits some humans more than others, but overall, it keeps things manageable for us.
And I think that will continue, because there are all these natural forces that stop things happening as fast as you might think if you were naively thinking, “Hey, this is possible now, surely it’s going to happen and be everywhere tomorrow.” It’s like, no, it doesn’t quite work as fast as that. It doesn’t work that way.
But let’s step back and say, is democracy going to survive for other reasons? Is there going to be a US–China war, all that kind of stuff? I think I’m probably quite pessimistic about all of that stuff, actually. But on the pure economic side, I think I’m perhaps less worried, having thought about it quite a lot, than others.
Luisa Rodriguez: OK, so we should think of this story as the story of what could be if we manage a bunch of these risks, and the economy does what the economy does. Which is basically relatively efficiently allocate these resources, such that people broadly end up having more, working less, and being better off.
Michael Webb: Correct.
Luisa Rodriguez: Cool. That makes sense. And I really hope we solve these safety issues, because I do find this story just very, very inspiring. It would be pretty incredible to get close to there.
Michael Webb: Let’s hope we get to experience it, yeah.
Career advice for a world of LLMs [02:56:46]
Luisa Rodriguez: Let’s push on to our final topic. I want to talk about some concrete career advice. A lot of our listeners are interested in how the development of AI should influence their own career planning. I guess to start, to what extent should people be keeping all of this AI progress in mind when making their career plans? Is it even a given that they should be?
Michael Webb: So I think I’m going to start with a non-answer that hopefully is still actually interesting, which is: they should totally be taking AI into account by using it to help them with career planning.
When I was thinking about careers as a 20-year-old, you have no idea what these jobs actually involve. You can read the company’s website, and these days you can read the wonderful career profiles on the 80,000 Hours website. But you still often have no idea what’s actually involved in many of these things, right? If you’re lucky, maybe you know someone who’s done the job, but often you don’t.
And now you can literally go on Claude or whatever and say, “Please pretend you are a McKinsey consultant. Do you love your job? Please be honest.” I just did this, and the answer was, “I have mixed feelings about my job as a McKinsey consultant.” It gives you a long list of pros and cons.
Luisa Rodriguez: Oh, that’s really funny.
Michael Webb: So you can do this for all kinds of stuff. The thing I always remember from when I was at that point of choosing a first career is being like, “But what do you actually do all day? You can give me all this amazing, exciting stuff about the impact and the great colleagues and whatever. But what are you actually doing? And what’s the division of your time?”
And these language models actually often know the answers quite well. They’re certainly much better than whatever your first guess is. And so they can help you with that. They can help you brainstorm who to reach out to. They can help you prepare questions to ask someone that you’re having an informational chat with. You can tell them what you’re interested in, and it will suggest careers for you. You can tell it, “You are my career coach. Please figure out what questions to ask me to help me decide what to do.” All these things, it can just do an amazingly good job at out of the box.
And then in terms of how the fact that the world is changing in this way will actually affect the choices you make: I think it will massively influence both the economy and what one should think about doing.
But at the same time, 90% of people are not going to be thinking very hard about it as they make their career plans. So it’s not that you’re going to be completely disadvantaging yourself by not thinking about it. It’s more just like, if you think about it, then you’re instantly in the top 10% of people thinking about these things.
And so, a few concrete thoughts. I think the easiest generic piece of advice is: You should think really hard about how AI is going to be useful and impact the particular things that you care about and might want to work in.
Sure, there’s going to be a shortage of AI researchers directly right now, and certainly alignment researchers. But people doing any other old thing — people in government, people working in all the kinds of career profiles that you like to write about — who know those areas, and who also know about AI in some real detail: you are instantly going to have a very rare and valuable skill set.
So: learning about AI for its own sake, and then thinking really hard and carefully about how it interacts with the kind of work you’re doing — really specialising it to your particular industry, thinking in great detail about particular tasks, and thinking about how a whole production process could be reimagined. And also thinking about the dangers and the risks and the errors — we have not talked much about those; they’ve come up in passing, but there are many podcasts you could have just on that, and I think you’ve had some. Being really thoughtful about the risks in any particular application area, as well as the more general and scary ones: I think you’re going to be so unique and rare and valuable that you’re going to have an instant, massive leg up.
So that’s one generic piece of advice. The other generic piece of advice is: you can also now upskill much more quickly for any particular question or career path. So you’re like, “What does a consultant need to know? What does a grants evaluator need to know? What does a safety researcher need to know?” For all those specific things, you can now use GPT-4 and the like to teach yourself — it gives you a far bigger leg up in that autodidactic exercise than you could get before.
Luisa Rodriguez: OK, great. Good advice. What jobs would be sensible to go into now, because demand for them is going to go up?
Michael Webb: I kind of think of career paths more than specific jobs.
Luisa Rodriguez: Sure. Yeah. I think that’s what I mean.
Michael Webb: And these things will kind of evolve a lot over the coming years, for all the reasons we’ve been talking about.
One theme that’s come up a lot in this conversation is that if you enter a highly regulated industry, you’re going to be safer and less exposed to radical changes than if you enter one that’s not regulated. So if you’re after a nice, sturdy, stable thing, go for the fields with strong unions or professional bodies, like the American Medical Association — you’ll probably have a relatively easier time of things. If, however, you’re someone who wants to lean into risk — which probably many more people should than in fact do, certainly when young — go for the areas with the most change, and the most likelihood of AI actually having near-term impact.
So those are industry-level thoughts. In terms of specifics — what you’re doing day-to-day, your occupation or skills — first, a generic thought: you want to be adding value on top of large language models. So pretend that doctors are not regulated; imagine we can just change things however we want, as a pure market system. In that case, what I’d expect to see very quickly is that GPT or Claude — or whatever fancy startup just raised lots of money to build a health LLM; there are lots of these companies now — can do a much better job than the doctors at diagnosis and prescription and all kinds of other stuff. Particularly the more entry-level GP work, which requires a wide body of knowledge where everything’s changing all the time, where you don’t have much time with people, and where there isn’t that much physical examination either.
In that world, what does a human do on top? Well — again, this is the pretend world where it’s not regulated, so this is not how I claim it will actually happen — but supposing it did, you would probably say we don’t need GPs to have anywhere near as much training as they’re currently required to have, because the algorithm can do a much better job of diagnosing and prescribing. Their job is now really much more about the empathy, and about the small number of things which are more physical — like having to smell you or something: until we have SmellGPT (I’m sure it’s coming), humans have to do that — and about making you feel appreciated and loved and cared for, and all those kinds of things.
Now the thing is — I guess in a sense luckily for us, but also sadly in terms of what it means for wages — if you’re going into a job where all you’re doing is layering empathy on top of GPT in an in-person setting, there’ll be tonnes of demand for that. But that skill of empathy is not that rare, so you won’t get paid that much to do it. It’s nowhere near as rare as being a doctor today: there aren’t many of them, they have to do all those years of training, and they get paid loads and loads of money.
So it’s about adding value on top of language models, where the value you’re adding is somewhat rare. Now let’s be specific: what actually are those things?
I think the first thing is social skills. There’s wonderful work by an economist called David Deming, who’s at Harvard, and his breakout paper a few years ago characterised occupations in terms of whether they require high technical skills or high social skills — in particular, a two-by-two grid of low technical, low social; low technical, high social; et cetera — and looked over the last 30 or so years at wages for people in these different buckets. It turned out that the only kind of jobs that have seen real outlier, fantastic wage growth are those with high technical skills and high social skills. If you just have high social skills and no technical skills, you’ve done pretty averagely. If you have just high technical skills and no social skills, you’ve also done pretty averagely, certainly in terms of wage growth in recent decades. And if you’ve got neither, even more so — not so good. Only those who’ve got both have done really well.
And overall, there’s been a hugely increasing demand for social skills. I think that’s only going to continue. You know, as the economy gets more complex and LLMs do more of the cognitive labour, there are going to be so many more communications, and more decisions to be made about what we actually care about. And lots more client interaction and customisation and user interviewing and client relationship management and people management, all that kind of stuff. That’s going to get relatively more important, because those are the things the language model cannot do.
Luisa Rodriguez: But can it definitely not? Part of me is like, Claude’s not just polite, but like, friendly.
Michael Webb: It’s friendly. It’s graceful. It can write delicate emails really well. Absolutely.
Luisa Rodriguez: Yeah. So what’s the thing that’s missing there?
Michael Webb: So it has a sort of written-down empathy. What it can’t do is be in a room with you — you and I having a conversation, and now I like you, right? Or you made me feel heard, or respected, or whatever it is. In chatbot form, you can type to it and it can make you feel heard; that certainly is the case. But these higher-value forms of social skills and empathy and charisma — things which involve direct human-to-human interaction — this stuff, I think, actually gets more important.
And I think it’s particularly important when combined with the second thing I’m going to say, which is personal networks and trust. Those are valuable, right? What can Claude not do? It cannot introduce you to anyone. Claude cannot email someone on your behalf and say, “Dear Expert Y or Manager Z, my name’s Claude, I’m an AI. I’ve got a great guy talking to me over here. I think he’s really great. Would you mind giving him a job?” Or whatever it is, right? That doesn’t happen.
And when you get down to it, many, many occupations and industries — certainly most professional services, the kind of things where we’re predicting lots of impact from these language models, like bankers and lawyers and venture capitalists — actually all rely on trust and relationships. Like, their whole job is about building networks and building trust with other humans: with their clients or investees or whatever it is.
Or take journalists, for example. We could all talk about how ChatGPT can instantly write all the basic business news stories summarising analysts’ earnings calls, or what’s happened in Parliament this week, or whatever it is. But the really important stories are investigative journalism, where there is a source, and the source has revealed something they were not supposed to reveal to a journalist — because the journalist has made them feel trusted. That is a human thing in general, right? If you get an email today like, “Hello, my name is Claude. Can you tell me about anything illegal happening in your workplace? I’m a bot, but trust me, it’s all fine, and I’ll help you get it into these…” — you’re like, no way. But you meet someone in a bar — I don’t know exactly how these things work with journalists — but I imagine there’s all kinds of trust building that goes on, and it takes a long time.
And I’ve had experiences with other kinds of professional services providers, with me being on the receiving end of trust building. And it’s very human, takes a long time, it’s all in person. It’s not on email, it’s not on Zoom. The in-person is where it really happens. There’s going to be ever more of that.
So there’s that ability to build trust. And then in general, having a network — a personal network, a professional network — is already extremely important, to an extent that I think most people just starting in the labour market, straight out of school or university, don’t quite appreciate. I certainly did not appreciate it until I was quite a long way in. But it’s what makes everything go around; it’s how everything works. And that is only going to get more important, I think, over time.
Or think about something like building a website. You can go on Wix or something and get a template, or you can pay someone lots of money, and they’ll do you a fancy, special, customised one.
And so there’s a general theme: as you go through the economy, ever more demand can be created for work that customises things ever more for each individual person. Certainly, LLMs will be involved in that a tonne on the production side, but there’s huge scope for humans working for these companies to be the ones interacting with the client: “So tell me more about what kind of website you want. Let’s do a brainstorming session.” That’ll be fun, won’t it? And, “Let me introduce you to the right people in the bank to help you with this important transaction you’re doing.” Or whatever it is. For those kinds of jobs, where you’re basically schmoozing and building trust with people, there’s limitless demand.
Luisa Rodriguez: Cool. Wow, that’s loads of things, but it sounds like the theme is: charisma, networks and trust, general people skills, things that require people to be like, “I like you and I trust you.”
Michael Webb: Yes.
Luisa Rodriguez: Yeah, is there any more on that?
Michael Webb: Beyond social skills and trust and that sort of thing, there’s another category here which is around management. We’ve touched a bit on it, but I think managing teams of AIs is going to look a bit different from managing teams of humans, and we’re going to need lots of both. But I think the thing that’s really new here is how do you manage…
Luisa Rodriguez: Teams of AIs.
Michael Webb: All these swarms — hopefully not literally swarms, but you know… Whatever you’re doing — you’re a lawyer or you’re a researcher, whatever — you’ve now got thousands and thousands of paralegal or research assistant equivalents. What do you do with them, and how do you orchestrate them? There’ll be plenty of places throughout the economy where people figure out standardised ways of orchestrating that kind of work, and it’ll be embedded in software or have performance guarantees and whatever.
But there’ll also be tonnes of the economy — the more creative parts; I very much include artists here, as well as the more creative kinds of researchers or indeed lawyers — where it’s like, no, you can’t use software for this. You can’t use a pre-baked prompt with well-understood inputs and outputs: your job is to figure out a completely novel problem, right at the edge of the art. You’ve got all these very smart RA-equivalents. What do you ask them? What do you do with them? How do you check whether what they’re telling you is correct, in this non-understood area where you can’t have performance guarantees because it’s brand new?
So that will be a huge, really important skill. To be honest, it’s already an incredibly important skill today, particularly in research. If I were in the middle of writing that paper we discussed — about the impact of AI on the job market — right now, it would be so much easier to write in so many ways with access to GPT-4. It would be much higher quality, and it would take probably a third or a fifth of the time.
And so if you’re a researcher and you’re not using these tools, or figuring out how to use them right now, then people who are thinking about that are probably going to outcompete you quite quickly — at least if they’re competing directly with you in some research niche or whatever it is.
Luisa Rodriguez: Cool. OK, I guess flipping the question around: What jobs will not be sensible to go into now, because they’ll be more automated soon, maybe completely?
Michael Webb: That’s a trickier one, because there aren’t that many jobs which consist only of the things that GPT-4 can do. But there are examples. There was a news story recently about a bunch of people in the US, and often in the Philippines as well, who were content writers — very much minimum-wage-type content writers, in sort of content factories — and it was very direct and very clear: they were laid off and told it was because ChatGPT is better than they are. Because they were outsourced labour; it was just text in/text out. And it was like, we don’t need that any more. So people shouldn’t go and do that.
But I think there are tonnes of writing jobs where you can now be even better because of GPT, and you can earn more money or have a better career because you’re bringing a really important extra thing. One thing we haven’t mentioned, but that’s really important, is that what you often bring is the context. We obviously have to give context to these algorithms when we ask them questions. And a lot of the time, the context is, “What do I want?” It’s an effort of imagination to grill yourself first about what you want, and then put that into the algorithm.
And if you’re in an organisation of some kind doing work, or indeed, working by yourself to do something of value, there’s going to be a tonne of context out there that the algorithm is not going to… You know, it by default pays attention to nothing, but it could pay attention to literally anything. And so you’re pointing it to a bit of the space of things it should pay attention to. And that might just be things that it already knows about, but also might be things that are just happening in the world, or things that are secrets only you know, because you’ve spoken to someone who knows something, and that’s not on the public internet yet, or whatever it is.
And again, there’s a lot of that in the world. A huge amount of economic value is people knowing things before other people know them, or having other context that is not public. Huge, huge parts of the economy run on things that are secrets, basically, or not publicly available. You, as a human, can build up your stock of secrets.
And those secrets could be that you did some user interviews: you persuaded some humans to talk to you when no one else could get them to, you asked the right questions, and now you know a lot about this kind of person — which is really helpful for building products for them. Or you’ve been talking to politicians about what they really care about in this area of regulation, and they would not talk to anyone else, but they talked to you, and now you’re the only person who knows what these politicians think about this question.
Again, Claude cannot do that. GPT-4 cannot do that. Only you can do that. Maybe they’ll help you write the questions, but only you can get the context. So areas where the scarce thing is the context — and there are many — feel to me like a pretty safe bet for the future.
Luisa Rodriguez: Nice, that makes sense to me.
Relieving talent bottlenecks [03:16:53]
Luisa Rodriguez: Moving on to another topic, we actually haven’t talked about what you’re working on now…
Michael Webb: Yeah. So what I’m working on now is actually a secret project that I can’t yet tell you about, but I can tell you a little bit about the motivation and wider context.
Luisa Rodriguez: Wow. That sounds great. And what a way to pique listeners’ interest.
Michael Webb: Yeah. Apologies; I’m not trying to do that. In government, one of the things one does is work on strategies. The government writes lots and lots of strategies — you end up with strategies coming out of your ears. I was involved, tangentially or very directly, in the UK government’s National AI Strategy, National Quantum Strategy, UK Innovation Strategy, National Space Strategy, the Integrated Review of Security, Defence, Development and Foreign Policy — all these different things. And I started to notice a pattern, which was that in pretty much every single one of these strategies, pillar one, paragraph one, clause one was always: the most important thing is talent. We need more people with the right expertise in this particular area — whatever it is — throughout the economy. That might be in companies, in startups, in university labs, in corporate labs, in the government itself. And that was always the most important bottleneck.
So I did a lot of work figuring out how to solve this. Sitting in government, you might think you have all kinds of levers to pull to help solve talent problems. And we did a lot of work on the “easy” thing you can do, which is making it easier for people with very advanced skills to come to the UK from other places. But there are only so many of those people, and every other country is also trying to attract them. So you have to do much more to actually build and grow the talent right here.
So I did what seemed the obvious thing, which was going to the existing higher education sector — because we’re really talking here about expertise at the level of at least a master’s degree, if not a PhD and beyond — and had all kinds of meetings with vice chancellors and people like that. And, you know, our universities are amazing; they are one of the jewels of this country and the world. But it’s also the case that they are not really set up to help with this problem at the requisite scale.
So my headline of what we need is: more experts, fast. For any cause area, if you like — within government we’d call them policy areas; here maybe we talk more about cause areas — take AI safety as an example: what we need is more people who can do this work, do this research, as fast as possible.
And you go talk to universities — I’ll try to avoid naming names — and there are a few problems they have. One is that they often don’t actually have the expertise. In many areas, like all the ones covered by those strategies, the cutting-edge technologies — the places with the most societal importance and urgency — are also the places getting the most private-sector investment anyway. And often the cutting edge is not in the universities: it is in corporate labs.
AI is the most obvious example. What this means is that a bunch of great researchers in these areas were, 10 years ago, at — I won’t name particular universities, but the ones you would obviously think of as “the really good ones.” But then they left, and they’re all at DeepMind and similar companies now. Right? Why would you stay in a university when you can get paid 10x+ as much, have way more resources, and be on the cutting edge? So many of these universities have really been sucked dry of the best talent and the people who can do the teaching.
The other problem is that they are governed, generally, by the faculty as a broader body. And because they face constraints — physical space constraints, for the most part — if you want more, say, computer scientists (which you may or may not think is a good idea, but suppose you think it’s important to train more computer scientists), that means some professor of English literature has to say, “That’s great. Yep. Let’s go for it. I will happily accept fewer English students in order to make room for more computer scientists.” At least that’s the way it works in the UK. And believe it or not, that generally doesn’t happen, because people don’t want to say, “Sure, let’s have fewer of my people and more of your people.” And because of the governance, these are the people who actually make the decisions.
I will name one name. I believe these numbers are correct. Last year, Oxford University accepted something in the region of 300 people to read English Language and Literature, and something in the region of 30 people to read Computer Science — across the entire university at undergraduate level. It’s completely crazy, I think, to most people.
Luisa Rodriguez: Whoa. Yeah. That’s wild.
Michael Webb: Right. And then another quick example — I picked on Oxford, so let’s pick on Cambridge as well. How many people do you think can be accepted onto Cambridge University’s course on NLP — natural language processing, the bit of AI that ChatGPT and all that stuff originally came out of? How many places are there on that course? Fifteen. Per year. So this is not going to solve what we — as a country, as the world — need to do. And these courses… again, just to pick on Cambridge: this course is based on a textbook from 2008. So it’s not only pre-large language models; it’s pre-deep learning. It’s incredibly out of date.
And so I decided, after lots of banging my head against these thousand-year-old immovable brick walls, that it would be easier to solve this problem by leaving government and building something new that directly solves it than by staying in government and trying to persuade other people to solve it — which was not going very well. I can’t say much more at this point, because it’s all kind of under wraps, but I hope that gives a sense of what we’re building. The one other thing I’ll say is that, probably by the time this goes out, I’ll be hiring.
Luisa Rodriguez: Oh, amazing.
Michael Webb: So if you’re interested in learning more or being involved, please get in touch. My email is on my website, which is michaelwebb.co.
Luisa Rodriguez: That’s really exciting. I can’t wait to hear what that is, but it sounds like relieving some of the talent bottlenecks maybe around… is it STEM issues in particular?
Michael Webb: Yeah. Well, I think we’ll start with AI, and AI safety in particular, because that’s where it feels like there are the most important bottlenecks right now.
Luisa Rodriguez: Great. I would agree. I guess we’ll all just eagerly wait for whatever that is to become public. But thank you for the teaser.
A musician’s take on AI music [03:24:17]
Luisa Rodriguez: So this has been a marathon interview. We should go to our final question now. You’ve been a choral conductor and an organ scholar, and I think this is super cool. I have sung in choirs for many years, and I love music quite a lot. It seems like we might have that in common. And I don’t really know what I think about AI creating music yet. I’m assuming that you’re pretty passionate about music. Do you have a take on how you’ll feel about AI music once it starts getting really good?
Michael Webb: I think it already is really good, under some definition of “really good.” AI systems can already produce music that experts can’t distinguish from the real thing in many genres, and that’s only going to continue. So in a sense, composers are not the scarce thing — at least compositional ability isn’t: that sort of pure “can you replicate some style?” skill, or whatever it is.
The scarce thing — well, for one, what I love about music is the live performance. In choral music, that’s often in a concert hall, or part of a church service. And there’s a choir: people who come together every week, or indeed every day in some cases. That is something very, very special. Some of these choirs, in Oxford and Cambridge colleges, have not missed singing a service for 800 years. There’s this one continuous thing, and they’ve been singing some of the same music for 800 years — alongside a whole bunch of newer stuff as well.
And as a human, I really value that. And I value this huge, fascinating tradition — both of the music and where it’s come from and of the kind of people who’ve been singing it. And it’s a religious experience, listening to this music. Or call it what you want, transcendental. And listening to it through your headphones, through a recording, it’s definitely not quite the same as being there, particularly in these very special times and places.
So on the performance side — sorry to jump back into the meat of the podcast — I think it’s very clear that we’re going to strongly want to see ever more human performers doing the performing arts live. That’s been the trend for a very long time.
But back to the AI music, we’re seeing composers who’ll be playing around and doing cool stuff using AI tools. You could imagine — and we’re kind of already here, with Spotify being able to just, on the fly, generate some music that it thinks you will like based on your listening history —
Luisa Rodriguez: Which I love.
Michael Webb: Right, which many people love. And Spotify loves it, because it means they don’t have to pay royalties to real musicians: they can just use the AI generator and not pay for it. It’s pure money for them. So I’m sure that will continue happening, and there’ll be lots more of it.
But again, because there’s so much of that — there’s literally unlimited quantities of that — the stuff that becomes ever more scarce, and therefore valuable, is the human. Like, the human wrote this, and here’s the backstory of the human, and here’s why they wrote it, and here’s how it links to things they’ve written before, and here’s how it interplays with this human’s relationship to other music that we all also know and love. And so on and so forth.
And that’s what you see today in, I think, basically all art forms. Precisely because AI-generated art is unlimited — we can all get as much of it as we want at all times — the human-made stuff, as in many other areas, becomes ever more interesting and valuable when set against that unlimited sea of undifferentiated (or possibly differentiated, in a way one can’t get one’s head around) AI-generated content.
Luisa Rodriguez: Our guest today has been Michael Webb. Thank you so much for coming on the podcast, Michael.
Michael Webb: Thanks for having me. It’s been a real pleasure.
Rob’s outro [03:28:47]
Rob Wiblin: If you liked the sound of the new initiative Michael described to really increase the number of experts in AI safety, or you’re generally interested in accelerated learning, or in AI safety or education as cause areas, then the exciting news is that Michael is now hiring. You can find out more about the new org at quantumleap.education, that’s quantumleap.education. We’ve put a link in the podcast description.
If you liked that conversation, two related interviews Luisa did earlier in the year are:
- #150 – Tom Davidson on how quickly AI could transform the world
- #146 – Robert Long on why large language models like GPT (probably) aren’t conscious
And some interviews I’ve done which you might like are:
- #158 – Holden Karnofsky on how AIs might take over even if they’re no smarter than humans, and his four-part playbook for AI risk
- #155 – Lennart Heim on the compute governance era and what has to come after
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
The audio engineering team is led by Ben Cordell, with mastering and technical editing by Simon Monsour and Milo McGuire.
Additional content editing by Luisa Rodriguez and Katy Moore, who also puts together full transcripts and an extensive collection of links to learn more — those are available on our site.
Thanks for joining, talk to you again soon.
Related episodes