Transcript
Cold open [00:00:00]
Ezra Klein: My view is you try to slow this down, to the extent you do, through forcing it to be better. I don’t think “We’re going to slow you down” is a strong or winning political position. I do think “You need to achieve X before you can release a product” is how you slow things down in a way that makes sense.
So I think it would be possible to win a political fight that demands a level of interpretability of AI systems that basically renders the major systems null and void right now.
Maybe explainability, interpretability is not possible. But it’s an example of something where if Congress did say, “You have to do this, particularly for AI that does X,” it would slow things down — because, frankly, they don’t know how to do it yet.
Rob’s intro [00:00:44]
Rob Wiblin: Hi listeners, Rob Wiblin here, Head of Research at 80,000 Hours.
Over at The New York Times, Ezra Klein has been producing some great content on artificial intelligence this year, so I asked him to come share his opinions on a number of the high-level strategies for regulating AI that have been getting a lot of play recently.
I think he had some useful takes on what approaches are more or less viable, which are likely to be more or less effective, and what’s necessary to make any of this happen. Oh, and some helpful advice on dealing with sleep deprivation when you’re the parent of a young child.
One quick announcement first. If you liked #149 of this show – Tim LeBon on how altruistic perfectionism is self-defeating, or #100 – Having a successful career with depression, anxiety and imposter syndrome, then I can strongly recommend looking on our second podcast feed, called 80k After Hours, for our new interview with Hannah Boettcher about the mental health challenges that come with trying to have a big impact.
People on the team here loved it. It’s over on 80k After Hours rather than this feed because it was made for people with a big and serious interest in effective altruism in particular, which I know is only a fraction of the people listening to the show these days.
All right, without further ado, I bring you Ezra Klein.
The interview begins [00:02:13]
Rob Wiblin: Today, I’m speaking with Ezra Klein. To an audience of podcast fans, Ezra probably needs little introduction. He first rose to prominence in the mid-2000s for his individual blogging before being picked up to blog for The American Prospect and then The Washington Post. In 2014, he cofounded vox.com, where he worked as editor-in-chief, and hosted the enormously popular podcast The Ezra Klein Show. In 2020, he moved to The New York Times, where he continues to produce The Ezra Klein Show and now also writes regular columns.
Thanks for coming back on the show, Ezra.
Ezra Klein: Happy to be here.
Tension between people focused on existing AI harms and those focused on future AI harms [00:02:42]
Rob Wiblin: I hope to talk about what governments and labs ought to be doing differently in light of recent advances in AI capabilities.
But for once this time I’d actually like to start with a question we got from a listener which reads: “What does Ezra make of the tensions between people focused on existing harms caused by AI and people focused on harms that could occur in future? It’s odd to me because in most areas, people who are focused on different harms that spring from the same thing are naturally political allies, because often there will be policy responses that can help address both concerns simultaneously.”
Ezra Klein: That’s interesting. I’d want to think more if I think that’s true, that people who are focused on harms from the same thing are often allies. I often find that the deepest political divisions are between the people nearest to each other on the political spectrum. So I would not be surprised if it’s a more generalisable problem than you think.
But what you’re talking about here, as I understand, is the tension between the AI ethics and the AI risk communities. And in particular, the sort of longtermist community worried about superintelligent AGI, and the people worried about biased AI, disinforming AI, et cetera. I think you do have there things that on one level could be natural alliances.
But one place where maybe that question is missing some of the argument is that they’re not focused on problems from the same thing. In fact, they’re arguing about what kind of thing we are facing. I take the critique of at least many of the AI ethics people as being, “You longtermists, who keep saying we’re going to invent superintelligent AGI that can destroy the entire world, are, in fact, wittingly or not, participants in a ridiculous hype system that is funnelling money to this set of two or three or five companies. And on the one hand, maybe making more likely the thing you fear, but at any rate, distracting people from focusing on the things we should actually fear.” And vice versa, I think that there’s a critique within the more longtermist community that, “Yeah, sure, algorithmic bias might be a problem, but it’s sure a pretty small problem if you’re weighing it up against ‘this is going to kill everybody.'” And then there are just, I think, cultural frictions between the two communities.
The way I think AI regulation is going to happen is: something is going to go wrong. There is going to be some event that focuses attention again on AI. There’s been a sort of reduction in attention over the past couple of months. We’ve not had a major new release in the way we did with GPT-4, say, and people are drifting on to other topics. Then at some point, there will be a new release. Maybe DeepMind’s Gemini system is unbelievable or something.
And then at some point, there’s going to be a system powerful enough or critical enough that goes bad. And I don’t think it’s going to go bad in a, you know, foom and then we’re all dead — or if it does, you know, this scenario is not relevant — but I think it’ll go bad in a more banal way: somebody’s going to die, critical infrastructure is going to go offline, there’s going to be a huge scam that exploits a vulnerability in operating systems all across the internet and tons of people lose their money or they lose their passwords or whatever. And for Congress, which is nervous, that’ll be the moment that people begin to legislate.
And once you get into a process where people are trying to work towards an outcome, not just position within a debate, I suspect you’ll find people finding more points of common ground and working together a little bit more. I already feel like I see, from where we were six or so months ago, people coming a little bit more to Earth and a little bit nearer to each other in the debate. Not every loud voice on Twitter, but just in the conversations I’m around and in. I think you’ll see something like that eventually. I just don’t think we’re there yet.
Rob Wiblin: If legislation is going to happen here through this kind of crisis model, where something goes really obviously wrong and that causes everyone to just agree that there’s at least this one problem that has to be solved, what does that imply about what people who are worried about these issues should be doing now?
I guess one approach you might take is just to have a whole lot of quite ambitious ideas in your drawer that you’re ready to pull out if your predictions about the ways that things could go wrong do actually play out in some way, and then people are going to be very interested to hear what ideas you have for them.
Ezra Klein: Yeah. You need a couple things. You need ideas on the shelf, not in your drawer. Don’t put them in your drawer: they need to be on a shelf where other people can reach them, to shift the metaphor a little bit here. You need ideas that are out there.
So this is a governing model that in the political science literature is called “punctuated equilibrium”: nothing happens, and then all of a sudden, it does. Right? All of a sudden, there’s a puncture in the equilibrium and new things are possible. Or, as it’s put more commonly: you never let the crisis go to waste. And when there is a crisis, people have to pick up the ideas that are around. And a couple things are important then: One is that the ideas have to be around; two is that they have to be coming from a source people trust, or have reason to believe they should trust; and three, they have to have some relationship with that source.
So what you want to be doing is building relationships with the kinds of people who are going to be making these decisions. What you want to be doing is building up your own credibility as a source on these issues. And what you want to be doing is actually building up good ideas and battle-testing them and getting people to critique them and putting them out in detail. I think it is very unlikely that AI regulation is going to come out of a LessWrong post. But I have seen a lot of good ideas from LessWrong posts ending up in different white paper proposals that now get floated around. And you need a lot more of those.
It’s funny, because I’ve seen this happening in Congress again and again. You might wonder, like, why do these think tanks produce all these white papers or reports that truly nobody reads? And there’s a panel that nobody’s at? It’s a lot of work for nobody to read your thing and nobody to come to your speech. But it’s not really nobody. It may really be that only seven people read that report, but five of them were congressional staffers who had to work on this issue. And that’s what this whole economy is. It is amazing to me the books that you’ve never heard of that have ended up hugely influencing national legislation. Most people have not read Jump-Starting America by Jonathan Gruber and Simon Johnson. But as I understand it, it was actually a pretty important part of the CHIPS bill.
And so you have to build the ideas, you have to make the ideas legible and credible to people, and you have to know the people you’re trying to make these ideas legible and credible to. That is the process by which you become part of this when it happens.
How to slow down advances in AI capabilities [00:09:27]
Rob Wiblin: Back in March, when you interviewed Kelsey Piper, you were kind of positive on the idea of just trying to slow down advances in AI capabilities so that society would have more time to notice the problems and fix them. Do you have any view on what might be the best mechanism by which to slow down the rate at which the frontier advances?
Ezra Klein: My view is you try to slow this down, to the extent you do, through forcing it to be better. I don’t think “We’re going to slow you down” is a strong or winning political position. I do think “You need to achieve X before you can release a product” is how you slow things down in a way that makes sense.
So I’ve used the example — and I recognise this example actually may be so difficult that it’s not possible — but I think it would be possible to win a political fight that demands a level of interpretability of AI systems that basically renders the major systems null and void right now.
If you look at Chuck Schumer’s speech that he gave on SAFE Innovation — which is his pre-regulatory framework; his framework for discussion of a regulatory framework — one of his major things is explainability. And he has talked to people — I know; I’ve been around these conversations — and people told him this may not be possible. And he’s put that in there, but he still wants it there, right? Frankly, I want it too. So maybe explainability, interpretability is not possible. But it’s an example of something where if Congress did say, “You have to do this, particularly for AI that does X,” it would slow things down — because, frankly, they don’t know how to do it yet.
And there are a lot of things like that that I think are less difficult than interpretability. So I think the way you will end up slowing some of these systems down is not, you know, “We need to pause because we think you’re going to kill everybody” — I don’t think that’s going to be a winning position. But “You need to slow down, because we need to be confident that this is going to be a good piece of work when it comes out.” I mean, that’s something we do constantly. I mean, in this country, you kind of can’t build a nuclear power plant at all, but you definitely can’t build one as quickly as you can by cutting all the corners.
And then there are other things you could do that would slow people down. One of the things that I think should get more attention — I’ve written about this — or at least some attention is a question of where liability sits in these systems. So if you think about social media, we basically said there’s almost no liability on the social media companies. They’ve created a platform; the liability rests with the people who put things on the platform. I’m not sure that’s how it should work for AI. I think most of the question is how the general underlying model is created. If OpenAI sells their model to someone, and that model is used for something terrible, is that just the buyer’s fault, or is that OpenAI’s fault? I mean, how much power does a buyer even have over the model? But if you put a lot of liability on the core designers of the models, they would have to be pretty damn sure these things work before they release them, right?
Things like that could slow people down. Forcing people to build things to a higher standard of quality or reliability or interpretability, et cetera — that is a way of slowing down the development process. And slowing it down for a reason — which is, to be fair, what I think you should slow it down for.
Rob Wiblin: You’ve now brought up most of the different kinds of regulatory philosophies that I was going to ask about. So maybe we can go through them one by one.
On the liability one, it’s a really interesting question to me. So if a company trains a model that then is used by someone else down the line to successfully create a bioweapon, or successfully harm people in a really big way, who should be legally accountable for that? I think currently, our idea with product regulations is that, if you manufacture a weapon and then someone uses it, it’s the person who uses it who’s responsible; you’re not on the hook. But maybe the incentives for these firms would be a lot better, and a lot more aligned with society, if we said, “No, if you train and release a technology that is then used to harm people in a massive way, you’ve been negligent, and you should be held accountable in some legal framework for the harm that has resulted from your decisions of what to do.” What do you think of that?
Ezra Klein: The way a lot of legal regimes work around questions like this is they put a lot of weight on words that are like “reasonably” or “predictable” or something like that. So if you think about liability in this context, even if what you were doing was shifting liability a little bit back onto the core model builder, I think the way it would work is not to say that anything that happens is their fault. But it would be some language like, “Anything that happens that reasonably should have been predictable” — or prevented or tested for — is their fault. And then you would have functionally court cases over what is “reasonable,” which is what you have all the time in different areas of law. And I wouldn’t try to decide that perfectly at the outset.
But I think what you would think about as a company, if something like that happened, is you would say, “We need to have done a level of red teaming here that, if a court needs to see what we did, it is extremely impressive, it is extremely thorough. And if they see it and it’s not impressive, we could be on the hook for a lot of money.”
And so I think it would be crazy on some level to create a level of liability that OpenAI or Google or whomever is liable for anything that is done with their models. But this is a place where we actually have a lot of experience in consumer law. I mean, if I pick up my microwave and I hit you with it, my microwave maker is not on the hook for that. If my microwave blows up because they made it poorly, they actually are. And the difference there is that they don’t need to take into account that somebody might pick up the microwave and use it in a totally unintended way to bash somebody else’s head in. But it is on them to make sure that the thing doesn’t explode if my four-year-old begins pounding his hand on all the buttons all at once.
So I don’t think this is actually as weird as sometimes people suggest. Around consumer products, we have a lot of experience saying that this has to be pretty well tested to not create a problem under normal parameters, even of misuse or imprecise use. And it’s actually social media, I think, and the internet, that got this huge carve-out from liability and slightly reset people’s expectations — such that it’s like, well, things that are digital, the core company has almost no relevance to it. But that’s not how we’ve done other things.
Rob Wiblin: Yeah. I think it’s interesting that today, if one of these models was used to create a bioweapon, then I’m not sure the release would pass a commonsense standard of reasonableness and foreseeability — or at least I’m not sure the company would get off the hook. You could say, “People were shouting about how this was a potential risk. It was all over the media. And there’s all of these jailbreaks that allow people to get around all of the controls that you have on your model. So maybe you have been negligent here in releasing this product in this way.”
Ezra Klein: I don’t think there’s any doubt. I mean, there was no doubt in my mind at least, that if these models currently were good enough to provide real help at building bioweapons — which I don’t think they are — they’d be negligent to be releasing them in their current forms. I just think that is a totally clear thing. They know they cannot protect their models from being jailbroken. So the saving grace here is the models are not good enough to actually do that much damage if they’re jailbroken. But if they were, then you cannot release a model that can easily be jailbroken.
That is what a liability standard like that is trying to get at: It is on you to make sure you can’t do this. And if you release something where actually we know, when we do discovery, it turns out there are a bunch of emails, you know, inside OpenAI, where people were like, “I don’t think this is quite ready. We still think there are a lot of ways to jailbreak it,” but, you know, the leadership is like, “No. We gotta get this out. We gotta beat Google to market.” That’s where we get into a lot of trouble.
Rob Wiblin: So that’s creating incentives through the tort law system, perhaps.
A different philosophy often is called independent auditing, evaluations, licensing, and so on. And on that approach, basically, before a product could go to market, before a model could be trained and released, it might have to be sent to a third-party auditor who would actively try to get the model to do something, like spread autonomously to new servers in order to avoid being turned off, or to help someone produce a bioweapon, or commit crimes of various other kinds if it’s instructed to do that. And if it could be successfully made to do those things, then it’s determined that it’s clearly not yet ready for the general public, and it would just have to go back for further refinement to fix those problems. What do you think of that broad approach to regulation?
Ezra Klein: I think the question there is it depends on how good we think the auditors are, where that auditing happens, and just how much we believe there’s a process there that can stay ahead of systems that are getting released — even as we don’t understand them, number one. And then as we get systems that have more working memory — so there are systems that are learning post-release — how are you auditing a system that is changing in theory in a dynamic way on the market? I learn things every day. Right now, the systems don’t really learn things every day, or at least a lot of them don’t. They’re not reabsorbing the data of my conversation with them and using that to get smarter. But if they were, or if they were doing that in real time rather than kind of in batches, what would that imply for the auditing?
So I think auditing is a good idea. And to a point I was making earlier about building institutions, I think you want to think about building institutions for things like auditing, and you want to get a lot of talent into things like auditing. But I’ve talked to some of the auditors, and I personally am very far from convinced that we understand these models well enough to audit them well. And if you believe what is basically lurking in your question — which is huge exponential continued curves in model capability — then I’m even more sceptical.
So I’m not sceptical of the idea in a theoretical way: if we could audit, auditing is great. I am sceptical in practice. I’m a little worried about basically audit-washing AI capabilities. Like, “Oh, this went through audit, so now we know it’s fine.” Like, do we? How would we know that? So that’s a little bit of my concern there. And that’s a place where we probably just need to do a lot more work and research, and spend money and get great people into that field, and so on.
Rob Wiblin: If that’s right, though — that we can’t tell what these models are capable of doing, and they’re constantly changing, so it’s a moving target, so we’re never really going to have solid answers — isn’t that completely alarming? It seems like that itself should give us massive pause about rolling these things out.
Ezra Klein: I mean, I do think it’s quite alarming. I don’t know what to tell you. I think that the place where it’s very alarming is if you believe in a very, very rapid capabilities curve. And this is a thing that I’m currently watching to see. I don’t know when GPT-5 is coming, or Gemini is coming, or whatever. I want to see if the next jump is big. I’m not totally convinced yet that, at least on the large language models, it will be. And so I’m just interested to see that.
Because one thing that I think lurks in the head of the AI risk people is foom, right? This constant sense that we’re going to be on this curve, that it’s going to get better so quickly we can’t keep up with it. If that’s not true, then actually auditing makes a tonne of sense. If it is true, then yeah, we’re in a really weird place, where probably we don’t have a lot of very good policy options.
Rob Wiblin: Yes. I think it’s just a really open question whether we’ll see the rate of progress speed up or slow down, and both seem like really live options.
Ezra Klein: Policy does not stay ahead of exponential progress curves. Let me just say that as a flat finding from my years doing this work: policy is a lagging field.
How policy actually gets done [00:22:02]
Rob Wiblin: On that point of general lessons that you’ve learned from following policy debates, I imagine you’ve probably seen a lot of cases of ideas being turned into legislation, and then gradually being converted into agencies, which then actually have to take actions that impact people. Have you learned any general lessons about what factors people need to keep in mind at the idea-generation stage that seem relevant here?
Ezra Klein: Yes. But I’m going to do this in a weird way. Let me ask you a question: Of the different proposals that are floating around Congress right now, which have you found most interesting?
Rob Wiblin: Hmm. I guess the interpretability stuff does seem pretty promising, or requiring transparency. I think in part simply because it would incentivise more research into how these models are thinking, which could be useful from a wide range of angles.
Ezra Klein: But from who? Whose package are you most interested in? Or who do you think is the best on this right now?
Rob Wiblin: Yeah. I’m not following the US stuff at a sufficiently fine-grained level to know that.
Ezra Klein: So this is the thing I’m getting at here a little bit. I feel like this is a very weird thing happening to me when I talk to my AI risk friends, which is they, on the one hand, are so terrified of this that they truly think that all humanity might die out, and they’re very excited to talk to me about it. But when I’m like, “What do you think of what Alondra Nelson has done?” They’re like, “Who?” She was the person who ran the Blueprint for an AI Bill of Rights. She’s not in the administration now. Or, “Did you read Schumer’s speech?” No, they didn’t read Schumer’s speech. “Are you looking at what Ted Lieu is doing?” “Who’s Ted Lieu? Where is he?”
And one answer to your question in terms of how policy gets done is it gets done by policymakers. I am really struck, and have been now for many months, by the distance between the community that understands itself as so worried about this and policymakers. That they’re not really trying to reach out; they’re not really trying to familiarise themselves with them. And so what you actually have happening — which I don’t really think is great — is a weird level of reliance by the policymakers on the people building the AI systems right now. Where like, who does Biden have to talk to? You know, he talks to Sam Altman. He talks to Demis Hassabis. He talks to other people making the systems.
So one just very basic thing is that there is a beginning right now of this kind of relational, what gets called on the Hill an “educational phase.” So what Schumer really announced was not that he’s going to do interpretability or anything else, but that he’s going to convene what are functionally a series of forums through which he’s going to try to get himself and other members educated on AI. And if I was worried about this around the clock, I would be trying to get my people into these forums. I’d be trying to make sure Chuck Schumer’s people knew that they should be listening to us. And this person in particular: we think this is the best articulator of our concerns.
I would just say that it is unbelievable how human and relational of a process policymaking is. It is crazy how small a number of people they rely on. It is just nuts that a key policy will exist just because, like, the person in charge of the subcommittee happened to know this policy analyst going way, way, way back. And that’s a big part of it. I think that, weirdly, there’s a lot more interest right now in people wanting to talk to other people who share their level of concern, and they’re not really enjoying, or not really engaging that much in, the process of trying to get beyond that.
I know you’ve been in a little bit of a spat with Tyler Cowen. I saw you tweet, like, “The people who are worried about x-risk have won, and we don’t need to talk to the deniers anymore.” And he says, “No, they haven’t.” And I’ll say I’m a little bit more on his side of the “No, they haven’t,” but even putting that aside, the question, really — which actually a lot of us don’t even know the answer to — is: What even do the key members of Congress here believe? What are their intuitions? Who needs to be convinced? Because a couple members of Congress are going to be the people all the other members of Congress listen to on this. And I just cannot emphasise enough to people who’ve not covered policy, which I have for many years: on everything, it ends up being that, like, seven people end up mattering. And it’s really important to identify the seven people and then figure out who they’re listening to.
Rob Wiblin: Yeah. The message I was trying to send with those tweets that you’re referring to was that my impression was that, for me, as someone who’s been worried about this for 10 or 15 years, there’s now been such an increase in awareness and concern among other communities about the possibility that AI could go really wrong, that now I feel there’s a sufficient level of interest and concern that it’s possible to make a whole lot of progress potentially. And that rather than try to convince everyone to go from like 50% support to 100% support, people should be trying to come up with ideas now, trying to actually come up with concrete ideas for what people ought to be doing, and harnessing the support that is out there.
Do you think that is a kind of sensible attitude to have? That enough people are troubled and on board now that useful work can be done, and it doesn’t all have to be advocacy in the way that it used to be?
Ezra Klein: I do think a lot of useful work can be done. But I’ve seen and covered so many things where you would have thought the consensus for action had existed for a very long time, and yet nothing happened year after year after year. So this feels a bit like that to me right now. When I listen to the policymakers, what I would say in general is there is much more fear of slowing innovation down, or getting the wrong regulation in place, than there is of what happens if innovation moves too fast.
So if you look at, say, Schumer: I think the single most important statement here is Schumer’s speech, because that’s the Senate majority leader, and he’s taken a personal interest in this issue. And he calls it “SAFE Innovation.” His point is that the innovation has to be the thing we protect. And I’m not saying he’s wrong on that, but I do think that’s an interesting signal. He is more worried, I think, in a lot of ways, that you will get the innovation side of this wrong than that you’ll get the safety side of this wrong. And maybe that’s unfair, because I don’t want to say I’m seeing into his mind here.
But it is always much harder to have anything happen in Congress than not happen. And right now, where we are is on the “not happening” side. And so the fact that there are a lot of news articles and the fact that more extreme opinions on this get a lot of attention, I just take that as a much more distant signal from anything happening than I think it might look like. In many cases, that’s actually a reason things don’t happen. It would, in some ways, be more likely that you would get strong legislation if there was a special committee that was already working on this — and there wasn’t a tonne of attention around it, but for whatever reason, there was a process — than for it to be such a huge point of contention. The more polarising a question gets, and the more heated the disagreement, oftentimes, the harder it is to get anything done on it.
So the dynamics here I think are less linear from attention to action than one might hope. And that’s true on a lot of things. Climate change has been like that. Immigration is like that. Making something a big issue does not signal that it will become a successful issue.
Rob Wiblin: It’s interesting to me that here, living in London, it seems like the extinction risk from AI is more prominent in the policy conversation than it is in DC. And I think in the EU as well. You’ve got Sunak taking meetings with people who are worried about extinction. It’s higher on the agenda for the global summit on AI safety. They’ve appointed someone to lead the Foundation Model Taskforce who’s definitely concerned about extinction risk, among other things. If that all goes very well in the UK, I wonder whether that would have an influence on the US or the EU, or whether these are just separate ecosystems, largely?
Ezra Klein: I’ve been interested in this, which also kind of looks to me like a cultural divergence. And I get the sense that the EU and particularly the UK sees itself as playing the more regulatory role. I think even though DeepMind is based in London, it’s owned by Google, so functionally, the AI race — to the extent it is a race — is between the US and China, and Europe doesn’t see itself as dominating the technology or having the major corporations on this. And as such, they can be more worried about the harms of it. But because the technology is going to be developed in the US and China, what happens there is going to be more meaningful.
The viability of licensing [00:30:49]
Rob Wiblin: There’s another big cluster of proposals, and maybe the largest there is a combination of requiring organisations to seek government licences if they’re going to be training really large or very general AI models. And in the process of getting a licence, they would have to demonstrate that they know how to do it responsibly — or at least as responsibly as anyone does at the time. Those rules could potentially be assisted by legislation saying that only projects with those government licences would be allowed to access the latest and most powerful AI specialised supercomputers, which is sometimes called “compute governance.” How do you think that would come out of a messy legislative process?
Ezra Klein: I’m interested in that. I don’t know. I could see this going a lot of ways. And that one, in particular, I’ve really gone back and forth on this, because I’ve talked about it with a lot of people.
The reason you’re hearing me hesitate is that I think it’s actually a very… So here’s the question. On the one hand, if you take the metaphor that basically, what you’re developing now is a very powerful weapon, then of course, if you’re developing a very powerful, very secret weapon, you want that done in a highly regulated facility. Or you want that done by a facility that is highly trusted, and workers who are highly trusted in everything from their technical capacity to their cybersecurity practices. So that makes a tonne of sense.
On the other hand, what if what you’re saying is that you’re developing the most important consumer technology of this era, and in order to do that, you’re going to need to be a big enough company to get through this huge regulatory gauntlet? That’s going to be pretty easy for Google or Meta or Microsoft to do, because they have all the lawyers and they have the lobbyists and so on.
I could imagine, as that goes through Congress, people get real antsy about the idea that they’re basically creating an almost government-protected monopoly — entrenching the position of this fairly small number of companies, and making it harder to decentralise AI, if that’s something that is truly possible. And some people believe it is. I mean, there’s this internal Google document that leaked about how there’s no moat. Meta has tried to talk about open sourcing more of their work. Who knows where it really goes over time. But I think the politics of saying the government is going to centralise AI development in private actors is pretty tough.
There’s a different set of versions of this, and I’ve heard many of the top people in these AI companies say to me, “What I really wish is that as we get closer to AGI, that all this gets turned over to some kind of international public body.” You hear different versions and different metaphors: A UN for AI, a CERN for AI, an IAEA for AI — you pick the group. But I don’t think it’s going to happen, because it’s first and foremost a consumer technology, or is being treated as such. And the idea that you’re going to nationalise or internationalise a consumer technology that is creating all these companies and spinning all these companies off, there’s functionally no precedent for that anywhere.
And this goes maybe back a little bit to the AI ethics versus AI risk issue, where it looks really, really reasonable under one kind of dominant internal metaphor — “we’re creating the most dangerous weapon humanity’s ever held” — and it looks really, really unreasonable if your view is this is a very lucrative software development project that we want lots of people to be able to participate in. And so I imagine that that will have a harder time in a legislative process once it gets out of the community of people who are operating off of this sort of shared “this is the most dangerous thing humanity’s ever done” sort of internal logic. I’m not saying those people are wrong, by the way. That’s just my assessment of the difficulty here.
Rob Wiblin: Yeah. It does seem very challenging to get the level of support that you would require, to get the level of coverage to truly be safe, if you think that these are incredibly dangerous weapons. But I wonder if, as you were saying earlier, there’s some kind of catastrophe. Like, what if someone does use AI technology as a weapon, and a million people end up dead? Does that change the game enough that these things that currently seem not really viable might become viable?
Ezra Klein: Yeah. I mean, if a million people end up dead, then yes. It does. If a couple people at a time? I mean, well, look at US gun control laws.
Rob Wiblin: Yeah. So you think it would just depend on the nature of the…
Ezra Klein: Yeah. It would depend on the nature of the problem also. I mean, it’s not crazy for the solution to be proportionate to the size of the problem. If what you have is a critical infrastructure failure, but the outcome of that is that Houston, Texas has no electricity for three days, I mean, that’d be bad, but that would not lead to the nationalisation of all AI. That would lead to a set of regulatory safeguards and testing and so on about putting AI or some kind of system in charge of critical infrastructure. Or a cybersecurity thing would have a different set of ideas.
I think the thing where there’s an AI powerful enough that somebody uses it to somehow get in touch with a wet lab somewhere that doesn’t know what it’s doing and print a synthetic biology superweapon, and we only break up that plot at the last minute, or it does kill a bunch of people — then, whatever it is, you could get into scenarios like that.
Rob Wiblin: So right now, it makes sense that the frame that people are thinking about this through usually is the consumer product frame. But looking forward — I guess we don’t know how long it’ll be, but like five, 10, 15, 20, 35 years — at some point, these models presumably will be capable of causing a lot of havoc. They will be up to that task. And then I wonder, what will the national security establishment think once it just becomes very clear that these could be used for terrorism, or they can be used for military purposes in a way that’s really troubling? At that point, do they jump into action? And this now packs a punch within their framework?
Ezra Klein: Yeah. But does it pack a punch in the sense that they want to regulate it, or that they want to have the most of it and control it? That, I think, is a danger of how the national security system operates around these things. On the one hand, yeah, there are international treaties and work governing nuclear weapons. And on the other hand, we built up a hell of a lot of nuclear weapons, because the main lesson a bunch of the countries took is “We need to have the most” or “We at least need to have deterrence power.”
I think that’s one reason to worry a little bit about that sort of metaphor or approach: national security tends to think in terms of dominance over others, not really in terms of just generalised risk to the population. And so…
Rob Wiblin: It doesn’t necessarily help.
Ezra Klein: I have a lot of concerns about national security here.
Rob Wiblin: Yeah. I think that’s true about the competition between countries aspect. But if you’re trying to limit access within a country, then the national security establishment is familiar with the idea of wanting to limit access to really dangerous biological weapons, for example, for people who are inside the United States.
International Atomic Energy Agency for AI [00:38:23]
Rob Wiblin: We’re kind of dancing around an idea a lot of people have suggested — including Sam Altman and actually the secretary-general of the UN — which is doing the International Atomic Energy Agency, but for AI. And the bargain of the International Atomic Energy Agency is that under the nuclear Non-Proliferation Treaty, the IAEA inspects nuclear facilities in basically all countries to ensure that they’re only being used for peaceful purposes. And in exchange, the nuclear superpowers transfer peaceful nuclear applications to other countries to allow them to use them for medical purposes or for energy purposes. I guess that’s something that the superpowers wanted because they didn’t want proliferation of this; they wanted to maintain their monopoly.
And I wonder, could we imagine a bargain like that in future at the point where it is just very clear to everyone how these could be used as very dangerous weapons in a war?
Ezra Klein: I have a lot of questions about this, to be honest. So let me carve out the part that I think we should definitely have, and that would be very high on my list right now. Because I think you want to begin building these institutions nationally. You need really strong national institutions, well stocked with talent: they should have high pay scales, given how much money you can make in AI right now. You need really strong national institutions with people who understand this technology really well and can be in an advisory, a regulatory, an auditing, et cetera capacity. Maybe they’re even creating autonomous public capacities: just like AI models for the public good, oriented towards things that the public wants that don’t have a business model. But whatever it is, I think it’s actually really important to begin standing up, probably on its own, just places in the government where you have 300 excellent AI experts from different domains. So that’s one thing.
The question of the international IAEA model is just really tough. I’m not saying I oppose it. Just when I try to think about how it would work, on the one hand, a lot of what makes it possible to do that is that uranium is kind of hard to get and hard to enrich. Also, the system has only been so effective. I mean, look at Israel, look at Iran, look at North Korea, look at Pakistan. So that’s a little tricky.
Also, again, the reason you could do it is that nuclear weapons were, from the beginning, nuclear weapons. I mean, we dropped the bomb on Hiroshima. We dropped it on Nagasaki. And that’s why you have something like that, because from the beginning, what people saw here was the unbelievable destructive power of these weapons. Right now, most people — whatever the stories are that pop around the media — just don’t think these are that destructive. So I think one of the most worrying things in this whole area is that it doesn’t look that bad until it’s too late, until you have something that’s actually genuinely destructive.
But I don’t think you’re going to have a powerful preventive regulatory structure that is going to keep other countries from having their own autonomous, really profound AI models. I mean, if Brazil wants to create a really good AI, and wants to give it some national defence authority, are we going to bomb Brazil? Like, what is the implied threat that is being offered here? Because in some cases, we would go to war, right? I mean, we went to war to stop Iraq from getting nuclear weapons, and it wasn’t even trying to get them. So, you know, there are cases where we would actually take that as a reason to go to war in the nuclear weapons case. Are we really going to go to war with other countries on AI? Or maybe just sanctions?
And then the more central AI becomes to economies, to kind of everything, the more countries are going to want ones that they control — which is completely natural. It’s just a hard equilibrium for me to imagine working, which doesn’t mean it won’t. And again, specifically in a case where you have these kind of super AGI models, and there’s a disaster, you can imagine very different worlds coming out of very big disasters. But in this case, it’s just very hard for me to picture.
Manhattan Project for AI safety [00:42:47]
Rob Wiblin: Yeah. Another broad approach that’s out there is sometimes branded as a Manhattan Project for AI safety: basically, the US and UK and EU governments spending billions of dollars on research and development to solve the technical problems that exist around keeping AGI aligned with our goals, and having sufficiently strong guardrails that they can’t easily be retrained to commit all sorts of crimes, for example. The CEO of Microsoft, Satya Nadella, has talked in favour of this, and the economist Samuel Hammond wrote an article in Politico that we’ll link to. What do you think of that broad approach?
Ezra Klein: That I’m very much for. I don’t think I would choose a metaphor of a Manhattan Project for AI safety, just because I don’t think people believe we need that, and that’s not going to be much of a political winner. But AI is a great thing to spend lots of R&D money on and have a really strong public research infrastructure around. A good amount of that research should be on safety and interpretability. And we should really want this to work, and it should happen. I think that makes a tonne of sense, and I think that’s actually a possible thing you could achieve.
Look, I don’t trust any view I hold about takeoff rates. But what I do think is that if we are in a sort of vertical takeoff scenario, policy is just going to lag so far behind that we almost have nothing we can do but hope for the best. If we’re in more modest takeoff scenarios — which I think are more likely in general — then building institutions can really work, and we can be making progress alongside the increase in capability, capacity, and danger.
So that’s where I think it helps to come up with ideas that also just play into the fact that different countries want to dominate this, different countries want to get the most that they can out of this, different countries want to make sure a lot of this is done for the public good. And it’s actually not that expensive. I mean, it is expensive for most companies, which is why OpenAI has to be attached to Microsoft and DeepMind had to be part of Google and so on. But from the perspective of a country’s budget, it’s not impossible to have real traction on this. Now, getting the expertise and getting the right engineers and so on, that’s tougher, but it’s doable.
And so, yeah, I think that’s somewhere where there’s a lot of promise. And the good thing about building institutions like that, even if they’re not focused on exactly what you want them to be, is that then, when they do need to refocus, if they do need to refocus, you have somewhere to do that. You know, if you have a Manhattan Project just for AI, well, then you could have a Manhattan Project for AI safety — because it was already happening, and now you just have to expand it.
So that’s where I think beginning to see yourself as in a foundation-building phase is useful. Again, it’s why I emphasise that at this point, it’s good to think about your policies, but also think about the frameworks under which policy will be made. You know, who are the members of Congress who understand this really well, who you’re hoping will be leaders on this, and who you want to have good relationships with? Then, you know, keeping their staff informed and so on. And what are the institutions where all this work is going to be done? Do they need to be built from scratch? And what kind of people go into them? And how do you get the best people into them? And all of that is not, like, the policy at the end of the rainbow — but you need all that for that policy to ever happen, and to ever work if it does happen.
Rob Wiblin: I guess the dream here would be, I think at the moment, the ratio of research that enhances capabilities in AI versus trying to steer them and align them is something like 100:1. And maybe we could get that to 10:1 or something like that.
Ezra Klein: Yeah. I totally agree.
Rob Wiblin: What sort of design details might affect whether the Manhattan Project for AI safety, or whatever we end up branding it, actually ends up helping? I mean, you could imagine a failure scenario where almost all of it ends up being co-opted for capabilities research anyway, because that’s, to many people, more appealing, and it’s certainly more profitable. Would you have any advice on how people can kind of guide a broad project like that towards funding the kinds of things that they think are most valuable?
Ezra Klein: I think that’s pretty straightforward, which is that, in the appropriation, the goals of the research are written into it. I mean, that happens all the time. When you think about how money is apportioned for, you know, ARPA-E or different programmes at the Department of Energy, or at the NIH — you know, when Joe Biden had his Cancer Moonshot from a few years back. It isn’t any kind of new or unsolved political problem of how you tell an agency what this appropriation is actually for.
So that’s about getting congressional support to do the thing you want it to do, as opposed to doing the thing you don’t want it to do. And again, that goes back to relationships. And again, one thing I am trying to emphasise in this conversation a little bit is that there is just a lot of boring work here that I don’t exactly see happening. That it’s a lot of making sure that the people who eventually are going to write this bill are listening to you when they write it.
Critiques of the AI safety community [00:47:49]
Rob Wiblin: Yeah. I mean, the sheer number of people who have experience on this or are working on this is really very small, I think — relative to the size of the problem, and certainly maybe relative to the appetite for assistance that exists now. Do you have any advice on how to scale up a community that’s interested in a policy problem, when maybe it needs to be 10 or 100 times bigger than it is?
Ezra Klein: I don’t think it’s that small, actually. And again, part of this is my experience of I’ve lived in DC for 14 years, I cover politics: you cannot imagine how small the organisations that dramatically affect what happens in Washington, DC are. I mean, the Center on Budget and Policy Priorities is just one of, over a long period of time, the most effective, consequential nonprofits like anywhere. The amount of good they have done on the social safety net is incredible. And there’s not 20,000 people working at CBPP. I’d be surprised if there were more than 100. I mean, there might be more than 100. I don’t actually know their staffing. But it’s not going to be more than 500. I mean, it’s not going to be more than 200. And so I don’t think this is that small.
I don’t think that people are located in the right place. I don’t think they’ve been trying to build a bunch of DC institutions. I noticed this on crypto a few years ago. Jerry Brito is in DC trying to do crypto regulatory work. And it’s a little crypto outfit, a little crypto regulatory nonprofit, trying to create crypto-favourable laws. And I think it had, like, six people in it, a dozen people in it. And then when there was this big fight over crypto in Congress, all of a sudden, this group was important, and they were getting calls because they’ve been there, working on building relationships. And when somebody needed to call somebody, they were actually there.
So it is not, by any means, beyond the capabilities of this community, these companies, these organisations, these nonprofits, to be setting up fairly well-funded shops in Washington, DC, where the point is that they’re turning out good research and trying to meet people. This does get a little bit to, like, how scared are you? If you’re so scared that you want to devote your life to this, but not if you have to live in Washington, DC — you’re not that afraid. A lot of people want to be out in San Francisco where the action is, but the regulatory action is going to be in DC.
Rob Wiblin: On the question of where to locate, when you were talking about the takeoff speeds, it occurred to me that in a slow or medium takeoff scenario, then the DC policy seems really quite important. In a fast takeoff scenario, the policy and governance that seems to matter is the policy and governance inside the AI lab. I mean, it’s an extremely bad situation to be in in the first place if things are taking off really quickly. But then the organisation that can potentially react and do something useful is, you know, OpenAI itself perhaps. And who’s making the decisions there, and on what basis, and what sort of information that they have to rely on: that stuff seems like it might be able to help in that case.
Ezra Klein: I find the number of AI risk people who seem to me to be working inside AI shops, building the AIs they are terrified of, caught in a competitive dynamic — they are perfectly happy to admit to me that they cannot stop — to just be a little bit of a puzzling sociological outcome here. And I think it’s because working on AI is really cool and fun. I don’t think it’s specifically because they’re motivated by profit, but they do want to work on AI. Where, you know, spending your time in DC working on AI regulation is kind of a pain in the ass.
But I don’t know. I think there’s something a little bit weird about this. Again, as somebody who’s been, as you know, very friendly to this community, and is probably, among national political columnists, in touch with more AI risk people than just about anybody else, I find the number of them who seem to me to be accelerating the development of AGI to be a little weird compared to the number who seem to have set up shop in Washington to try to convince Washington to not let AGI happen. It doesn’t look to me like it’s working out the way they wanted it to, but I don’t see people all radically leaving the companies and then setting up the shops. There’s just something here that makes me wonder what’s actually going on in people’s motivation systems.
Rob Wiblin: We have an article on exactly this question of whether it’s good or bad to take roles at AI labs that we’ll stick up the link to in the show notes.
I think one thing that is driving that phenomenon is that, until recently, I think people were just extremely pessimistic about whether government would be able to have a useful role here. I think most people thought that there was just not going to be significant interest from mainstream politics. And to me, that seems like it was a massive blunder, and thinking through more concretely how this would play out would have revealed that there was going to be a big policy opportunity here; there was potentially going to be a big role for government to make things better or worse. So that’s maybe something that I wish had gone differently.
Ezra Klein: One thing I will say is that I don’t want to suggest that there’s absolutely nobody doing this work. There’s a really good group at Georgetown, CSET — the Center for Security and Emerging Technology — that’s been doing this work. And it’s really notable, I think, that when Chuck Schumer, the Senate majority leader, wanted to give a speech announcing his big SAFE Innovation Framework, he went to them. Like, they’re not a huge deal — they don’t have 6,000 people; they’re not the Brookings Institution — but there they were. And that’s where Chuck Schumer gave his speech, and he’s clearly in touch with them and thinking about things they say. So there are some people doing this. And also, I know that they were funded by people in the EA community. So I would just say that there is payoff to that.
Rob Wiblin: Hey everyone, I just wanted to note that when we were looking up a link for this one, we realised that Schumer had actually given this talk not at CSET but at the similarly named CSIS, which is just a different think tank in DC. OK, back to the interview.
Ezra Klein: And my point is not that nobody should be working on AI in the AI organisations, but that — a little bit like what you were saying about the quantity of resources going into capabilities development versus safety research — I think it is weird among people who say they’re worried about AI risk, the quantity of resources going into developing AI versus developing policy shops that have the relationships and so on.
And again, I’m maybe just a little more cynical than you on this. I think the reason is that people who are really into AI like to be around other people who are really into AI, and like to actually work on AI, and have totally wild conversations about AI, and worry about these things together. I’m connected to that community in San Francisco; it’s actually like being at the centre of things. It’s wonderful. It’s exciting. You know, you’re part of the bleeding edge. And being a person who worries about AI in DC is being an exile from that.
And so I’m a little more sceptical than you are that this was just that nobody could have predicted that when AI systems got powerful, there would be a level of regulatory interest. Like, I don’t know. I could have told you that would happen. But I think that people wanted to be where the action was.
But now the action is, in a way, moving. Maybe another way of putting this, which is a little less provocative, is the action is moving. Now the AI shops still have a certain amount of optionality in terms of what they’re doing, but not as much as they did a couple years ago. A lot more of their decisions are now being driven by their parent companies or their major investors, and I think that’s clear. So even if Sam Altman wanted to say, like, “We, OpenAI, we think this has gone too far” — I don’t think Sam Altman would keep his role that long in that world.
So the space of movement where you can shape what is happening has shifted. Maybe you could have really shaped it there a couple of years ago, but now you can really shape it in Washington or in Brussels or in some state capitals, or whatever. And have people actually adapted to that world? Are people making the investments in terms of their time and energy and money and institution-building that fit where we are now, as opposed to where we were four or five years ago?
Rob Wiblin: Maybe it’s hard for me to fully buy into that explanation, just because personally, I find AI so boring. I feel like I’ve been dragged kicking and screaming into having to think about AI from a technical point of view, just because I think it’s so incredibly important. But have you ever tried to sit down and read an AI safety paper? I guess, because I’m not a technical person, it just doesn’t get me that excited.
Ezra Klein: I don’t really believe you.
Rob Wiblin: You really don’t believe me?
Ezra Klein: Listen, I’ve read “[What does it take to catch a Chinchilla?](https://arxiv.org/abs/2303.11341)” and all that. Some of the papers are boring. I think this stuff is interesting.
Rob Wiblin: It’s gotten more interesting recently. Maybe you gotta go back to the 2017 stuff.
Ezra Klein: Yeah. I have heard a lot of your podcasts on AI, and I think I’m pretty good at telling, as a professional here, when a podcast host is not into the thing they’re talking about. And even if you don’t wish you were talking about this, I think you’re pretty into it.
Rob Wiblin: I mean, well, I’m interested in a lot of different topics. I guess I’ll just have to accept that you’re not convinced on this one.
Business incentives around new AI models [00:57:22]
Rob Wiblin: There’s a strikingly large number of different mechanisms by which AI could end up causing harm, which various different people have pointed to. One can, of course, try clustering them into groups that have something in common — like misalignment, misuse, algorithmic bias, the kind of natural selection perspective, and so on.
I know from listening to the extensive coverage of AI on your show over the last year that you’re personally engaged with a wide range of these possibilities, and take many of them pretty seriously. What possible ways that advances in AI could go wrong are you likely to prioritise in your coverage of the issue over the next year or two?
Ezra Klein: I don’t know if I’m going to prioritise any one over a set of others. I find the whole question here to be almost unbearably speculative; we’re operating in a space of pretty radical uncertainty. And so a number of the most plausible and grounded ways that AI could go wrong are also in certain ways the least spectacular. AI will be bad in the ways our current society is bad because it is trained in the data of our current society. That is both a clear harm that is going to happen and is not civilisation-ending. And then, as you get up the ladder to civilisation-ending harms, or civilisation-threatening harms, you are working with obviously more speculative questions of how AI will develop, how it will be used, et cetera.
So one of the things that I’m interested in is not so much trying to tell policymakers or my audience that you should think about this harm and not that harm — but that we need a structure; we need systems. We need expertise and institutions, and expertise in the correct institutions, to have visibility on how artificial intelligence is developing. We need to be thoughtful about the business models and structures around which it is being built.
So this is something I keep emphasising that I think other people really underemphasise: the kinds of artificial intelligence we have are going to be highly governed by the kinds of artificial intelligence that gain market share quickly and that seem to be profitable. So already, I think it is a kind of emergent harm that more scientifically oriented systems like AlphaFold are getting a lot less attention than just an endless series of chatbots — because the chatbots have such a clear path to huge profitability.
And so systems that I think could be better for humanity are much less interesting to the venture and financier class than systems that could be plugging into search engines right now. So being thoughtful about what the monitoring systems are, what the business models are, how we’re doing audits: I think we’re in a period more of institution-building and information-gathering than saying, like, “This is what’s going to go wrong, and here’s how we’re going to prevent it.”
Rob Wiblin: You’ve made this point about business models quite a few times, and I think it’s a good one, and it’s not one that comes up a whole lot elsewhere. Do you have a view on what sort of business model would be the best one to take off, if we could affect what sort of business model AI companies are using?
Ezra Klein: Yeah. I think I do, on a couple of levels. One is I just think the competitive race dynamics between the different companies are worth worrying about. I basically understand the incentive structure of AI development right now as being governed by two separate races: one between different companies — you have Microsoft versus Google versus Meta, somewhat versus Anthropic, and then you have some other players — and then between countries: the US versus China. Or you can maybe say, given that DeepMind is in London, the West versus China, something like that. And then, of course, as time goes on, you’re going to have more systems coming out of more countries.
And so the problem — and this is a very banal point that many other people have made — is that there is going to be more direct pressure to stay ahead in the race than there is to really do anything else. You can have all these worries and all these concerns, but it’s really a trump card — or it certainly acts in our system like a trump card — to say, “If you don’t do this, or if you slow down to do that, they’re going to get ahead of you over there.” And so that, to me, is one set of problems I think we should worry about around business models: If there’s a very near-term path to massive profitability, people are going to take that path, and they’re going to cut a lot of corners to get there.
And I think when people think of business models, they’re primarily thinking of things like hooking it into advertising. And I am too. But I also just think about, you know, algorithmic trading funds that have billions of dollars to throw at this, and that might want to create — without really understanding what they’re creating — some kind of artificial system that is inhaling data from the markets, that is hooked up to a fair number of tools, and that is turned loose to try to make as much money as it can in an automated way. Who knows what a misaligned system like that could end up doing? So how you make money — that, I think, is important.
And in general, one reason I focus on it, I should say, is that I think it’s something that the people who focus on AI risk somehow have a bit of a blind spot here. I think there’s a little bit of a weird forgotten middle between what I think of as the AI ethics concerns — which are around algorithmic bias and misinformation and things like that — and what I think of as AI risk concerns, which are more sort of existential. And I think that the sort of more banal, like, “How is everybody going to make money on this?” and “What is that race going to do to the underlying technology?” has been a little neglected.
Rob Wiblin: Yeah. I wonder if one reason it might be neglected is that people aren’t sure. You know, even if we would prefer the scientific AI models to flourish more than others, and to be more profitable, people might wonder what policy options are there to really influence which of these business models ends up being most successful. Did you have any ideas there for how one could push things in one direction rather than another?
Ezra Klein: Given where I am and where I’m talking, I think one reason it’s neglected is that, in general, one blind spot of effective altruism is around capitalism. And for a lot of reasons, there is just not that much interest or comfort with critiquing incentives of business models and systems and wealthy people within effective altruism. So I just want to note that, to not let you and your audience off the hook here. I don’t think it’s totally accidental that this has happened.
Rob Wiblin: I think many people have said, more or less, it looks like capitalism is going to plausibly destroy the world, basically, because of this race dynamic that you described. That’s a very common line. So I think people are at least open to noticing some ways in which the incentives are poorly aligned.
Ezra Klein: Yeah. I think all of a sudden, people now see the race dynamic. But I just think in general, this is a slightly neglected space in the EA world. Anyway, the point is not to make this into a critique of EA.
Look, I think this is hard. Do I have a plausible policy objective in my pocket? Not really. If it were me at the moment, and I were king, I would be more restrictive on business models rather than less. I would probably close off a lot of things.
Like, I would say you can’t make any money using AI to do consumer manipulation. Think of the possible harm of systems that are built to be relational — things like what Replika is doing, or Pi, which Inflection.ai has built, the Reid Hoffman-aligned company. I’m very impressed by Pi: it’s a very impressive model; it’s very personal; it’s really nice to talk to. But if you imagine models like that, that build a long-term personal relationship with people, and understand things about the people they’re talking to, and then use that to manipulate what they do, I think that’s pretty scary.
So I’d do things like that. But on the other hand, I would be putting a lot more public money and public resources into AI. I mean, something that I’ve talked about at different times on the show, and talked about with other people, is I would like to see more of a vision for AI for the public good. Like, what do we want out of AI? Not just how do we get there as fast as we possibly can, but what do we want out of it? What would it mean to have some of this actually designed for public benefit, and oriented towards the public’s problems? It might be that “the public” is much more worried about a set of scientific and medical problems as opposed to how to build chatbots or help kids with tutoring or something. But because the latter have more obvious business models, we’ll get them and not really the former.
And so I think that some of this is just you would have to actually have a theory of doing technology for the public good, as opposed to just having a regulatory opinion on technology, to the extent you have any opinion at all on it. And we tend to be more comfortable, at least in America, with the latter. And some of the reason it’s hard to come up with the things I would like to talk about is that they feel very distant from our instincts and our sort of muscle memory about how to approach technology.
Rob Wiblin: Yeah. I guess one change of incentives you could try to make is around very narrow systems that are just extremely good at doing one thing — like a model that is extremely good at folding proteins. They don’t tend to generate nearly as much concern, because their abilities are so narrow that they’re not likely to be able to act that autonomously. And it seems like, to do an awful lot of good, we don’t necessarily need general AIs that are capable of doing most of the things that humans are able to do. We could probably do an awful lot of good just by training these narrow systems, and those ones are just a lot less troubling from many different points of view.
Ezra Klein: This is my gut view. And in addition to that, there’s always the prospect out there of achieving generalised artificial intelligence. And if you can get to AGI, then you get to sort of pull out of your argumentative pocket that once we hit that moment, then what that self-improving, generalisable intelligence can do will so outpace all the narrow systems that it’ll be ridiculous that we wasted all this time doing these other things. So blah blah blah. But if you’re sceptical — and I do still have a fair amount of scepticism that we’re going to hit AGI, or the kinds of super-capable AGIs that people believe in, anytime soon — then actually you would want a lot more narrow systems.
And one reason you’d want them is you might believe, as I believe, that the chatbot dynamics don’t actually orient themselves to things that are that good for society. So technology always comes with a point of view. Technology always comes with things that it is better at and worse at. And something I have said on my show before, and talked about in conversation with Gary Marcus — who’s more of a critic of these systems, but this is a point I agree with — is that I think you’re basically, in chatbots, creating systems that are ideally positioned to bullshit. And I mean here “bullshit” in the Harry Frankfurt version of the term, where bullshitting is speaking without regard to the truth: not specifically lying, just not really caring if it’s true or not, not even really knowing if it’s true or not.
That’s in some ways the whole point of hallucination, or the whole point of when I go to an AI system and I say to it, “Can you write me a college application essay that is about how I was in a car accident as a child?” — and it wrote me an amazing essay when I did that, and it talked about how I got into martial arts and learned to trust my body again, and how I worked at a hospital with other survivors of car crashes. Just none of it had happened, right? It just made up this whole backstory for me off of, like, a one-sentence prompt.
And so when you have a system like that, what you have is a system that is well-oriented towards people doing work without much regard for the truth. And I think there’s actually a lot of reason to think that that could be a net negative on society. And you don’t even have to be thinking about high levels of disinformation or deepfakes there. Just a gigantic expansion in the amount of garbage content that clogs up the human processing system and the collective intelligence of humanity: that too would just be sludge. That would just be a problem if everything got way more distracting and way harder to work with and way harder to separate signal from noise. That would just be bad.
Meanwhile, a lot of these narrow systems, I think there’s incredible work you can do. And if the amount of money and investment and excitement that’s going into the chatbot race was going into trying to figure out lots more predictive systems for finding relationships between real things that human beings don’t have the cognitive capacity to master, I think that would be great.
And so to me, that’s where, again, business models matter. But also, that’s somewhat on the public and on the government. You don’t just want the government to say, “This business model is bad” — you want it to say that one is good sometimes, or you want it to make that one viable. I mean, the whole idea of something like carbon pricing — or separately, what we actually did in the Inflation Reduction Act, where you put huge amounts of subsidies into decarbonisation — is you are tilting towards a business model. You’re saying, “If you do this, we are going to make it more profitable for you to do it.”
You can imagine prizes with AI, where we set out this set of drug discoveries we would like to make or scientific problems we would like to solve. And if you can build an AI that will solve them, like the protein-folding problem, we will give you a billion dollars. It’s a problem to me that DeepMind made no money from AlphaFold. Or, I mean, I’m sure they did in some kind of indirect way, and obviously they’re trying to spin it out into Isomorphic, which will do drug discovery. But AlphaFold is great, right? They solved the protein-folding problem. Nobody, to my knowledge, cut them a cheque for doing so. And there should be something that is cutting cheques if you can invent an AI to solve fundamental scientific problems — not just cutting cheques if you can invent an AI that is better at selling me Hydro Flask water bottles as I travel around the internet. Like, that’s just a problem.
Parenting [01:11:23]
Rob Wiblin: I know you’ve got a sick kid, and you’ve got to go. But a final question for you is: I recently got married, and I’m hoping to start a family in the next few years. And I guess you’ve been a dad for a couple of years now. What’s one or two pieces of advice you’ve got for me if things work out?
Ezra Klein: Oh, what a fun question. Could do a whole 80,000 Hours [episode] on parenting. Not that I’m an expert on it.
I think one is that — and this is a very long-running piece of advice — but kids see what you do; they don’t listen to what you say. And for a long time, they don’t have language. And so what you are modelling is always a thing that they are really absorbing. And that includes, by the way, their relationship to you and your relationship to them.
And something that really affected my parenting is a clip of Toni Morrison talking about how she realised at a certain point that when she saw her kids, she knew how much she loved them, but what they heard from her sometimes was the stuff she was trying to fix, right? “Your shoes are untied, your hair’s all messed up, you’re dirty, you need to…” whatever. And that she had this conscious moment of trying to make sure that the first thing they saw from her was how she felt about them. And I actually think that’s a really profound thing as a parent: this idea that I always want my kids to feel like I am happy to see them; that they feel that they are seen and wanted to be seen. So that’s something that I think about a lot.
Then another thing is you actually have to take care of yourself as a parent. And you know, I worry I’m a little grumpier on this show today than I normally am, because my kid had croup all night, and I’m just tired. And the thing that I’ve learned as a parent is that just 75% of how I deal with the world — like, how good of a version of me the world gets — is how much sleep I got. You’ve gotta take care of yourself. And that’s not always the culture of parenting, particularly modern parenting. You need people around you. You need to let off your own steam. You need to still be a person.
But a huge part of parenting is not how you parent the kid, but how you parent yourself. And I’m just a pretty crappy parent when I do a worse job of that, and a pretty good parent when I do a good job of that. But a lot of how present I can be with my child is: Am I sleeping enough? Am I meditating enough? Am I eating well? Am I taking care of my stress level? So, you know, it’s not that 100% of parenting a child is parenting yourself, but I think about 50% of parenting a child is parenting yourself. And that’s an easy thing to forget.
Rob Wiblin: Yeah. It is astonishing how much more irritable I get when I’m underslept. That’s maybe my greatest fear.
Ezra Klein: Yeah. It’s bad. Again, like, even in this conversation, I’ve probably been edgier than I normally am, and I’ve just felt terrible all day. It’s a crazy thing when you become a parent and you realise other parents have been doing this all the time. You see them when it’s cold and flu season, and you understand that you didn’t understand what they were telling you before. And somehow, all these people are just running around doing the same jobs they always have to do, and carrying the same amount of responsibility at work and so on, just operating at 50% of their capacity all the time and not really complaining about it that much. A whole new world of admiring others opens up to you. Like, I have two kids, and now my admiration of people who have three or four is so high. So, you know, it’s a real thing.
But it does open you up to a lot of beautiful vistas of human experience. And as somebody who is interested in the world, it was really undersold to me how interesting kids are, and how interesting being a parent is. And it’s worth paying attention to, not just because you’re supposed to, but because you learn just a tremendous amount about what it means to be a human being.
Rob Wiblin: My guest today has been Ezra Klein. Thanks so much for coming back on the podcast, Ezra.
Ezra Klein: Thank you.
Rob’s outro [01:15:39]
Rob Wiblin: I worry I might have offended some technical AI safety people a minute ago by saying that I found their work hard to get into — boring even, I think I might have said.
The trouble — and I’m saying this because I expect I’m not the only one who has experienced this — is that I don’t feel I’ve had enough of a gears-level understanding of how ML works to judge which ideas in the field are good or bad, at least not so long as some fraction of domain experts say they’re into them. Which in practice makes it a bit unrewarding to dig into proposals, because I know at the end I’ll just have to walk away shrugging my shoulders.
That was more so the case five years ago, when there weren’t really products to make how AI was working concrete in my mind, and even more so 10 years ago, when nobody had a clear picture of what general AI systems would end up looking like.
This is one reason we’ve been doing more episodes on AI policy issues, where I think I do have some non-zero ability to pick winners and losers.
That is changing though, now that, well… the rubber has hit the road, and it’s becoming clearer what we’re dealing with and maybe what needs to be done. Yesterday I spoke with Jan Leike — who leads OpenAI’s alignment work — and I think I basically understood everything he was saying, and I reckon could even visualise how he hopes it’s all going to work.
Anyway, if, like me, you didn’t study computer science and have felt at sea reading about technical AI progress in the past, know that I sympathise with you — and indeed have been secretly sympathising with you since about 2009!
And if you’re a technical alignment researcher, know that I’ve been really appreciating your work from the bottom of my heart, even if my head has been finding it hard to fully understand.
Finally, before we go, a reminder about an excellent new interview we’ve done, which is available on 80k After Hours: Hannah Boettcher on the mental health challenges that come with trying to have a big impact.
And if you’re enjoying these AI-focused episodes then you might like the compilation we put together of 11 excellent episodes of the show looking at many different aspects of AI. That compilation is titled The 80,000 Hours Podcast on Artificial Intelligence, and you can search for and listen to that feed anywhere you’re listening to this!
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
The audio engineering team is led by Ben Cordell, with mastering and technical editing for this episode by Milo McGuire.
Full transcripts and an extensive collection of links to learn more are available on our site and put together by Katy Moore.
Thanks for joining, talk to you again soon.