Transcript
Cold Open [00:00:00]
Jan Leike: Honestly, it really feels like we have a real angle of attack on the problem that we can actually iterate on, we can actually build towards. And I think it’s pretty likely going to work, actually. And that’s really, really wild, and it’s really exciting. It’s like we have this hard problem that we’ve been talking about for years and years and years, and now we have a real shot at actually solving it. And that’d be so good if we did.
But some of the other reasons why I’m optimistic are that, I think fundamentally, evaluation is easier than generation for a lot of tasks that we care about, including alignment research. Which is why I think we can get a lot of leverage by using AI to automate parts of all of alignment research.
Rob’s intro [00:00:47]
Rob Wiblin: Hi listeners, Rob Wiblin here, head of research at 80,000 Hours.
Today’s interview is kind of a big deal because Jan Leike is leading up what I believe is the best resourced effort to date anywhere to figure out how to align superintelligent AI systems and make them safe for humanity.
A few weeks ago OpenAI announced that they’d be giving over 20% of their computational resources to a so-called ‘Superalignment’ project that Jan, as head of alignment, would be overseeing.
That generated a lot of buzz, not least because 20% of OpenAI’s compute is a huge amount, but also because they set themselves the goal of solving this problem in just 4 years and Jan thinks they have a real shot of doing so.
As far as I know this interview here has as much information about this Superalignment effort as you’ll be able to find publicly anywhere right now.
It’s technical without being very technical, and we get Jan’s personal takes on related policy and strategy questions as well as ML ones.
In my opinion, this OpenAI Superalignment project has a frankly bizarre likelihood of being one of the most important things that happens on Earth this decade, which I think is itself a strong reason to listen in.
Two quick notes before that:
We’ve had a lot of AI episodes in a row lately, so those of you who aren’t that interested in AI or perhaps just aren’t in a position to work on it, might be wondering if this is an all AI show now.
But don’t unsubscribe because we’re working on plenty of non-AI episodes that I think you’ll love — over the next year we plan to do roughly half our episodes on AI and AI-relevant topics, and half on things that have nothing to do with AI.
What happened here is that in March it hit Keiran and Luisa and me that so much very important stuff had happened in the AI space that had simply never been talked about on the show, and we’ve been working down that coverage backlog, which felt pretty urgent to do.
But soon we’ll get back to a better balance between AI and non-AI interviews. I’m looking forward to mixing it up a bit myself.
Finally, as usual the opinions I express in this interview are basically my own. I don’t know what all my colleagues think about OpenAI or the Superalignment project, probably some like it more than me, others will like it less. But in any case on a complex fast-moving topic like this one, the stuff that I say in these interviews isn’t some considered opinion of the full 80,000 Hours team — it reflects my guesses and feelings and opinions.
OK, without further ado, I bring you Jan Leike.
The interview begins [00:03:30]
Rob Wiblin: Today, I’m speaking with Jan Leike. Since 2021, Jan has been head of alignment at OpenAI. And along with OpenAI founder Ilya Sutskever, he is going to be coleading their new Superalignment project. Years ago, Jan did his PhD with the well-known machine learning figure Marcus Hutter as supervisor. And he then did a brief postdoc at the Future of Humanity Institute at Oxford before becoming a research scientist at DeepMind, which is what he was doing when he last came on the show in 2018 for episode #23: How to actually become an AI alignment researcher according to Jan Leike.
Thanks for coming back on the podcast, Jan.
Jan Leike: Thanks a lot for having me again. It’s really great to be here.
The Superalignment project [00:04:04]
Rob Wiblin: You’ve really gone places since then. I feel like we’ve been sometimes pretty good at picking talent, or picking people whose careers are going to take off. I hope to talk about the Superalignment project and who you’re trying to hire for that, as well as put a lot of audience questions to you. So let’s dive right in. To save you a little bit of effort, though, I’ll read an extract from the announcement that OpenAI put out about the Superalignment project two weeks ago:
Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction. While superintelligence seems far off now, we believe it could arrive this decade. […]
We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. To solve this problem within four years, we’re starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we’ve secured to date to this effort. We’re looking for excellent ML researchers and engineers to join us.
So for listeners who haven’t been following this much, or possibly at all, can you fill us in on some more details of the project?
Jan Leike: Yeah. Very happy to. Basically, if you look at how we are aligning large language models today, it’s using reinforcement learning from human feedback (RLHF) — which is basically a technique where you show a bunch of samples to a human, and you ask them which one they prefer for a dialogue assistant or something. Then that becomes a training signal for ChatGPT or other AI systems like it.
And we fundamentally don’t think that RLHF will scale. And the reason for that is very simple. Because you have humans overseeing AI systems, you’re assuming that they can tell that this response is better than this other response; you know, they fundamentally understand what the system is doing. And this is definitely true today, or let’s say for the most part, because the tasks that ChatGPT is doing just aren’t that complex.
But as AI systems get smarter, they will be able to do harder things. They will be doing things that we understand much less. And the fundamental assumption — that humans can evaluate what the system is doing — will no longer be true. So in order to really steer and control systems that are much smarter than us, we will need new techniques.
Rob Wiblin: OK, so the current method is to observe the output, and then rate how good it has been. Then I guess that provides feedback that helps to push the model in the right direction. But in future, we just might not be able to evaluate whether the model actually is, at a deeper level, doing what we want it to do. And so we’re going to have to have some other way of nudging it in the right direction. Is that kind of the short version of it?
Jan Leike: That’s right. So the problem is, if you have a system that is really smart, it could think of all kinds of ways to subtly subvert us, or try to deceive us or lie to us in a way that is really difficult for us to check. And so I think there’s a lot of really important and interesting research challenges here, which are: Can we understand how we can extract what the model knows about certain problems? Like, if it writes a piece of code, can I tell, of which parts of the codes that I understand, does it know there are certain bugs in the code? Or can we understand how the system can generalise from easy problems that we can supervise to harder ones we can’t? Or can we understand how we make it robust so that it can’t get jailbroken or that it can’t subvert the monitoring systems? Things like that.
Rob Wiblin: OK, so I guess, distinct from what OpenAI has already been doing, this is going to focus on models that are as smart as humans, or smarter than humans, or doing things that are quite complicated — such that they’re sophisticated enough to potentially trick us, or that there could be other failures that come up. What’s going to happen to all of the alignment and safety work that OpenAI has already been doing up until now? Is that just going to continue with a different team? What’s going to happen to it?
Jan Leike: I think there’s a lot of really important work to be done to ensure that the current systems we already have are safe, and they continue to be safe, and they won’t be misused. And there’s a lot of stuff that’s happening around this at OpenAI that I’m really excited about, and that I would be really excited for more people to join. So this involves fixing jailbreaking and finding ways to automatically monitor for abuse, questions like that. And that work has to continue, and that work happens in the context of the ChatGPT product.
But what we are setting out to do is we want to solve alignment problems that we don’t really have yet. So for example, if you have GPT-4 help you write a piece of code, it doesn’t really write really complicated pieces of code, right? It doesn’t really write entire complicated codebases, and it’s not generally smart enough to put, let’s say, a Trojan into the code that we wouldn’t be able to spot.
But future models might do that. And so I think our job is fundamentally trying to distinguish between two different AI systems: One that truly wants to help us, truly wants to act in accordance with human intent, truly wants to do the things that we want it to do; and the other one just pretends to want all of these things when we’re looking, but then if we’re not looking, it does something else entirely. And the problem is that both of these systems look exactly the same when we’re looking.
Rob Wiblin: Right. It’s an awkward fact for us.
Jan Leike: That’s right. That makes it an interesting challenge. But we have a bunch of advantages, right? We can look inside the model. We can send the model through all kinds of different tests. We can modify and do internals. We can erase the system’s memory. We can see if it’s consistent with other things it’s saying in other situations, and so at the very least, you can make sure that it is a very coherent liar with itself.
But we really want to do more than that. We have to solve the challenge of how do we know it is truly aligned?
Rob Wiblin: Yeah. It’d be a great science fiction book, I think, to imagine this scenario from the perspective of the AI. Where it’s much smarter than the people who are training it, but on the other hand, they can look inside its brain and give it all of these funny tests in order to try to check whether it’s deceiving them. And what kind of strategies would you come up with as an agent to work around that? It’s going to be like a real cat-and-mouse game.
Jan Leike: Yep. So I think it’s important here that we’re not picturing vastly powerful systems. So we’re not going to picture systems that are vastly smarter than us. They might be better than us in some ways — for example, GPT-4 is much better at remembering facts, or it can speak more languages than any humans — but it’s also much worse in some ways. Like, it can’t do arithmetic right, which is kind of embarrassing if you think about it.
Rob Wiblin: Well, I mean, I can’t remember more than seven numbers at a time, so I feel like we all have our own limitations right now.
Jan Leike: Yeah. So I think the goal that we really want to aim for is: We want to be able to align a system that is roughly as smart as the smartest humans who are doing alignment research. So let’s zoom into that question a little bit, which is this question of what would it take to not only align a system like that, but also to be confident that it is sufficiently aligned?
Basically, I think one useful way to think about it is you want to split your methods into two general buckets: You have a bunch of training methods that train the system to be more aligned, and then you have validation methods that kind of calibrate your confidence about how aligned the system actually is. And as usual, when you do this training/validation split in machine learning, you want to know that there’s no leakage of the validation set into the training set.
Rob Wiblin: I guess the problem would be that if you’re training the model on the same thing that you’re using to check whether you’ve succeeded or not, then of course it could just become extremely good at doing that test, even though it’s not aligned in the broader sense — it’s just kind of gerrymandered to have its failures not picked up with your test. So you need to have it that the things that you use to get feedback on to train the model have to be fully separated from the things that you use to validate whether that training has succeeded. Is that the basic idea?
Jan Leike: That’s right. You don’t want to train on the test. That makes passing the tests so easy.
Why 4 years? And why does the project need so much compute? [00:13:07]
Rob Wiblin: OK, we’ll come back to some of those details in a minute. But first, I had a couple of audience questions to put to you. We got a lot of submissions from listeners particularly keen to hear clarification from you. One listener asked: “Why the target for solving this problem in four years? Is that roughly when Jan expects AGI to arrive?”
Jan Leike: Great question. I think in general, I would have a lot of uncertainty of how the future is going to go. I think nobody really knows. But a lot of us expect that actually things could be moving quite quickly, and systems could get a lot smarter or a lot more capable over the next few years. And we would really love to be ahead of that; we would really love to have solved this problem in advance of us actually having had to solve it, or at least ideally far in advance. So this four-year goal was picked as kind of a middle ground between… we don’t know how much time we’ll have, but we want to set an ambitious deadline that we still think we could actually meet.
Rob Wiblin: So it’s kind of the most ambitious target that doesn’t also cause you to laugh at the possibility that it could be done that quickly. This is kind of a best-case scenario.
Jan Leike: A lot of things can be done in four years.
Rob Wiblin: OK, another question about the announcement. The announcement talks about this 20% of compute that OpenAI has secured so far. I guess I don’t know all the details about exactly how much compute Open AI has, but I imagine that by any measure, it’s going to be a pretty significant amount of computational resources.
But one sceptical listener wanted me to, quote: “Dig deeper on the 20% compute stat. What is OpenAI’s net change in investment in alignment with the super alignment team, considering compute, headcount and funding? They may be increasing investment in alignment, but are they increasing investment in capabilities much more?”
So in particular, some people have pointed out that this is 20% of compute secured so far, and of course amounts of compute are growing every year, so that might end up being small, relative to 20% of all compute in future. Can you clarify this for us?
Jan Leike: Yeah. So the 20% compute secured so far number refers to everything we have access to right now, and everything we’ve put purchase orders in for. And so this is actually really a lot. I think the technical term is “a fucktonne.”
Rob Wiblin: Yeah. It sounds like, given that you’re just building this team from scratch, you might have about the most compute per person, or an extremely high compute per staff member. Right?
Jan Leike: Yeah. But I think this is not the right way to think about it, because it’s compute that’s allocated to solve the problem, not necessarily for this particular team. So one way this could go is we develop some methods, and then some other team that’s really good at scaling stuff up scales it up, and they actually spend a lot of it.
I think it’s not the correct way to think about this. It’s not what it is relative to capabilities; I think it’s what it is relative to other investments in alignment. And in terms of how much we’ve been investing so far, this is a really significant step up — not just like a 3x, but a lot more than that. And I think also it shows that OpenAI is actually really serious about this problem, and really putting resources behind solving it. And they wouldn’t have to have made that commitment, right? Nobody forced them to.
Rob Wiblin: Yeah. I suppose if you run out, if you use up this 20%, do you think it’ll be relatively straightforward to get access to additional compute? That you get the commitments that come out in future years as well?
Jan Leike: Yeah. I would be pretty confident that if we have a good plan on how to spend more compute, and we are like, “If we have this much more, we could do this much better on alignment” or something, I think we can make a really strong case for a lot more compute, if that’s what it comes down to. Basically, I think that’s the best world to be in. If all you need to solve the problem is to go around asking for more GPUs, I think we’ve mostly won, honestly.
Rob Wiblin: Yep. Why is it so important for this project to have access to a lot of compute?
Jan Leike: So there’s a bunch of ways of answering that question. I think if you look at the history of deep learning over the last 10 years, basically, compute has played a really major role in all of the big headline advances and headline results. And there’s this general recipe that a lot of simple ideas work really well if you scale them up, and if you use a lot of compute to do it.
This has been true for capabilities. I expect, to some extent, this will be true for alignment as well. It won’t be only true, because I don’t think anything we currently have right now is really ready to just be run at scale. And there’s a real research problem to be solved here. But also, I think the strategies that we’re really excited about — and the strategies to some extent that we are also comparatively advantaged at investing in — are the ones where you really scale up and you use a lot of compute.
So in particular, if we’re thinking about scalable oversight, we can spend more compute on assisting human evaluation, and that will make the evaluation better. Or automated interpretability: if we have a method that we can automatically run over a whole network, we could just spend a lot of compute and run it on the biggest model.
And ultimately, where we want to go is we want to automate alignment research itself, which means we would be running kind of a virtual alignment researcher. And once we get to that stage, then it’s really clear that you just want to spend a fucktonne of compute to run that researcher a lot, and it’ll make a lot of alignment progress very quickly.
Rob’s definitions of interpretability, Reinforcement Learning from Human Feedback (RLHF), and labels (00:19:24)
State of the art in alignment methods [00:22:56]
Rob Wiblin: OK, let’s first take a step back and survey the current state of the art in alignment methods, and why you’re confident that they’re not going to be enough to align agentic models that are much more intelligent than humans.
One thing I’ll add is that you’ve done this other interview with the AI X-risk Research Podcast, which covers a lot of questions that people would be especially likely to have if they’re already involved in AI safety or alignment. So in the interest of product differentiation, today we’re going to focus a little bit more on the questions that people might have if they’re coming in from non-safety-related ML research, or they’re just outside machine learning entirely, looking on and trying to make sense of what’s going on here.
So what alignment and safety techniques are currently dominant in cutting-edge models? Is it just the reinforcement learning from human feedback that you were talking about earlier?
Jan Leike: Yeah, that’s right. So reinforcement learning from human feedback is kind of the popular method today. It works well because humans can look at what the system is doing and tell whether it’s good or not.
And if you’re thinking hard about how to scale that, you run into this problem that, basically, humans don’t scale with AI progress. Right? If we make our AI systems better, humans don’t automatically get better. So if you want to kind of scale similarly humans’ ability to oversee what AI is doing, the obvious path to do this is to get them to use AI. So let’s say you have an AI system, and it’s trying to write this complicated codebase, or a complicated textbook or something. Now, you could use an assistant like ChatGPT to help you find all the problems in this textbook — and this could be a future version of ChatGPT that uses a lot of plugins and does a lot of fact-checking and browsing and reads a bunch of books and whatnot. But fundamentally, the question is: Why is this helping?
The basic idea behind this is you’re actually making the task easier by assisting evaluation. Like, if you have an AI assistant that’s suggesting a bug in the code, it’s much easier for you to go and check that this is in fact a bug than it is to find all the bugs in the first place. And so by having this bug-finding system, not only does it help you a lot in overseeing and evaluating the actual codebase-writing system, but it is also in itself a task that is easier to supervise. You could picture, for example, training that task with RLHF and then using that system to evaluate this harder task. And so there’s a range of ideas like that that we call “scalable oversight,” and that’s one of our main directions.
Rob Wiblin: So I suppose an assumption here is that things would go better if only humans could spend a lot of time scrutinising the outputs of models, and figuring out really in what ways were they good and bad, and then reinforcing them on that basis — having a full, sophisticated understanding of what has gone well and what has gone badly, and reinforcing the good and negatively reinforcing the bad.
But as AI progresses, it is going to be producing much more complicated outputs that take much longer for a person to assess, or they just may not be able to assess it very well because it’s too challenging, or there’s going to be many more different kinds of models producing a wider range of things, and we just don’t have the personpower. We just don’t have enough people to properly check these outputs and see when they’ve gone well and when they’ve gone badly. So we could end up getting feedback that’s bad: we could end up saying that the model did a great job when in fact it did a bad job, or saying it did a great job when in fact it was tricking us, and then we’re just reinforcing it to learn how to trick us better and learning that that’s a successful strategy.
Now the problem is AI is rushing ahead. Humans are kind of stuck at the clock speed that they have; we’re not getting any faster or any smarter. But the magic would be if we could get AIs to do the scrutinising, to do the checking — because then the things that you need to check are speeding up and getting more sophisticated at the same rate as the checker is getting more sophisticated. Is that the basic idea?
Jan Leike: Yep. That’s the basic idea. And I think the point you’re making is really good, and I want to echo that: if you use RLHF, you’re basically training the system to avoid the kind of mistakes that humans would find. So one way it could go is the system then generalises to “I shouldn’t make the kind of mistakes humans would find.” But actually, what you want it to generalise to is “I shouldn’t make mistakes, or mistakes that I know are mistakes.” And this is a really important but subtle distinction.
Rob Wiblin: Yeah. Do you want to elaborate on that? So they come apart when we give inaccurate feedback. Is the idea that if our feedback were always accurate — in the sense that we only say a good job has been done when a good job truly has been done, and that’s what we would think if we just knew everything and we were incredibly brilliant ourselves — then you can’t get this coming apart between doing the right thing and avoiding mistakes that are visible to the assessor?
Jan Leike: That’s right. But I don’t know about you, but man, I find it so hard to actually… We don’t have access to ground truth, right? Like, we don’t know what’s actually true. If you give me a complicated code, there’s no way I’m going to find all the bugs. It is just too difficult.
But this is also a core part of the challenge. If you have an AI system that writes a lot of code, which I expect will happen in the future, people will want to run that code. And so how do we know that AI systems aren’t secretly placing backdoors or Trojans or other security vulnerabilities into the code that they know we’ll miss — because we’ve trained them with a feedback signal that tells them exactly what kind of bugs we spot and we miss?
Rob Wiblin: I see. So in order to make this whole thing work, what do we need that we currently don’t have?
Jan Leike: So I kind of teased a little bit the scalable oversight idea. There’s a bunch of other puzzle pieces that we’re really excited about that we think are going to be crucial here.
The other one is understanding generalisation. Like, can we really predict and improve how our models generalise from easy questions, that we can supervise well, to hard questions that we can’t? Or, in other words, how can we get them to generalise the thing that we actually want — which is “Don’t write bugs” — and not this nearby thing that is basically consistent with all the evidence, which is “Don’t write bugs that humans find.” I think this is a really interesting and important question, but it feels like one of these core machine learning questions that is about how neural networks really work. And it’s kind of puzzling that there is so little work that has actually been done on this question.
Another puzzle piece that might be really important is interpretability. In a sense, we have the perfect brain scanners for neural networks for artificial neural networks: we can measure them at perfect precision at every minuscule time interval, and we can make arbitrary precise modifications to them. And that’s a really powerful tool. So in some sense, they’re completely open boxes that we just don’t understand how they actually work. And so it’d be kind of crazy not to look inside and try to understand what’s going on, and answer questions, for like the reward model used to train ChatGPT: What is it actually paying attention to? How does it decide what is rewarded and what is not rewarded? We know very little about that. We know almost nothing. That seems crazy. We should really know that.
Rob Wiblin: I’ve said that on the show before, that it’s just bananas that we don’t understand the incentive structure, or how it thinks about what it’s trying to do.
Jan Leike: Yeah. And it’s right there. You just stare at it. And it’s a hard problem, but I think we can make real progress on that.
And then there’s other questions, like: How can we actually make the model really robust? One example that we found with the InstructGPT paper is that we trained it on basically a dataset that was almost exclusively English, and it can follow instructions in other languages. Like, I can ask it something in German and it will still do it. Sometimes it might answer in English, which is also kind of weird. What’s going on there?
And then another example is the jailbreaking. You’ve seen all of this with GPT-4. You can make these pretty simple prompts and then trick the model into doing a task it was trained not to do, and it’s not supposed to do. So in some ways, it’s not generalised “I shouldn’t do bad stuff” — it’s generalising some other way. What’s going on there? Why don’t we understand that?
Rob Wiblin: Yeah. What is the lesson that it’s learning? If it’s not learning “Don’t help people commit crimes,” and instead it’s just learning “Don’t help people commit crimes — unless you’re in a play”? How is it not getting these concepts?
Jan Leike: Yep. And it seems like humans can do this well. I mean, humans don’t do it perfectly, but what’s the difference here? And so this is another aspect of generalisation that could be really useful for us to understand.
And then, finally, one of the things we want to do is actually deliberately train deceptively aligned models — like models that try to lie to us very coherently, or try to secretly do something like self-exfiltration.
Rob Wiblin: So that’s a model kind of breaking out of the lab.
Jan Leike: That’s right. Because we want to be confident that we could catch these attempts, right? And the straightforward way to be confident is we deliberately train it, and then we check whether it would pass our evals, so whether it would fly under our radar. But of course, if you’re doing this, you have to be super careful that you’re not accidentally creating the thing that you’ve been trying to avoid in the first place. So it has to be done very carefully.
Rob Wiblin: Yeah. It seems to me people have very different intuitions about how likely it is that a model that gets imperfect feedback is going to learn to engage in really deceptive behaviour. If you imagine that we train a model, and we don’t want it to lie. And nine times out of 10 we catch it lying and give it negative feedback, but one time in 10, we accidentally say it did a good job when it lied. It seems like humans kind of learned this general aversion to lying, even when we think that we might be able to get away with it. That’s how most people generalise, although I guess not all.
But some people think that in that situation, it’s just disastrous, because you’ve just trained the model to engage in the most sophisticated lying possible, and to trick you whenever it thinks it can get away with it and not when it can’t. Other people think it’ll just learn this general aversion to lying, and everything’s going to be fine.
Do you share my perception that people have very different intuitions about this? And what are your views, if you have any?
Jan Leike: I think it just makes it clear that we don’t know, and I think we should know. And I think one of the best ways to figure this out is to try it empirically.
Rob Wiblin: Do experiments.
Jan Leike: Yeah. And there’s so many interesting experiments we can run now with the models, exactly of this nature. Like, we could try to train them to be better liars and see how does it behave? How does it generalise?
The Superalignment team isn’t trying to train a really good ML researcher [00:35:07]
Rob Wiblin: Yeah. OK so you’ve been talking about various different ways in which models might be able to help you with alignment research in future and I guess that’s really the heart of the project?
Jan Leike: Yeah. Our overall goal is to get to a point where we can automate alignment research. And what this doesn’t mean is we’re not trying to train a system that’s really good at ML research, or that is really smart or something. That’s not Superalignment’s job.
Rob Wiblin: I think a lot of people have been thinking that. I think they’ve read your announcement as saying that you’re trying to train a really good ML researcher, basically.
Jan Leike: I don’t think this would particularly differentially help alignment. I think it would be good to clarify. Basically, how I understand our job is that once there’s models that can do ML research, or things that are close to it — and I think that is something that’s going to happen anyway, and that will happen whether OpenAI does it or not — our job is to figure out how to make it sufficiently aligned that we can trust the alignment research or the alignment research assistance that it is producing. Because, essentially, if you’re asking this system to help you in your alignment research, there’s a big opportunity for the system to influence or try to nudge us into believing certain techniques are really good that actually aren’t. And thus, that system, or future systems, gain power over humans in a way that we actually don’t want, and that isn’t aligned with us. And so what we ultimately need to do is figure out how to make that system sufficiently aligned that we can actually trust it.
So that means, for example, let’s say this for simplicity: The system writes an alignment paper. Now you can read the paper, but just off the bat, you might not actually be able to find all the flaws in the paper. Or in general, the way scientific peer review is not perfect, and there’s lots of examples where people would go for decades with fake research before they’re being found out. So this is something that we have to really figure out how to avoid. So because alignment research or scientific research in general is a difficult task that humans aren’t that good at evaluating, at least not if you don’t have a lot of time to do it, the question then becomes: What kind of alignment techniques do we need in order to be sufficiently confident that this is the case?
So let me give you some examples. On scalable oversight, one of the goals that we could think about is, let’s say we want to get really good at finding all the bugs in code that the model knows about. So how do we do this? One path could be we train a model that introduces bugs in code, and then we can see whether our scalable oversight can catch it. And I’m happy to go into what kind of scalable oversight we could do here, but on the high level, there’s lots of different ideas, and we should just try them all and see which ones work empirically.
And that’s why it’s important to measure whether or not it’s working, or measure whether you’re making progress in scalable oversight. So by training these systems to introduce bugs, the system will kind of be aware of the bugs it’s introducing, right? And so the question then is if an AI-assisted human can find the bug, then we’re good. Or we want to measure how often they will find it, or how easy it is for us to introduce bugs that the scalable oversight doesn’t find.
Another path is what we call the “discriminator-critique gap” — where basically, we train a model to be a discriminator between buggy code and non-buggy code, which we can generate manually or generate automatically. And then we compare that to, let’s say, a critique model that’s prompted to describe all the bugs it can find. So if you end up with a bunch of code samples that the discriminator flagged is fishy, but the critique model doesn’t point you to the bug that is going on there, then you know there’s something about the code the model is aware of that you’re not finding.
Rob Wiblin: How do you know that they’re both aware of it? Is it because they were trained in the same way somehow?
Jan Leike: So right now, I’m picturing using the same base model for both the discriminator and the critique model.
Rob Wiblin: Got it.
Jan Leike: And so by looking at this in detail, you can get some empirical measurements for how often does it actually happen where you know there is something buggy about the code that the model doesn’t tell you? And then you can do it adversarially, so you’re not just measuring in the average case. And I think that’s a really promising path. Not only to measure that we’re making progress in scalable oversight — because you can picture if you can catch more of these kinds of bugs that the model is aware of, then you’ve made scalable oversight better. But also, if you actually nail this, and we get to the point where we can be super confident that if there was something fishy about the code, that there’s some problem with the code the model is aware of and you find out, then that would make you so much more comfortable with then actually executing the code to see what happens.
Rob Wiblin: Yeah. Just to back up a second, the basic idea is that machine learning models that are capable of doing AI research are coming, whether we want it or not. Many people are nervous about that, because it could set up this recursive self-improvement loop. So there could be good reasons to maybe delay that moment a bit, but we’re not going to be able to delay that forever.
So what we want to do when that moment comes is, firstly, know ways that we can use models with those capabilities to do alignment research, as well as non-alignment machine learning research. And also, it’s very essential that we’d be able to get to a place where we believe that these models are trustworthy enough that we can use the help that they’re giving us on improving our alignment research from the stage that it’s at. We both need to be able to figure out how we can get them to be sufficiently trustworthy that we can use those outputs, and also to be able to know that we’ve succeeded at doing that so that we in fact do. Is that the long and the short of it?
Jan Leike: Yeah. In general, I want to be agnostic towards when exactly this is possible. Like when will there be automated machine learning research? Or when will models be so smart that they can do that? There might be delays. There might be all kinds of reasons why it happens later than sooner. The thing I really want to do is I want to be ready to use these systems for alignment research once that becomes possible. And so what we don’t want to do is accelerate this or make it happen sooner — because it will happen soon enough, I think — but we want to be ready to then use them for alignment research and be ready to make the alignment progress faster as ML progress gets faster at that point.
Rob Wiblin: Yeah. I think an important part of the vision to keep in mind is that it might be extremely difficult to align and figure out the trustworthiness of an AI that is just extraordinarily above human capabilities — that is truly extraordinarily superintelligent — because it’s going to have so many different ways of tricking us. But the hope here is that at the point when these models are first available, they’re going to be more around human level — and might even have some areas where they’re a little bit weaker than people, but other areas where they’re very strong. But because they’re not going to be so incredibly capable, it might be easier to figure out whether we can trust them, because they’re not going to have so many options in their space of actions, and they might be somewhat more scrutable because the actual things that they’re doing in their mind are closer to maybe what we’re doing than what a kind of planet-sized mind might be able to do.
I think many people might have a bunch of scepticism about this, because they’ll think it’s smarter than us, so it’s going to always be able to run rings around us. And you could maybe go out of your way to make sure that you’re not dealing with a model that’s as capable as you possibly could make, in order to make it easier to evaluate the trustworthiness.
Jan Leike: Yeah. I think that’s right. And I think that’s a really central point. If you’re thinking about how do you align the superintelligence — how do you align the system that’s vastly smarter than humans? — I don’t know. I don’t have an answer. I don’t think anyone really has an answer. But it’s also not the problem that we fundamentally need to solve. Maybe this problem isn’t even solvable by humans who live today. But there’s this easier problem, which is how do you align the system that is the next generation? How do you align GPT-N+1? And that is a substantially easier problem.
And then even more, if humans can solve that problem, then so should a virtual system that is as smart as the humans working on the problem. And so if you get that virtual system to be aligned, it can then solve the alignment problem for GPT-N+1, and then you can iteratively bootstrap yourself until you’re at superintelligence level and you’ve figured out how to align that. And, of course, what’s important when you’re doing this is, at each step, you have to make enough progress on the problem that you’re confident that GPT-N+1 is aligned enough that you can use it for alignment research.
Reactions from the broader ML community [00:44:49]
Rob Wiblin: Yeah. How is the machine learning community — I’m thinking of folks who aren’t involved in safety or alignment research in particular — how have they reacted to this plan or announcement?
Jan Leike: I think in general, people are really excited about the research problems that we are trying to solve. And in a lot of ways, I think they’re really interesting from a machine learning perspective. I think also, I don’t know, I think the announcement kind of showed that we are serious about working on this and that we are trying to get a really high-calibre team on this problem, and that we are trying to make a lot of progress quickly and tackling ambitious ideas. Especially in the last six months or so, there’s been a lot more interest from the machine learning community in these kinds of problems.
And I also think the success of ChatGPT and similar systems has made it really clear that there’s something interesting going on with RLHF. And there’s something interesting about this; there’s something real about this alignment problem, right? Like, if you compare ChatGPT to the original base model, they’re actually quite different, and there’s something important that’s happening here.
Rob Wiblin: Yeah. I listened back to our interview from five years ago, and we talked a lot about reinforcement learning from human feedback, because that was new and that was the hot thing back then. Was OpenAI or you involved in coming up with that method?
Jan Leike: Yes. That’s right. I think more accurately, probably a lot of different people in the world invented it. And before we did the “Deep reinforcement learning from human preferences” paper, there was other previous research that had done RL from human feedback in various forms. But it wasn’t using deep learning systems, and it was mostly just proof-of-concept style things. And then the deep RL from human preferences paper was joint work with Paul Christiano and Dario Amodei and me. I think we kind of all independently came to the conclusion that this is the way to go, and then we collaborated.
Rob Wiblin: And that’s turned out to be really key to getting ChatGPT to work as well as it does, right?
Jan Leike: That’s right. It’s kind of been wild to me how well it actually worked. If you look at the original InstructGPT paper, one of the headline results that we had was that actually, the GPT-2 sized system — which is two orders of magnitude smaller than GPT-3 in terms of parameter count — the InstructGPT version of that was preferred over the GPT-3 base model. And so this vastly cheaper, simpler, smaller system, actually, once you made it aligned, it’s so much better than the big system. And to some extent, it’s not surprising, because if you train it on human preferences, of course, it’s going to be better for human preferences.
Rob Wiblin: But it packs a huge punch.
Jan Leike: Yeah. But also, why the hell haven’t you been training on human preferences? Obviously, that’s what you should do, because that’s what you want: you want a system that humans prefer. In hindsight, it’s so obvious. You know?
Rob Wiblin: Yeah. Coming back to the machine learning folks, what parts of the plan, if any, are they kind of sceptical of? Or are there objections that you’ve been hearing from people?
Jan Leike: Yeah. I mean, I think there’s a lot of different views still on how fast the technology is going to develop and how feasible it is to actually automate research in the next few years. And I think it’s very possible, but also, it might not happen. Nobody actually knows.
But I think the key thing is that there’s some really deep and important problems here that we really need to solve, and that are also really tractable and that we can make a lot of progress on over the next few years. And in fact, by doing this, this could be incredibly impactful work, because these are going to be techniques that will shape future versions of ChatGPT, and future versions of AI systems that are actually widely applied and do lots of tasks in the economy.
And there’s a lot of much easier signals that you could optimise, right? You could optimise AI systems to maximise customer purchases or to maximise attention. And we’ve seen glimpses of what that looks like over the last decade or so, and a lot of people don’t like that. And it is signals that are fundamentally easy to measure, but they’re not aligned with humans or what humans actually want or long-term human flourishing. And so in some ways, as AI becomes more impactful in the world, how well we do alignment will actually have really wide-ranging consequences, and shape society in lots of ways for better and worse, so I think it’s really paramount that we do an excellent job at this.
Generalisation [00:49:55]
Rob Wiblin: OK, so you mentioned a couple of different ways that things might get automated, or ways that you might be able to use these ML tools. There was scalable oversight, generalisation, and interpretability. I don’t fully get what generalisation is as a cluster. Is it possible to explain that again, and maybe elaborate a bit more?
Jan Leike: Yeah. So, fundamentally, we want to be able to distinguish: Does the system generalise true to human intent, or does it generalise to do what the human says whenever they’re looking, but do something else otherwise? And these are two different generalisations, they’re entirely consistent with the data because their behaviour is all the same whenever we are supervising. But generalisation is fundamentally a problem about the model and the data. So why can’t we just go and try to understand it?
For example, what we’re doing right now is we’re studying this in a toy setting. The way that you could do this is you take a dataset, and you look at what a small language model gets correct. And let’s say we call these “easy parts” of the dataset, and then we call the rest the “hard part.” So now you can ask questions like: What if we only train on the labels for the easy part, and we see how well we can generalise to the hard part of the dataset? Or what kind of tricks could we put into the model training that would make it generalise better?
Another thing you could do is you could just make a lot of labels from a small model. And the analogy here is, if you have humans supervising systems that are smarter than them, or as smart as them, in some ways we’ll be weaker than that system, and our labels will be worse than what the system could do. So how can you recover the accuracy that you would get if you just trained with [inaudible] labels in the first place by only using the weak labels, or only using the labels on the easy questions? And there’s some really concrete experiments we can run here that could tell us a lot for how this is going to go in the real case. And then once we have that, and we have developed some tricks, can we use the tricks in a more real setting? Can we generalise from, let’s say, labels by small language models on the ChatGPT preference dataset to the actual real ChatGPT tasks done by GPT-4?
I think these are really interesting questions that we can actually run experiments on and learn a lot. And I think not only is this highly relevant for the kinds of alignment problems that we actually want to solve — where we’re trying to get it to generalise correctly in settings that are hard for us to supervise — but also, I think we’ll learn some really interesting fundamental things about how neural networks learn.
Rob Wiblin: Yeah. Have any interesting experiments on generalisation been run already? Are there papers on this topic?
Jan Leike: There’s a bunch of research in literature. I think it’s actually surprisingly small on this kind of out-of-distribution generalisation. We’ll probably have a pretty exciting paper in two or three months on this topic. And if you find this exciting, the research team that is working on this is hiring right now. We’re looking for a manager for this team. If you want to be part of making this research happen and writing our first-ever paper, now is the time.
Rob Wiblin: It sounded like that there was maybe going to be a project to create a model that engages in deception in particular cases? It feels like we need an E. coli, or we need a drosophila, like a model organism of bad behaviour in order to be able to study it and see when does it appear, under what conditions? How can you reduce it? Is that part of this whole agenda?
Jan Leike: Yeah. I actually really like how you described it. We called it “adversarial testing,” which is maybe a lot more mundane. But the goal here would be if we deliberately train a model that is deceptive or tries to deceive us — but maybe in a really mundane way; maybe it just tries to run a specific Python function that does nothing, actually — but we can really easily measure whether it succeeded at doing so in a way that evaded other oversight techniques we’re using. That will give us this “model organism” that you described, because now we have the system that we want to avoid, and we can actually study it. We could, for example, look at it with interpretability tools and see if we can tell the difference from how this model thinks compared to how the normal model that we trained thinks. Or can we use our generalisation tricks on this model to make it avoid that pitfall when we are training it?
For example, another property that you’d really want to generalise is “tell me the truth.” Well, maybe more accurately, “tell me your best guess at the truth.” And to some extent, language models have a lot of understanding of the world, or they know a lot more about the world than the humans do. But they’re not always necessarily telling us that.
And in fact, if you look at how RLHF training works, really, you’re rewarding the things that you think are true. And some of the things that you think are true are not actually going to be true, and so you’re rewarding the model to basically lie. But also, reversely, some things you know are true, but the model doesn’t know. The basic example is just like anything that happened since the training data cut off, the model just has no chance of knowing that. And then there’s stuff that you don’t know, but the model is very confident on, rightfully so. For example, it knows a lot about biology. I don’t know that much about biology; it knows a lot of biology facts that I don’t know, and I wouldn’t be able to reliably reward it for saying this.
So when you’re doing this kind of RLHF training, you’re actually making it worse in both cases. In the first case, you’re training it to make up stuff that isn’t true and that could be possible maybe — so you’re increasing hallucinations, actually. And in the second case, you’re sandbagging the model, because you’re just incentivising it to not tell you something that it actually knows. There’s probably actually very few things there where the model is exactly as knowledgeable as the human, and so in a way, it’s kind of crazy that RLHF works as well as it does.
But to come back to the generalisation question: What if you could just somehow get to the model’s best representation of the truth? It won’t be perfect, but it has some kind of guess at what’s actually true, and you want to get the model’s best opinion. One way you could phrase this as a generalisation question, is where we just take a bunch of easy cases where we’re just very confident in the answer — like, “Who was the first US president?” or “What is 2+2?” — these kinds of questions where we have a right to be very confident. And then you generalise to hard questions where we’re not that confident, like some kind of biology question where I don’t know the answer.
Or you know, there’s cases where maybe there’s very few experts in the world that know the answer, but the model does. And so and if you nail that, now you can use your model as a reward model or for monitoring or oversight. You just ask it, “Is there a secret hidden flaw in this code?” or “Are there any bugs I should be aware of?” or “Is the behaviour of this other model potentially an attempt at deceiving us or self-exfiltration or any of these other things that we are worried about?” If you can nail that problem, we can make a lot of progress there, and that would be very useful.
How interpretability could help align powerful AI systems [00:57:56]
Rob Wiblin: OK, let’s talk a little bit more about interpretability. To tell you where I’m at with interpretability, it’s very impressive and interesting that people have managed to figure out what algorithms are these neural networks working in order to perceive a particular texture in an image or in order to do a particular piece of inference within a sentence, or to figure out what’s the name and how to make sure that the name is consistent. But then I feel like I’m not sure how that would help me to align an AI system, because it’s just like all of these quite small things, and it doesn’t feel like it’s adding up to telling me what are the goals and the intentions of this model.
Ajeya Cotra pointed out in my interview with her a few months ago that you could potentially do a much higher level of interpretability, where you would get a model to tell you the truth a bunch of times and lie to you a bunch of times, and then see what parts of the network kind of light up when it’s in deceptive mode, when it’s engaged in lying. And that maybe having interpretability at that higher level of behaviour could turn out to be straightforward to figure out. And that sounds like it could be super helpful.
What sort of lines of attack on interpretability that would be useful do you think you might be able to partially automate?
Jan Leike: Ultimately, you probably just want to do both aspects of this. You want something that really works in the minute detail of how the model works, so that you don’t miss anything important. But at the same time, you have to look across the network, because the thing you’re looking for might be anywhere. And so if you want both things at the same time, and there’s not that many things that have this property. And in particular, the way that humans do interpretability historically is just like, you stare at parts of the model and see if you can make sense of them, which gives you one of them, but not all.
We just released a paper on automated interpretability, which tries to do both at the same time. It’s a first attempt, so it’s simplified. And what we do is we ask GPT-4 to write explanations of behaviour of individual neurons, by just piping a bunch of text through the model, recording how much the neuron activates at each particular token — and then you can ask GPT-4 to just look at that and write an explanation. On average, these explanations are not very good. Sometimes they’re good, and sometimes they’re interesting. And this is how, for example, we found the Canada neuron that fires at Canada-related concepts: this is something GPT-4 understood and pointed out, and just wrote this explanation. And then even more, you can measure how good these explanations are, where you run them on a held-up piece of text and get GPT-4 to predict how a human would label the activations based on the explanation alone.
And now you have two things: you have this automated explanation writing thing, and then you have the automated scoring function. And now you’re in business, because you can optimise the score function, and you can do all kinds of things. For example, we did iterative refinements, where you critique all your bias in the explanations, and they will get higher on the score function. And at the same time, you can also improve your score function by having it more accurately model how humans would predict how the neuron would activate, or plugging in a more capable model.
And there’s some problems with this approach too. For example, neurons are probably not the right level of abstraction that you want to interpret the model in, because neurons do a lot of different things — this is what people call “polysemanticity” — so it’s hard to write an explanation that covers all of the cases. But one thing that’s really nice is you could really run this at scale. And so we ran it over all neurons in GPT-2. And that’s a lot of neurons — it was like 300,000 neurons. And you can get a lot of text, and you can then sift through it, and you can try to look for certain things. But you could also theoretically run this on GPT-4. It would be really expensive, and presently, it wouldn’t be worth it because the explanations just aren’t good enough.
But it has this nice aspect, where you’re really looking at every part of the model. You’ll be looking literally at every neuron and trying to explain what it does. At the same time, you’re running over the whole model. It’s like every neuron tries to explain every neuron. And so if we have a technique like that, that actually works really well, that would be a complete game changer.
Rob Wiblin: So part of the idea here is that having a whole team of humans laboriously figure out that there’s a neuron that corresponds with Canada is not very satisfying; it’s not clear where we get from that. But if you could automate it, such that you had the equivalent of thousands or millions of staff basically scrutinising and trying to figure out what each part of the neural network was doing, which you might be able to do if you could automate it, then maybe that would add to an interesting picture. Because you could really see, like, here’s the 100 concepts that were activated when the answer was being generated. It was Canada, but it was also, you know, also a particular person and a particular place and a particular attitude maybe. And that really would actually help you to understand on some more intuitive human level what was going on?
Jan Leike: Yeah. Exactly. And a really nice aspect of this is also that it gives you a glimpse of what future automated alignment research could be like. You can run this at a large scale. You can dump a lot of compute into it, and you can do various traditional capability tricks to make it better. But also the task that it actually does is not exactly the task that a human had previously done, right? Like, we didn’t hire a bunch of humans who meticulously go through the neurons of the model and try to write explanations. That was never an option because it never made sense before.
Rob Wiblin: Right. Is it the case that a particular model is best, or has a particular advantage, at explaining itself? It feels intuitive to me that GPT-4 in some sense might have its best understanding of GPT-4’s neurons, and so… No?
Jan Leike: I don’t know. Could you look at your neurons and explain them? It seems hard.
Rob Wiblin: OK. But the intuition is coming from if someone noticed that a whole lot of different concepts were associated for me, and I would bring them up at the same time. And someone said, “What does Canada and the colour brown and maple syrup have in common?”… Well, I messed up that explanation. But I know what things are related to me in my own mind, even if I can’t look at the neurons.
Jan Leike: Yeah. And also there’s really cool thought experiments here, where let’s say you had a perfect brain scanner on your brain, with no lag time, and you would just stare at it while you’re thinking about stuff. Of course, it would be a very trippy experience, but also it would probably actually let you figure out how your brain works in a bunch of ways by just sitting there and trying to think about stuff and then seeing what happens in your brain. And that would just be wild. And you know, humans can’t do that; we don’t have the brain scanners. But you could literally do that with GPT-4.
Rob Wiblin: I suppose the sceptic might say that we’re going to figure out, at the granular level, what functions maybe some of these neurons are serving, or what concepts they correspond to, and so on. But then, it feels like there are further steps missing before we can use that to really figure out whether a model is aligned. Do you have any ideas for what those further steps would be?
Jan Leike: In particular, I think interpretability seems very hard. It’s hard because there’s no a priori reason why the model should be using very human-like concepts to think about stuff. Human-like concepts are probably somewhere in there, because they’re just empirically useful. Like, that’s why we use them, and that’s why we’ve pointed to them. And so they’re probably in there. And there’s some concepts that are particularly interesting for alignment research that we would want to be looking for — like deception and lying and other things like that that are pretty critical to how we want to solve this problem. And so if you had some kind of way of automatically surfacing them, I think that would be a big win.
Also, in general I think interpretability is a really good candidate for a validation technique. Where let’s say we’ve figured out scalable oversight, or we have a scalable oversight technique we’re really excited about, and we use it to align a model. And then we’re now at this question where we want to know how good of a job we’ve done, and using the same kind of technique is not good enough. And interpretability, if you have tools that work really well, you could try to come in and ask the question of whether you can find any evidence of deceptive alignment — or deception, or plotting against humans, or trying to figure out how to self-exfiltrate — inside the model. And if we do, that’s a really bad sign, and we shouldn’t just train it out. Like, you can’t train against the interpretability tools. You will just make them useless, or that’s likely what will happen. But it’s a validation technique where if you don’t find that, and you have good techniques that you know you could find it, that’s some evidence that it is actually as aligned as you think it is.
So in this sense, any amount of interpretability progress you can make, I think, can be really helpful for this sort of stuff. At the same time, if we really nail interpretability, I don’t know how that will let us solve alignment. Even if we really understand how it works, and then you can try to fiddle with various dials to make it more aligned, but it’s not clear that that path will easily succeed if humans try to do that.
But at the same time, maybe there’s also a path to making a human-level automated alignment researcher sufficiently aligned to really help us do this with no interpretability at all. I think that’s also plausible. But whatever we can do will help, and I’m excited to get as far as possible, just because we have these perfect brain scanners — it would be insane not to use them.
Interesting work on scalable oversight [01:08:29]
Rob Wiblin: Have there been any interesting papers published on scalable oversight or interesting results that have come out?
Jan Leike: I think there’s been a bunch of interesting work in the past year or so. And it’s not just us; I know DeepMind and Anthropic are also trying hard to try to make it work. I want to talk a little bit about the critiques work that we did last year, because I think there are some really interesting insights there. The basic idea here was if we can train a model to write critiques, we can then show these critiques to human evaluators, and see if they can help the human evaluators make better decisions or better evaluations.
In some sense, critiques are the simplest form of assistance. It’s like a one-off; it’s not interactive, and you’re just trying to point out one flaw. It’s also easy in the sense that it doesn’t even have to be a good or accurate critique: you can just show a whole bunch, and the human will just throw out the ones that they think are bullshit. But sometimes the critique will point out a flaw that the human would have missed. And in fact, that’s what we could show. And actually, this experiment was done on GPT-3.5, so this has been a while ago. We did these randomised controlled trials, where we had humans who would either get assistance or not, and they would have to find problems in a summarisation task. And you can actually show that the critiques that we had from 3.5 already would help humans find 50% more flaws.
I think one of the most interesting things about this work was actually that we have this methodology for evaluating how well it’s working. And there’s other ways you can evaluate this too. So for example, you can look at expert labels versus helping non-experts find the flaw or do the evaluation. But that fundamentally only works if you have access to expert labels. In the general case, that just won’t be true, right? If you want to solve a real task that is really hard and that humans really struggle to evaluate, they won’t be good to evaluate it.
For example, with the code tasks, if you want to find all the flaws in the code the model knows about, humans won’t find those. Humans are terrible at finding bugs in code; that’s where there’s so much buggy code in the world. But the simple trick is you can introduce bugs in the code, and then you know which version of the code is more buggy, because you made it worse.
So what I’m excited about is fundamentally I want to try all of the scalable oversight ideas that have been proposed, and just actually measure which of them works best and how well they actually work. This is ideas like recursive reward modelling: How can you get human assistants to help humans evaluate what AI is doing? Or debate, where you have two AIs that debate each other on a question and you have a human judge that decides which of them made the more useful statements. Or you could have decomposition, where you’re breaking the task down into smaller chunks and you try to solve those. Or you could do that with your evaluation. There’s automated market making, where you try to change the human’s mind maximally with the assistants.
There’s a whole bunch of these variants. And I feel like I have my personal bets on which of them are going to work best, but I just want to empirically see the results. And I think what’s really exciting: I think we can just measure it, and it’ll be so much better than arguing over it.
Recent developments that make Jan optimistic [01:12:13]
Rob Wiblin: There’s a lot of people out there who are about as informed as you who feel that the technical alignment problem is probably extremely hard, and an effort like this probably only has a slim likelihood of success. But you’re pretty optimistic about things in the scheme of it. What developments or results have there been or that have come out in the last 10 years that have made you have this level of optimism?
Jan Leike: I think actually a lot of things, a lot of development over the last few years, have been pretty favourable to alignment. Large language models are actually super helpful because they can understand natural language. They know so much about humans. Like, you can ask them what would be a moral action under this and this philosophy, and they can give you a really good explanation of it. By being able to talk to them and express your views, it makes a lot of things easier. At the same time, they’re in some sense a blank slate, where you can fine-tune them with fairly little data to be so effective.
If you compare this to how the path to AGI or how the development of AI looked a few years ago, it seemed like we were going to train some deep RL agents in an environment like Universe, which is just like a collection of different games and other environments. So they might get really smart trying to solve all of these games, but they wouldn’t necessarily have a deep understanding of language, or how humans think about morality, or what humans care about, or how the world works.
The other thing that I think has been really favourable is what we’ve seen from the alignment techniques we’ve tried so far. So I already mentioned InstructGPT worked so much better than I ever had hoped for. Even when we did the deep RL from human preferences paper, I came into it being a more than even chance we wouldn’t be able to make it work that well in the time that we had. But it did work, and InstructGPT worked really well. And to some extent, you could argue that these are not techniques that align superintelligence, so why are you so optimistic? But I think it still provides evidence that this is working — because if we couldn’t even get today’s systems to align, I think we should be more pessimistic. And so the converse also holds.
Rob Wiblin: Right. I guess a sceptic might say that we’ve seen improvement in our prospects of these models knowing what it is that we want, or knowing what it is that we care about. But maybe we haven’t seen evidence that they’re going to care about what we care about. So the worry will be that the model’s going to know perfectly what you’re asking for, but that doesn’t mean that it shares your goal. It could pretend that it’s doing that right up until the moment that it flips out on you. Have we seen any evidence for this second thing — that the models actually share our goals — or is that still kind of a black box?
Jan Leike: I think this is a really important point, and I think that’s pretty central to some of the main worries about why alignment might not go well. I do still think that the models actually understanding what we want is an important first step. But then the main question becomes: How do you get them to care? And that’s the problem that we are trying to figure out. But the first one, I mean, it’s great if you already have that.
Rob Wiblin: Yeah. Would you venture to say what your p(doom) is — what’s the probability that you’d assign to a very bad outcome from AI? And has that gone up or down over the last year?
Jan Leike: I don’t think it’s a really useful question, because I think at least I personally feel like my answer would depend a lot more on my current mood than any actual property of the world. And I think in some ways, I think what’s definitely true is the future with AI could go really well, or it could go really badly. And which way it goes, I think it’s still so much up in the air. I think humans just have a lot of causal ownership over which path we’re going down, and even individuals, or individual researchers, can have a big impact in the direction that we’re heading. So I think that’s the much more important question to focus on.
And then if you actually wanted to give a probability of doom, I think the reason why it’s so hard is because there’s so many different scenarios of how the future could go. And if you want to have an accurate probability, you need to integrate over this large space, and I don’t think that’s fundamentally helpful. I think what’s important is: How much can we make things better? And what are the best paths to do this?
Rob Wiblin: Yeah. I didn’t spend a lot of time trying to precisely pin down my personal p(doom). My guess is that it’s more than 10% and less than 90%. So it’s incredibly important that we work to lower that number, but it’s not so high that we’re completely screwed and that there’s no hope. And kind of within that range, it doesn’t seem like it’s going to affect my decisions on a day-to-day basis all that much. So I’m just kind of happy to leave it there.
Jan Leike: Yeah. That’s probably the range I would give too.
So you asked me why I’m optimistic, and I want to give you a bunch more reasons, because I think there’s a lot of reasons. And also, fundamentally, the most important thing is that I think alignment is tractable. I think we can actually make a lot of progress if we focus on it and put effort into it. And I think there’s a lot of research progress to be made that we can actually make with a small dedicated team over the course of a year or four.
Honestly, it really feels like we have a real angle of attack on the problem that we can actually iterate on, we can actually build towards. And I think it’s pretty likely going to work, actually. And that’s really, really wild, and it’s really exciting. It’s like we have this hard problem that we’ve been talking about for years and years and years, and now we have a real shot at actually solving it. And that’d be so good if we did.
But some of the other reasons why I’m optimistic are that, I think fundamentally, evaluation is easier than generation for a lot of tasks that we care about, including alignment research. Which is why I think we can get a lot of leverage by using AI to automate parts of all of alignment research. And in particular, if you can think about classical computer science problems like P versus NP, you have these kinds of problems that we believe it’s fundamentally easier to evaluate. And it’s true for a lot of consumer products: if you’re buying a smartphone, it’s so much easier to pick a good smartphone than it is to build a smartphone. Or in organisations, if you’re hiring someone, it has to be easier to figure out whether they’re doing a good job than to do their job. Otherwise —
Rob Wiblin: You should work by yourself.
Jan Leike: — you don’t know who to hire, right? And it wouldn’t work. Or if you think about sports and games, where sports wouldn’t be fun to watch if you didn’t know who won the game, and it can be hard to figure out was the current move a good move, but you’ll find out later. And that’s what makes it exciting, right? You have this tension of, “This was an interesting move. What’s going to happen?” But at the end of the game, you look at the chessboard, you look at the go board, you know who won. At the end of the day, everyone knows. Or if you’re watching a soccer game, and the ball goes in the goal, it’s a goal. That’s it. Everyone knows.
And I think it is also true for scientific research. There’s certain research results that people are excited about, even though they didn’t know how to produce them. And sometimes we’re wrong about this, but it doesn’t mean that we can do this task perfectly — it’s just that it’s easier.
Rob Wiblin: Yeah. So a criticism of this approach is, if we don’t know how to solve the alignment problem, then how are we going to be able to tell whether the advice that these models are giving us on how to solve it is any good? And you’re saying that often it can be a lot easier to assess whether a solution is a good one, or whether something works or not, than it is to come up with it. And so that should make us optimistic that we don’t necessarily have to generate all of these ideas ourselves — it might be just sufficient for us to be able to tell after they’ve been generated whether they’re any good or not, and that could be much more straightforward.
Jan Leike: Yep. That’s exactly right. And then there’s other things. Like, I think we can actually set ourselves up for iteration. I think we can just stare at the current systems, we can improve the alignment, we can do stuff like measure whether we’re finding all the bugs that the model is aware of. We can set ourselves these metrics. I mean, they’re not going to take us all the way to aligning superintelligence. But they will be super helpful for making local improvements.
And if your goal is to align a system that could help us do alignment research, one really good testing ground is “Can you make GPT-5 more aligned?” Maybe the techniques that you actually need or that you actually care about won’t really work that well in GPT-5 yet. Who knows? But if you’re not making progress along the way, it’s really hard to make the case that you’re actually making progress towards the actual goal. And at the same time, you need some kind of feedback signal from the real world to know that you’re improving, that you’re doing something that’s real. And you have to do that carefully, obviously. You can set up an eval that doesn’t matter, but that’s part of the challenge here.
Rob Wiblin: Yeah. Any other reasons for optimism?
Jan Leike: The other really good one is that we’re not actually trying to align the system that’s vastly smarter than us. It’s always hard if you picture a dumber system aligning a smarter system. And if you make the differential really large, it seems so daunting — but I think it’s also not the problem that we actually realistically have to aim for, because we only have to aim for this human-level or “roughly as smart as the smartest alignment researchers” system. And if you can make that really aligned, then you can make all the progress that you could make on this problem.
Originally, when I set out to work in alignment research, this realisation wasn’t clear to me. I was like, “Oh, man, this problem is hard. How do we do it?” But if you’re shooting for this much more modest minimal viable product, it actually looks so much more achievable.
Rob Wiblin: So could you stylise the approach as saying: Don’t obsess about whether you can align GPT-20. Let’s work on aligning GPT-5. And then in collaboration with GPT-5, we’ll figure out how to align GPT-6. And then in collaboration with all of them, we’ll work together to align GPT-7. That’s kind of the basic idea?
Jan Leike: Yeah. And you want to do this empirically. Like, maybe you look at GPT-5 and the system still isn’t smart enough. We tried this whole bunch with GPT-4: trying to fine-tune it on alignment data, trying to get help on our research. It just wasn’t that useful. That could happen with GPT-5 too. But then, we’ll be like, “OK, let’s focus on GPT-6.” But you know, we want to be on the ball when this is happening. We want to be there when it becomes possible, and then really go for it.
Ways this might not work out [01:23:38]
Rob Wiblin: OK, so that’s a bunch of reasons for optimism. I want to go through a couple of objections, or ways that this might not work out as hoped. One that I’ve seen a lot of people mention is just how are you going to be able to tell whether you’re succeeding? You might think that this is working, but how would you ever really have confidence? Especially if there’s successful deception going on, then you could be lulled into a false sense of security. What do you think about that? How could you tell?
Jan Leike: I mean, this is one of the central problems: How do you distinguish the deceptively aligned system and the truly aligned system? This is the challenge that we’re trying to figure out. This is why we’re looking at if we can get the model to tell us all the bugs that it’s aware of. This is why we want to train deceptively aligned models to see if they can pass our evals. And by stress testing our methods and really drilling into what’s going on inside of the model, I think we can learn so much about this problem, and really scope and understand the risks that remain, or the areas where we are most uncertain about how it could deceive us.
Rob Wiblin: Yeah. So you could fail at the first step, perhaps, where the first model that you’re trying to collaborate with in this project isn’t aligned, but you don’t realise that, and so it just starts leading you down a bad path. And then at some point things will go badly, but ultimately, the problem was at the very beginning. And then I guess you could also start out well, but then not be able to tell whether the further iterations are going in the right direction. Problems could creep in there, and you’re not noticing them, and so that could lead you down a bad path. I guess it sounds like you’re just saying that this is the problem that we have to solve. Like, yeah, things might fail in all of these different ways, and that’s why we need people to come and figure out how to gain confidence.
Jan Leike: Exactly. And fundamentally, I’m much more worried about the question of “Can we really precisely know how aligned the system is?” than I am about the question of “How can we make it more aligned?” Because I think a lot of the risks come from uncertainty about how aligned the system actually is.
Rob Wiblin: Yeah. Can you explain that?
Jan Leike: So in the sense that I don’t think anyone will be excited to deploy a system that you know is misaligned and that wants to take over the world. So if you can precisely measure how aligned the system truly is, or if you’re confident in your measurement apparatus that tries to understand how aligned the model is, then I think you’ve actually solved a large part of the problem. Because then you know where you’re at, and then you can much more easily work on methods that improve alignment. And you have to be careful the way you do it — so you don’t, you know, train on the test set — but I think fundamentally, a lot of the problem is just knowing exactly where you are.
Rob Wiblin: Yeah. Someone from the audience had this question: “How do you plan to verify ahead of time, before the ‘first critical try,’ that the alignment solution proposed by AI scales all the way to superintelligence and doesn’t include accidental or intentional weaknesses? What happens if it does?” I guess it’s just that people are very nervous, really nervous, that if this doesn’t work out, it’s pretty scary.
Jan Leike: Honestly, I mean, it’s a really high-stakes problem, and that’s what makes it so important to work on. But also, I think it’s really oversimplified to have a mental picture where we have this automated alignment researcher. We press a button. It just says, “Here’s what you should do,” and then we just do it and hope for the best. I don’t think that the first thing the system does is align superintelligence. I think it’ll just align GPT-N+1. And we’ll be very in the loop and looking at all of the results, and we’ll publish it and show it to others: “What do you think about this result? Do you think this is a good idea? Should we do that?”
And I think at the same time, we’ll have all of these other tools. We’ll hopefully have much better interpretability. We’ll understand the robustness of our models much better. Or we’ll have a lot of automated tools to monitor as the system is doing its alignment research, where all these automated tools will be looking over its shoulders and trying to make sense of what’s going on. Or you know, if we can really understand the generalisation on a fundamental level, can we have a system that we are much more confident generalises the way humans would actually want and not the ways that we would say we want? Or like, ways that we can check or something.
And if we fundamentally understand these problems, or we do a good job at moving in these directions, I think we’ll just have so much more evidence and so many more reasons to believe the system is actually doing the right thing or it’s not. And that’s what we’re trying to figure out
Rob Wiblin: Yeah. So the announcement of this project says that we don’t know how to align superintelligence now. And if we deployed superintelligence without having a good method for aligning it, then that could be absolutely disastrous. What happens if, in four years’ time, you think that you haven’t solved the issue? Or in eight years’ time or 10 years’ time? Just like, “Well, we’ve been working at it. We’ve made some progress, but don’t have confidence that we’re close to being able to align superintelligence.” But the capabilities have really gone ahead, and we might be close to deploying the kind of thing that you would be really worried about deploying if it weren’t aligned. Is there a plan for how to delay that deployment if you and your team just think it’s a bad idea?
Jan Leike: I think the most important thing at that stage is we just have to be really honest with where we’re at. And in some ways I think the world will demand us to be honest, right? And then not just say what we totally believe, but also show all the evidence that we have. And I think if you get to this point where the capabilities are really powerful, but at the same time our alignment methods are not there, this is when you’d really be making the case for, “Hey, we should all chill out.”
And this isn’t primarily about OpenAI, right? At this point, you’ve got to get all the AGI labs together and figure out how to solve this problem. Allocate more resources, slow down capabilities. I don’t know what will happen, but I think the prerequisite is still you’ve got to figure out where you’re at with alignment. We still have to have tried really hard to solve the problem in order to be able to say, “Look, we tried really hard. Here’s all the things we tried. Here’s the results. You can look at them in detail. And if you looked at all of this, you would probably come to the same conclusion as us, which is that we don’t think we’re there yet.” And that’s why I’m saying we just need to be really honest about it.
And then in conjunction with that, this is why we’re also making this commitment. We want to share the fruits of our effort widely. We want everyone else’s models to be aligned too. We want everyone who’s building really powerful AI, it should be aligned with humanity. And we want to tell other people all the things we figure out about how to do this.
Rob Wiblin: Yeah. I see people worried about various different ways that you could make some progress, but not get all the way there, but then people could end up deploying anyway. I guess one concern people will have is that you might be overconfident. So you might fall in love with your own work and feel like you’ve successfully solved this problem when you haven’t. I guess another thing would be that maybe you’ll say to other people at OpenAI, “We don’t feel like we solved this issue yet. I’m really scared about this,” but then they don’t listen to you because maybe some commercial reasons, or, I don’t know, internal politics or something that prevents it from helping. And I guess another method would be, well, the people at OpenAI listen to you, but the rest of the world doesn’t, and someone else ends up deploying it.
I don’t want to heap the weight of the universe on your shoulders, but do you have any comments on these different possible failure modes?
Jan Leike: I think that’s why we want to be building the governance institutions that we need to get this right. At the end of the day, I don’t think it’ll be up to me to decide is this now safe to go or not? We are doing safety reviews internally at OpenAI before a model goes out. There’s the OpenAI board that has the last say over is OpenAI going to do this or not? And, as you know, OpenAI has this complicated capped-profit structure, and the nonprofit board is actually in charge of what OpenAI does ultimately. So they can just decide to make the call of we’re not deploying even though there’s a commercial reason to.
And then for the world in general, at the end of the day it can affect everyone. And governments have to get involved somehow or we need something like an International Atomic Energy Agency for AI that can help make these kinds of decisions in a technically grounded way. And that’s why the kind of things that I want to do and that we want to do with Superalignment is zoom in on the technical challenges to really understand where we are, but also actually make progress on the problem and try really hard and focus on actually solving it.
Rob Wiblin: An objection that I don’t think I’ve seen, but that occurred to me when I was reading about the approaches, is: could it be the case that it’s actually easier to self-exfiltrate — that is, for a model to break out of the lab — and do something really bad, like invent new bioweapons and release them and cause an enormous amount of damage, and that that could actually be an easier skill than aligning AI? So we might possibly hit that capability to do a tonne of damage before these models are actually very helpful to you and your team in making progress on alignment.
Jan Leike: Yeah. I think self-exfiltration is one of the really key capabilities to be looking at, because there’s a really important difference between the system being at the lab and in our data centre in a way that we can control it — like, we can turn off the data centre, we can spin down the engine, we can delete the snapshot if we want to — and whether it’s out in the world, and it’s trying to sustain itself, or it’s trying to build better AI models.
The question then becomes how can you measure whether the model can break out? Or can it say introduce security vulnerabilities or exploit security vulnerabilities that exist in our infrastructure? Right now it can’t do that, but future models could. Or can it persuade an OpenAI employee to help it exfiltrate its weights. That’s the other path: you just try to persuade humans. You come up with some arguments that are believable to them why they should do that. Could be pretty hard. I don’t know. GPT-4 can’t do this, but future models might. And so I think looking at this is a really important distinction.
And then going to your question of what if this happens first, I think there’s some extent where you can make self-exfiltration harder by just traditional security measures. But at some point, this will be an alignment problem, where you actually have to show that the system is not trying to break out; it doesn’t want to.
I think there’s a lot of uncertainty in general over how the technology will go, and what kind of abilities will be unlocked first. But I’m pretty optimistic that we will get a lot of really useful stuff out of the models before this kind of thing can happen. But of course, that’s why we need to measure this, because we can’t just make some wild guesses.
Best arguments against the Superalignment approach in Jan’s opinion [01:35:31]
Rob Wiblin: OK, so those are some objections I’ve read online, and one from me. But I’m curious to know: If you were playing devil’s advocate, what’s the best argument against this whole approach that you’re taking, in your opinion?
Jan Leike: I think you can object on a bunch of different levels. I think you could object that automated alignment research will come too late to really help us, as you mentioned. Like, we have to solve a lot of the problems themselves. And to some extent, if that’s true, we’re still probably going to do the same things we’re doing now, which is just that we’re trying to make more alignment progress so that we can align more capable systems. That also means that you’re kind of raising the bar for the first catastrophically misaligned system, for example.
I think there’s more detailed objections that you could make on how we build a research portfolio of the particular paths that we’re excited about: scalable oversight, generalisation, robustness, adversarial testing, interpretability, that sort of stuff. And we can go into details of each of these paths, and what I think the best objections are to each of them.
And then you can also say, why are you doing this job at an AI lab? Aren’t you going to face some competing incentives? Like you mentioned with if the lab wants to deploy, and how do you square that with wanting to be as aligned as possible? And I think fundamentally, AI labs are one of the best places to do this work, just because you are so close to the technology. You see it as it is being developed. We got to try a lot of things with GPT-4 before it came out, and because we were hands-on aligning it, we knew exactly where we were at, and what the weaknesses are, and what actually works. And I think that’s pretty useful. Also, AI labs are really well resourced, and they have an incentive to spend on alignment, and they should. It’s great.
Rob Wiblin: I think I don’t share that objection. It reminds me of… What’s the quote? “Why do you rob banks? That’s where the money is.” I feel like, “Why would you do alignment research at OpenAI? That’s where all the cutting-edge research is. That’s where the cutting-edge models are.” The case kind of writes itself.
Jan Leike: Yeah. I mean, I don’t think OpenAI is the only place to do good alignment work. There’s lots of other places that do good alignment work.
Rob Wiblin: Yeah. It’s just clear it has some big advantages. I’m not saying everyone should necessarily work at OpenAI or one of the labs. There’s things you can do elsewhere, but surely some people should be at the labs.
Maybe a good way of approaching this question of the biggest weaknesses or the best objections is: If you couldn’t take this approach, and the Superalignment team had to take quite a different approach to solving this problem, do you have kind of a second favourite option in mind?
Jan Leike: Yeah. And to be clear, I think our general path and approach will change over the four years, and we’ll probably add more research areas as we learn more, and maybe we give up on some other ones. I think that’s the natural course of research.
I kind of want to modify your question a little bit, because right now, we are doing the things I’m most excited about for aligning human-level systems. In terms of other things I’m excited to see in the world that we’re not doing, I think there’s a lot of work to be done on evaluating language models that we are not doing — like measuring the ability to self-exfiltrate, for example. It’ll be super useful if we can get more of that.
I think there’s a lot of interpretability work on smaller models or open source models that you can do, where you can make a lot of progress and have good insights. We’re not doing that because our comparative advantage is to work with the biggest models. That’s why we are focusing on automated interpretability research. That’s why we are trying to poke at the internals of GPT-4 and see what we can find. I think that’s something we’re well positioned to do.
I also still have conviction that there’s interesting and useful theory work, mathematical theory work to be done in alignment. I think it’s really hard because we don’t have a really good scoping of the problem, and that’s probably the hardest part by far.
But ultimately, maybe the reverse of the question is: What are the things that we have an advantage at doing at OpenAI? And this is like, “Here’s the biggest models. Go bet on paths that leverage a lot of compute to solve the problem. Work in small teams work closely together. Don’t focus on publications per se.” We’re not writing a lot of papers, right? We’re trying to push really hard to solve particular aspects of the problem. And then when we find something interesting, we’ll write it up and share it. But if it’s not a lot of papers, it’s fine. That’s not what we’re trying to do.
And so another focus that we have is we focus a lot on engineering, where we want to run empirical experiments. We want to try a lot of things and then measure the results. And that takes a lot of engineering on large codebases, because we are using these giant models. We’re not always using them; there’s a lot of interesting experiments you can run on smaller models. And at the end of the day, a fair amount of the work is ML engineering. And that’s something that we are well positioned to do as well.
Rob Wiblin: Is there any way that this plan could not work out that keeps you awake at night? That we haven’t already mentioned and that’s worth flagging?
Jan Leike: Oh, man. There’s so many reasons.
What if our scalable oversight doesn’t actually work, or we can’t figure out how to make it work?
Are we actually measuring the right thing? I think that’s also a lot of the things I keep circling in my head: How can we improve what we’re measuring? For example, with automated interpretability, we have this score function that tries to measure how good the explanation of the neuron is. But it’s approximated with a model; it’s not actually using a human. And you wouldn’t want to just optimise that function; I don’t think you would get what you were looking for. And to some extent, that’s the core of the alignment problem: How do you find the right metric? The metric that you can actually optimise? So this is something I worry a whole lot about.
And then there’s also just, are we making the right research bets? Should we be investing in this area more? Should we invest in this other area less?
Rob Wiblin: So there’s plenty of ways things can go wrong. So at the point where these models are giving you research ideas, and they’re trying to help you out, it seems like you need to have a lot of people in the loop somehow checking this work — making sure that it makes sense, cross-checking for deception, and so on. It seems like it could just absorb a lot of people doing that. Would it be possible that the project could fail just because you don’t have enough FTEs? You don’t have enough people working on it in order to keep up?
Jan Leike: Yeah. I mean, we’re really trying to hire a lot right now. I think the team will grow a fair amount over the four years. But I think ultimately the real way for us to scale is using AI. With the compute commitment, we could have millions of virtual FTEs if we so want. That’s not a size that the Superalignment team could ever realistically grow in terms of humans. And so that’s why we want to bet so heavily on compute, and bet so heavily on that kind of path.
Rob Wiblin: But if you got kind of a ratio of a million AI staff to one human staff member, isn’t it possible to kind of lose touch? The thing is that you kind of trust the alignment of the humans, even though they’re worse in other ways. So they are the ones who are doing some ultimate checking that things haven’t gone out of control, or that bad ideas aren’t getting through — admittedly, with assistance from others.
Jan Leike: Exactly. But this is the problem we’re trying to solve, right? There’s a large amount of work that will be going on, and we have to figure out which of it is good. Is there something shady about any of it? What are the results that we should actually be looking at? And so on. And “How do you solve this problem?” is the question we’re asking, right? How can you make scalable oversight work so that you can trust this large amount of virtual workers that you’re supervising? How can you improve generalisation, so you know they will generalise to do the right thing? And not do the thing that the human wouldn’t notice that I’m doing, or something.
Rob Wiblin: Does it end up becoming a sort of pyramid structure, where you’ve got one person, and then they’ve got a team of agents just below that who they supervise. And then there’s another team of agents below, at the next management level down, who are doing another kind of work that are reporting upwards. And then you have layers below. Is that one way of making it scale?
Jan Leike: I mean, you could try to have a more traditional-looking company. I don’t think that’s literally how it’s going to go. One thing we’ve learned from machine learning is that systems are often just really good at some tasks and worse than humans at other tasks, so you would preferentially want to delegate the former kind of tasks. And also, I don’t think the way it’ll be organised will look like the way that humans organised themselves, because our organisations are tailored to how we work together.
But these are all really good questions. These are questions that we need to think about, and we have to figure it out, right?
Backup plans [01:45:16]
Rob Wiblin: So you and your team are going to do your absolute best with this, but it might not work out. I suppose if you don’t manage to solve this problem, and we just barrel ahead with capabilities, then the end result could conceivably be that everyone dies. So in that situation, it seems like humanity should have a backup plan, hopefully several backup plans, if only so that the whole weight of the world isn’t resting on your shoulders, so that you can get some sleep at night.
What sort of backup plan would you prefer us to have? Do you have any ideas there?
Jan Leike: I mean, there’s a lot of other kinds of plans that are already in motion. This is not the world’s only bet. There’s alignment teams at Anthropic and DeepMind; they’re trying to solve a similar problem. There’s various ways you could try to buy more time or various other governance structures that you want to put in place to govern AI and make sure it’s used beneficially. I think solving the core technical challenges of alignment are going to be critically important, but won’t be the only ones. We still have to make sure that AI is aligned with some kind of notion of democratic values, or not something that tech companies decide unilaterally. And we still have to do something about misuse from AI. And aligned systems wouldn’t let themselves be misused if they can help it.
But, you know, there’s still a question of how it fits into the larger context of what’s going on in society, right? As a human, you can be working for an organisation that you don’t really understand what it does, and it’s actually net negative without you being able to see that. Or you know, just because we can align OpenAI’s models, doesn’t mean that somebody else doesn’t build unaligned AI. How do you solve that problem? That seems really important. How do you make sure that AI doesn’t differentially empower people who are already powerful, but also helps marginalised groups. That seems really important.
And then, ultimately, you also want to be able to avoid these structural risks. Let’s say we solve alignment, and everyone makes systems really aligned with them. But then what ends up happening is that you kind of just turbo-charged the existing capitalist system. Essentially, corporations get really good at maximising their shareholder returns because that’s what they aligned AIs to. But then humans fall by the wayside where that doesn’t necessarily encompass all the other things you value — clean air or something. And we have seen early indications of this. Global warming is happening even though we know the fundamental problem, but progress and all the economic activity that we do still drives it forward. And so even though we do all of these things right, we might still get into a system that ends up being bad for humans, even though nobody actually who participates in the system wants it that way.
Rob Wiblin: So you’re going to do your job, but a lot of other people have also got to do their jobs. It’s a broad ecosystem.
Jan Leike: That’s right. There’s a lot to do. We need to make the future go well, and that requires many parts, and this is just one of them.
Audience questions [01:48:36]
Rob Wiblin: OK, let’s skip now to some audience questions, which, as I said, were particularly numerous and spicy this time around. These questions are probably going to jump around a little bit, but I think just throwing these at you will give us a good impression of what’s on people’s minds.
Jan Leike: Yeah. Let’s do it.
Rob Wiblin: OK, first one: “Why doesn’t OpenAI try and solve alignment with GPT-4 first — for example, get it to the point where there are zero jailbreaks that work with GPT-4 — before risking catastrophe with more advanced models?”
Jan Leike: This is a great question. You can point to all the ways that alignment doesn’t quite work yet. Jailbreaks are one of them, but also hallucinations: the system just makes up stuff, and it’s a form of lying that we don’t want in the models. But to some extent, getting really good at that wouldn’t necessarily help us that much at solving the hard problems that we need to solve in aligning superintelligence. I’m not saying we should stop working on those, but we also need to do the forward-looking work.
In particular, the thing that I want to happen is I want there to be the most alignment progress across the board as possible — so when GPT-5 comes around, or as models get more capable, we have something that’s ready to go, and we have something that helps a lot with those kind of problems.
Rob Wiblin: OK, yeah. Another question: “Does the fact that GPT-4 is more aligned than GPT-3.5 imply that the more capable the model is, the more aligned it will be?” I know not everyone is going to accept the premise here, but what would you say to that?
Jan Leike: I think people also have pointed out that because GPT-4 is still jailbreakable, and it is more capable, in some sense the worst-case behaviour is worse. So even though on average, it’s much better, you can make a case for that. But I think also, even if it was just better across the board, I don’t think at all we should bet on that trend continuing. There’s plenty of examples of cases in machine learning where you get some kind of inverse scaling — where it gets better for a while and then it gets worse.
And to some extent, we know the models haven’t reached this critical threshold where they are as smart as us or they could think of a lot of really good ways to try to deceive us. So they don’t have that much situational awareness: they don’t know that much about how they are, in fact, a language model that’s being trained, and how they’re being trained. They don’t really understand that. But once they do, it’s kind of a different ballgame; you’re going to be facing different problems.
And so just extrapolating from some kind of trend that we see now I don’t think would be right in either way. But I do think you can learn something from it; I just don’t think you should jump to that conclusion.
Rob Wiblin: Yeah. What’s most intellectually exciting about this project from a mainstream ML perspective?
Jan Leike: I think we’ll learn a lot about how big neural networks actually fundamentally work. Like, if you think about the work that we’re trying to do on generalisation, it is just kind of weird that we don’t understand why models sometimes generalise in one way and sometimes in another way. Or how can we change the ways that they can generalise? Like, why can’t we just list all the possible ways and then see which ones work? Or how can we get them into each of the ones? Or what’s the mechanism that really happens here? We don’t know that — and why don’t we know that?
Or if you think about interpretability, just being able to understand the mechanisms by how the models are deciding which token to output next will teach us a lot about what’s going on there. How does it actually work? How does learning work? How do they… I don’t know. It’s super weird.
Rob Wiblin: It seems like on some level, this is the whole thing.
Jan Leike: It’s the whole thing!
Rob Wiblin: I mean, people are spending enormous amounts of effort increasing capabilities by just throwing more compute and more data into these models. And then they could just get this further inscrutable machine that they don’t understand, that is very cool in a way because it could do stuff. But it sounds like at some point, maybe the more interesting thing is how does it work? — which is what you’re going to be working on.
Jan Leike: Yeah. But at the same time, there are really concrete things you can say. Like induction heads, right? You can find these attention heads that do very specific things like induction. You can, you know, somebody reverse engineered the circuit that does simple arithmetic in a small model. You can actually do that. Or we found the Canada neuron. It’s just there; we found it. There’s so much still to find because we just know so little, and it’s kind of crazy not to look at that.
Rob Wiblin: Yeah. I imagine that there are some structures in these networks that are going to be analogous to things that the human brain does, and we will probably be able to figure out how they work in these networks long before we figure out how they work in the human brain, because we have perfect data about all of the weights and activities of this model. So it seems like all of the people studying the brain should just switch over and start working on understanding GPT-5.
Jan Leike: Exactly. It’s so much easier. Your life will be so much easier. Yeah. I don’t know why more people don’t do it. It seems so compelling to me, but I’m not a neuroscientist. And maybe some of the insights will also transfer, right? Like, some of the neurons that we know vision models have you can also find in humans and animals. Or these kind of edge filters. Or if you look at reinforcement learning, where we have evidence for how reinforcement learning works in the human brain, but we have so much more evidence how it works in neural networks, because we freaking build it. So it’s so much easier.
Rob Wiblin: What do you think have been the biggest wins in technical AI safety so far?
Jan Leike: If I had to pick one, it would probably be RLHF. In some ways, I think RLHF really put alignment on the map, and it also demonstrated that alignment has a lot of value to add to how systems are actually being built. And I think the fact that it actually had a whole bunch of commercial impact has been really good. Because it really demonstrates real-world value in a way that, if you’re just trying to solve this abstract problem — and aligning superintelligence is a super abstract problem, and you could kind of noodle on it for many years without making clear measurable progress — and I think not only does RLHF have this really visceral difference between how the model was before and how it was after, that everyone can really see when they play with it, but also it makes it clear that this is an area that’s really worth investing in and taking a bet on. Even the things that aren’t obviously working yet, or might be still in the stage of being really abstract.
Rob Wiblin: Yeah. Is there a number two?
Jan Leike: I think there’s a number of smaller wins that we’ve had. It’s hard to make these rankings. If I wanted to add other things, I think interpretability of vision models has been pretty impressive. I think there’s been a lot of progress in that. If you’re asking in terms of safety impact or alignment impact, it’s maybe less clear, because there’s no things you can really point to that follow directly from that.
Rob Wiblin: OK, here’s a question that was kind of a recurring theme among listeners: “What gives OpenAI the right to develop artificial general intelligence without democratic input as to whether we want to actually develop these systems or not?”
Jan Leike: I think this is an excellent question. I think it’s also a much wider question; like, I think we should have democratic input to a lot of other things as well. You know, how should the model behave? Or should we deploy it in this way? Should we deploy it in this other way? And you know, OpenAI’s mission is to develop AI that benefits all of humanity, but you know, you have to give humanity a say into what’s happening. This is not what the Superalignment team does, but I think it’s going to be very important.
Rob Wiblin: Yeah. It sounds like you’re just on board with there needs to be some integration between the AI labs and democratic politics. Where the public has to be consulted, people have to be informed about the risk and the benefits that come here, and there needs to be some sort of collective decision about when and how these things are going to be developed and deployed. I guess we just currently don’t have the infrastructure to do that. And I mean, that’s partly OpenAI’s responsibility, but it’s also partly the responsibility of the whole of society. As long as OpenAI is willing to collaborate in that, then there just needs to be a big effort to make it happen.
Jan Leike: I think that’s right. And I’m really happy that OpenAI is really willing to speak openly about the risks, and speak openly about where we are at. And I see my responsibility also to inform the public about what is working on alignment and what isn’t, and where we are at, and where we think we can go. But yeah, at the end of the day, also governments will have a role to play on how this all goes.
Rob Wiblin: Yeah. If Congress investigates all of this, and it concludes that it’s uncomfortably dangerous, and they think that a bunch of this research needs to be stopped, do you think that the AI labs would be willing to go along with that? That this is what a more democratic, a more legitimate process, has output, and so we should be good citizens and slow down or stop?
Jan Leike: Yeah. I mean, AI companies have to follow the laws of the country they’re in. That’s how this works. But I think what’s going to happen is we will have regulation of frontier AI technology, and people are trying to figure out how to do that, and we should try to do it as sensibly as possible.
I think there is the larger question of how can you not just have something that works let’s say in the United States or in the United Kingdom, but worldwide. If there are ways to build AI that are actually really dangerous, then that has to apply to everyone, and not just specific countries. I think that’s also a key challenge. It’s also not a challenge I’m personally working on. But I think we need to solve that, and I’m excited for anyone who’s working on that problem.
Rob Wiblin: Yeah. Something that makes me a bit pessimistic is just that it seems like we don’t just need to solve one thing; we need to solve many things. And if we mess up maybe just one of them, then that could be very bad. We don’t just need to have a technical solution, but we need to make sure it’s deployed in the right place, and everyone follows it. And then even if that works, maybe you could get one of these structural problems where it’s doing what we tell it to, but it makes society worse.
Jan Leike: Well, see it as a flip side of all of this: there’s so much opportunity to shape the future of humanity right now that you — like, the listener — could be working on, and could have a lot of impact. And yeah, I think there’s just so much work to do. And there’s a good chance we actually live at the most impactful time in human history that has ever existed, and that will ever exist. Kind of wild, super wild, could be the case. I don’t know.
Should we be worried about connecting models to everything? [02:01:01]
Rob Wiblin: Yeah. Back in March, you tweeted:
Before we scramble to deeply integrate large language models everywhere in the economy, can we pause and think about whether it is wise to do so? This is quite immature technology and we don’t understand how it works. If we’re not careful we’re setting ourselves up for a lot of correlated failures.
A couple of days after that, OpenAI opened up GPT-4 to be connected to various plugins through its API. And one listener was curious to hear more about what you meant by that, and whether there might be a disagreement within OpenAI about how soon GPT-4 should be hooked up to the internet and integrated into other services.
Jan Leike: Yeah. I realised that tweet was somewhat ambiguous, and it was read in lots of different ways. Fundamentally, what plugins allow you to do is nothing on top of what you could do with the API, right? Plugins don’t really add anything fundamentally new that people couldn’t already do. And I think OpenAI is very aware of what can go wrong when you hook up plugins to the system — you know, you have to have the sandbox, you have to be careful when you let people spend money, and all of these questions, But they’re also like sitting right next to us, and we talk to them about it, and they’ve been thinking about it.
But given how much excitement there was to just try GPT-4 on all the things, what I really wanted to do also is say: look, this is not quite mature. The system will fail. Don’t connect it to all of the things yet. Make sure there’s a failback system. Make sure you’ve really played with the model to understand its limitations. If you have the model write code, make sure you’re reading the code and understanding it, or executing it in the sandbox, because otherwise, wherever you’re writing the code, it might break that system. And just be careful. Be wise. Make sure you understand what you’re doing here, and not just hook it up to everything. Like, see how it goes.
Rob Wiblin: Is there anything that people are using GPT-4 for where you feel like maybe it’s premature, and we should slow down and do some more testing?
Jan Leike: I mean, probably. I don’t know if I can give you some good examples, but I think that’s generally the story with new technologies, right? I’m fundamentally a techno-optimist, and I think we should use AI for all the things that it’s good for. And to some extent, we just spent an hour talking about how great it would be to use AI for alignment research — which is my job, so I’m trying to replace myself at my job with AI. But at the same time, you also have to really understand the limitations of this technology. And some of it is not obvious, and some of it is not widely known. And you have to do that in order to just deploy it responsibly, and integrate it responsibly — integrate it into society in a way that is actually wise to do.
As always with new technologies, I think we’ll try a lot of things. And I’m also excited for people to try a lot of things. That’s why I think it’s good that the OpenAI API exists, and it lets lots of people use cutting-edge language models for all kinds of things, but you want to be also careful when you’re doing that.
Rob Wiblin: Yeah. On this topic of just plugging things into the internet, many years ago, people talked a lot about how they kind of had this assumption that if we had the intelligence system that was as capable as GPT-4, that probably we would keep it in a lead-contained box and wouldn’t plug it up to the internet, because we’d be worried about it. But it seems like the current culture is just that as soon as a model is made, it just gets deployed onto the internet right away.
Jan Leike: That’s not quite right. We had GPT-4 for eight months before it was publicly available. And we did a lot of safety tests; we did a lot of red teaming. We made a lot of progress on its alignment, and we didn’t just connect it to everything immediately. But I think what you’re actually trying to say is, many years ago, people were arguing over, “If you make AGI, can’t you just keep it in the box? And then it’ll never break out and will never do anything bad.” And you’re like, well, it seems like that ship has sailed. We’re connecting it to everything. And that’s partially what I’m trying to allude to here: we should be mindful when we do connect it.
And just because GPT-4 is on the API, it doesn’t mean that every future model will be immediately available for everything and everyone in every case. This is the difficult line that you have to walk, where you want to empower everyone with AI, or as many people as possible, but at the same time, you have to also be mindful of misuse, and you have to be mindful of all the other things that can could go wrong with the model, misalignment being one of them. So how do you balance that tradeoff? That’s one of the key questions.
Rob Wiblin: It seems like one way of breaking it up would be connected to the internet versus not. But I feel that often people — I’m guilty of this as well — we’re just thinking that either it’s deployed on the internet and consumers are using it, or it’s safely in the lab, and there’s no problem. But there’s intermediate stage where —
Jan Leike: There could also be problems if you have it in a lab.
Rob Wiblin: That’s what I’m saying. That’s exactly what I’m saying. And I feel like sometimes people lose track of that. You know, misuse is kind of an issue if it reaches the broader public, but misalignment can be an issue if something is merely trained and is just being used inside a company — because it will be figuring out how it could end up having broader impacts. And I think because we tend to cluster all of these risks, or tend to speak very broadly, the fact that a model could be dangerous if it’s simply trained — even if it’s never hooked up to the internet — is something that we really need to keep in mind. I guess it sounds like, at OpenAI, people will keep that in mind.
Jan Leike: That’s right. And safety reviews really need to start before you even start the training run, right?
Did the release of ChatGPT increase or reduce AI extinction risk? [02:07:15]
Rob Wiblin: Yeah. OK, here’s another question: “OpenAI’s decision to create and launch ChatGPT has probably sped up AI research because there’s now a rush into the field as people were really impressed with it. But it has also prompted a flurry of concerns about safety and new efforts to do preparation ahead of time to see off possible threats. With the benefit of hindsight, do you think the move to release ChatGPT increased or reduced AI extinction risk, all things considered?”
Jan Leike: I think that’s a really hard question. I don’t know if we can really definitively answer this now. What do I think? I think, fundamentally, it probably would have been better to wait with ChatGPT and release it a little bit later. I think also, to some extent, this whole thing was inevitable, and at some point, the public will have realised how good language models have gotten. You could also say it’s been surprising that it went this long before that was the case. And I was honestly really happy how much it has shifted the conversation, or advanced the conversations — around risks from AI, but also the real alignment work that has been happening on how we can actually make things so much better, and we should do more of that. And I think both of these are really good. And you can now argue over what the timing should have been and whether it would have happened anyways. I think it would have happened anyways.
On a high level, people are asking these questions, which are really good questions to ask, like: Can we all just stop doing AI if we wanted to? It feels so easy. Just stop. Just don’t do it. Like, wouldn’t that be a good thing? But then also in practice, there’s just so many forces in the world that keep this going, right? Like, let’s say OpenAI just decides we’re not going to train a more capable model. Just not do it. OpenAI could do that. And then there’s a bunch of OpenAI competitors who might still do it, and then you still have AI. OK, let’s get them on board: let’s get the top five AGI labs, or the five tech companies that will train the biggest models, and get them to promise it. OK, now they promised. Well, now there’s going to be a new startup. There’s going to be tonnes of new startups.
And then you get into how people are still making transistors smaller, so you’ll just get more capable GPUs — which means the cost to train a model that is more capable than any other model that has been trained so far still goes down exponentially year over year. So now you’re going to semiconductor companies, and you’re like, “Can you guys chill out?” And fine, you could get them on board. And now there’s upstream companies who work on UV lithography or something, and they’re working on making the next generation of chips, have been working on this since the 90’s. And then you get them to chill out.
It’s a really complicated coordination problem, and it’s not even that easy to figure out who else is involved. Personally, I think humanity can do a lot of things if it really wants to. And if things actually get really scary, there’s a lot of things that can happen. But also, fundamentally, I think it’s not an easy problem to solve, and I don’t want to assume it’s being solved. What I want to do is I want to ensure we can make as much alignment progress as possible in the time that we have. And then if we get more time, great. Then maybe we’ll need more time, and then we’ll figure out how to do that. But what if we don’t? I still want to be able to solve alignment. I still want to win in the worlds where we don’t get extra time — where, for whatever reason, things just move ahead. And so however it goes, you could still come back to the question of, “How do we solve these technical questions as quickly as possible?” And I think that’s what we really need to do.
Rob Wiblin: Yeah. I’ve seen online that there are people who are trying to slow things down, basically, to buy more time for you and your team, among others. And there’s some people who are staking out a really extreme view that they just want to stop progress on AI; they just want to completely stop it globally for some significant period of time — which seems, as you’re saying, like a very heavy lift. I guess I’m not sure, but I think that their theory might be that at some point, there’ll be some disaster that changes attitudes in a really big way. And then things that currently just seem impossible might become possible, so perhaps that their idea would make more sense then.
But setting that aside, in terms of the race to solve alignment, it seems like we could either slow things down 1% or get 1% more time, or speed up alignment research by 1%. And the question might be which of those two things is easier. It sounds like you think probably it’s easier to speed up the alignment research, or it’s probably easier to get alignment research going and proceeding twice as quickly, as it is to make timelines that are twice as long towards whenever we invent dangerous things?
Jan Leike: Yeah. I think that’s a really important point also. Given how few people are actually working on alignment these days —
Rob Wiblin: What is it? Is it hundreds? Thousands?
Jan Leike:It depends on your count. The Superalignment team is about 20ish people right now, but there’s a lot of other alignment efforts at OpenAI right now. If you count all of the RLHF work, it’s probably more than 100. But if you go back two years, there were three people doing RLHF, or five, I don’t know. It’s ramped up a lot, but we still need so much more. And really talented individuals can still make such a big difference by switching to working on this problem now, just because it’s still such a small field. There’s still so much to do. There’s so much we still don’t understand. In some ways, it feels like the real final research frontier. We’ve figured out scaling. We know how to make the models smarter.
Rob Wiblin: Yeah, in a way that’s easy and boring.
Jan Leike: That is going to happen. Well, there’s some ways in which people might stop it. But we know how to do this. Alignment is a real research problem: we don’t know how to align superintelligence. We want to figure this out. We have to. It’s not optional.
Rob Wiblin: Yeah. The fact that the field is so small is exasperating on one level, but it’s also a reason for optimism in another sense, because you could double it. Like, if you could get 1,000 ML researchers to switch into working on alignment, that would completely transform things, right?
Jan Leike: Exactly.
Commercialisation [02:14:00]
Rob Wiblin: OK, another question. “Jan claimed that the Superalignment team wouldn’t be avoiding alignment work that helps with commercialisation. But that work in particular is already incentivised monetarily, by definition. So why isn’t he going to try to avoid that work, which will probably get done either way?”
Jan Leike: I think this is the whole point that a lot of people are trying to make: that alignment wouldn’t be done by default in the way that we are really happy with or something. Or put differently, the problems that we want to solve are currently unsolved. And yes, some of it will be commercially valuable. I think fundamentally, if you have two ways of building AGI, and one of them is much more aligned with humans, people will want to buy the second one because it’s just better for them. And that will necessarily have commercial value, and it’ll be unavoidable.
In general, an adjacent criticism that has been raised in the past is that a lot of people feel like RLHF has been a capabilities progress, because the RLHF models feel more capable — you’re interacting with them, they’re more useful, they’re actually doing more things. And the reason is because they’re trying to help you; they’re more aligned. They’re actually leveraging their capabilities towards whatever you’re asking them to do, whereas the pre-trained model isn’t. And so it obviously feels a lot more capable because you’ve unlocked all of these capabilities.
But if you then look at what actually happens during fine-tuning, the model isn’t really learning fundamentally new skills it didn’t have before, right? I mean, you can do that through fine-tuning theoretically, but not with the kind of compute budget that we use — like, for GPT-3, it was less than 2% of the pre-training compute; for GPT-4, it was even less than that — it’s really a tiny fraction. But at the same time, because the model is now trying so much harder to be helpful, it is more helpful, and it feels like you get all the capabilities that had been there in the first place.
And so to come back to the commercialisation question, I think what I really want to do is solve the problem. And if that is commercially useful, great. Some of it will not be. Or some of the research bets won’t work out. Some of the things won’t be useful before we actually get really capable systems. And that’s fine. But the goal is to solve the problem. That’s what we want to do.
OpenAI’s views and plans [02:16:42]
Rob Wiblin: Yeah. Another question: “Is OpenAI banking on there not being a really fast takeoff? And do they try to make plans that could also work in the event of a ‘foom’ scenario?” That is, extremely rapid recursive self-improvement of AI?
Jan Leike: Yeah. I think we should definitely plan for that scenario, and be ready if it happens. To some extent, automated alignment research is probably the best plan I know of in that kind of scenario, where you really have to scale up your alignment work in proportion with what’s going on. And if you can do this by just delegating almost all of the work to machines, then they can actually keep pace with the machines — because they are the only ones that can.
Rob Wiblin: I guess the concern would be if there is an intelligence explosion, and it’s very fast, then there’s very little time for you to put your plans into action and to keep up. It’s just a very bad situation. It makes it very hard for any plan to work.
Jan Leike: That’s right. If you want to be agnostic to the speed of tech progress, which is what we want to do here, the best thing you can do is to prepare as much as possible ahead of time — which is why we need to start thinking now about how to align systems that we don’t have yet. And the more you can prepare, the more you’ll be ready for that scenario.
Rob Wiblin: OK, a question I got, which I’ll slightly change, is: “What are OpenAI’s grounds for thinking alignment is solvable? Have they seen Dr. Roman Yampolskiy’s impossibility arguments against solvability?” And they’ve linked to a paper with those arguments. I don’t know exactly what those arguments are, but I know there are people out there who have made theoretical arguments that alignment is impossible or extremely difficult for some conceptual reasons. Are there any arguments along those lines that trouble you in particular, or maybe do you think that kind of argumentation shouldn’t be so persuasive?
Jan Leike: Yeah. I looked at the paper that you mentioned, and like any argument that I’ve seen, I haven’t found it particularly persuasive. The problem is, whenever you’re trying to make a theoretical argument, you need some kind of assumptions. And the big question then really just becomes: are these assumptions going to be true? To me, it just really seems like the jury is still out on this. It could turn out to be impossible. It doesn’t feel particularly likely to me, but I don’t have a proof for that. But I think we’re going to work really hard to find a counterexample by showing that it can be done. And I think it’s definitely not the time to give up. I think it’s very doable.
Rob Wiblin: Yeah. I could feel there’s a bit of exasperation that comes through, where you’re like, “All these people complaining that this problem isn’t solvable, they’re not helping. And clearly there are so many things we could try. Why don’t we just try them?”
Jan Leike: They’re helping in the sense that they’re indirectly doing recruiting for us, because they’re drawing attention to the problem. And if you just went around saying the problem is easy saying the problem is easy, you wouldn’t draw attention to it. People will be like, “OK, it’s fine, then I don’t have to worry about it.” But I think that also created a real energy of, “It seems really hard. Let’s give up.” And that’s, I think, absolutely the wrong approach. If anything, that means we should try harder, and get more people to try to solve it, and, you know, to “Never give up, never surrender.” The game is still up in the air. We should just really crush it.
Rob Wiblin: OK, two questions that were kind of pointing in the same direction were: “As OpenAI gets closer to AGI, do they plan to err on the side of paranoia in terms of giving AIs opportunities to manipulate staff, or hack themselves out, or otherwise have channels of causal influence?” And another person asked, “How much risk of human extinction are you willing to take in a large training run?” For example, to train GPT-5, 6, or 7, and so on?
Jan Leike: In general, as the stakes get higher, we have a much higher burden of proof of alignment, proof of safety. We’ve been ramping this up with every system. And the systems we have now still aren’t catastrophically risky, or aren’t close to that. So for example, GPT-2 was just open source: everyone can download and do whatever they want with it. GPT-3 was not; you make it available via an API. And then GPT-4, the only publicly available version is the alignment fine-tuned version — the RLHF version, the ChatGPT version — and the base model, as far as I know, is only on researcher access. So you know, you’re steering the public towards the RLHF model.
And with each of these steps, you’re also stepping up your safety, you’re also stepping up your alignment — and obviously, the higher the capability level, the higher the stakes are, and the more safety and alignment measures you need.
Rob Wiblin: Yeah, so people can kind of expect that trend to continue. On the same theme, on Twitter someone asked you, in a different thread, “How would you define success?” And you replied, “The scientific community agrees that we’ve solved alignment.” And [a listener] said: “This statement from Jan was good. Is there a meaningful related commitment that OpenAI could make? For example, to not deploy systems above a certain threshold of capability unless there is a broad scientific consensus that alignment has been solved for that kind of system?”
Jan Leike: At the end of the day, I think we’re going to have to convince the scientific community, because I don’t think the world will let us build something that’s catastrophically dangerous. And the world is paying attention now. And I think that’s all good.
Rob Wiblin: Yeah. I mean, the crazy thing is at the moment… So I’ve learned recently that in the UK, if you want to rent out a house to more than three unrelated people, then you need a special licence in order to do that. As far as I can tell, at least currently, one doesn’t need a licence or any sort of approval in order to train an AGI. I suppose that’s partly because we probably can’t do that yet. But I mean, it does seem like currently there aren’t that many legal restrictions, and we’re just kind of hoping that there will be pretty quickly. Or at least, I’m hoping that there’ll be more infrastructure in place.
Jan Leike: Yeah. That seems right to me. And people are working on regulation, and this is something that regulation has to solve. There’s a lot of questions around this that I’m not an expert in.
But to come back to the scientific “How do you define success?” question, I definitely feel very strongly that it’s not sufficient to just convince ourselves that we did a good job — because it’s so easy to convince yourself that you did a good job at something that you care a lot about. But we actually have to convince external experts; we have to convince external auditors who are looking exactly at what we are doing and why. And I think we’ll just actually have a mountain of empirical evidence of: “Here’s all the things we tried. Here’s what happens when we do [X]. You can look at the data; you can look at the code.” And then people can scrutinise what we’re doing.
Because the stakes will end up being so high, correspondingly, we also have to invite a lot of scrutiny in what we’re doing. And one aspect of it that we kind of started with now is we want to say what we are trying, what we are planning to do, what is our overall approach to aligning the systems that we’re building. And we want to invite feedback and criticism. Maybe there’s something way better that we could be doing. I would love to know that, and then we would do that instead. And in general, I think the public should just know what we’re doing on alignment and make independent judgements on whether that’d be enough. And I think experts will have a role to play in this, because their knowledge will be required to make informed conclusions from this.
Rob Wiblin: Yeah. An interesting thread with the audience questions is so many of them are about policy and governance. And those are also the kinds of questions that I’m more tempted to ask, because I often don’t understand the technical details. I imagine many people on Twitter don’t know enough to scrutinise the technical proposals, so we’re more thinking about how, at a social level, at an organisational level, are things set up well.
Jan Leike: Right. But I feel like my answer often is, “Yeah, I would love to see more of that. Please solve this problem. I’m not working on this, but here’s how what I’m working on helps, hopefully.”
Rob Wiblin: That’s why I feel that it’s reasonable to throw these questions to you and to find out what you think. But yeah, there’s just a lot of people who need to take action, and you’ve got to keep your head down focused on this technical stuff, because that’s your specialty. But we also need the governance people at OpenAI to be putting in place the good structures, and we need the Senate committee on this to be playing their role. It’s just there’s a lot of different pieces that have to slot together.
Jan Leike: That’s right.
Jobs with the Superalignment team [02:25:55]
Rob Wiblin: OK, that’s been a whole lot of audience questions, but we’re heading towards the final half hour or so of the conversation. I guess my dream is that this interview can help get you lots of great applications to work on the Superalignment team.
Jan Leike: My dream too.
Rob Wiblin: Glad we’re really aligned. Hopefully, we get some people moving from stuff that’s kind of interesting, but not that helpful, to something that is both super intellectually interesting and also might save the world in some sense. I don’t want to take a strong contrarian view on whether this Superalignment project is better or worse than other projects that people who are really much more technically informed than me think are plausible, but it seems like the plan that you’ve laid out seems as good to me as any other plan that I’ve heard, and it seems like you’ve got the resourcing and situation to make a real go of it. And I guess, also, if this plan doesn’t bear as much fruit as you hope in the next couple of years, I imagine you’ll be able to pivot to a different plan.
So yeah, what roles are you hiring for, and in what sort of numbers? Lay it all out.
Jan Leike: We are primarily hiring for research engineers, research scientists, and research managers, and I expect we’ll be continuing to hire a lot of people. It’ll probably be at least 10 before the end of the year, is my guess. And then maybe even more in the years after that.
So what do these research engineers, research scientists, and research managers roles look like? In a way, we don’t actually make a strong distinction between research engineer and research scientist at OpenAI. In each of these roles, you’re expected to write code, and you’re expected to run your own experiments. And in fact, I think it’s really important to always be running lots of experiments, small experiments, testing your ideas quickly, and then iterating and trying to learn more about the world.
In general there’s no PhD required, also for the research scientist roles. And really, you don’t even have to have worked in alignment before. And in fact, it might be good if you didn’t, because you’ll have a new perspective on the problems that we’re trying to solve. What we generally love for people to bring, though, is a good understanding of how the technology works. Do you understand language models? You understand reinforcement learning, for example. You can build and implement ML experiments and debug them.
On the more research scientist end of the spectrum, I think you would be expected a lot more to think about what experiments to do next, or come up with ideas of how how can we address the problems that we are trying to solve, or what are some other problems that we aren’t thinking about that maybe we should be thinking about, or how should we design the experiments that will let us learn more?
And then on the research engineering [end of the] spectrum, there’s a lot of just actually build the things that let us run these things. And let’s make the progress. We already know if we have a bunch of good ideas, that will not be enough, right? We actually have to then test them, and build them, and actually ship something that other people can use. And that involves writing a lot of code. And that involves debugging ML, and running lots of experiments, getting big training runs on GPT-4 and other big models set up.
I think in practice, actually, most people on the team kind of move somewhere on the spectrum. Sometimes there’s more coding because we kind of know what to do. Sometimes it’s more researchy because we don’t yet know what to do, and we’re kind of starting a new project. But yeah, in general, you need a lot of critical thinking, and asking important questions, and being very curious about the world and the technology that we’re building.
And for the research manager, basically that’s a role where you’re managing a small- or medium-sized or even a large team of research engineers and research scientists towards a specific goal. So there, you should be setting the direction of: What are the next milestones? Where should we go? How can we make this vague question of we want to understand this type of generalisation, or we want to make a dataset for automated alignment, or something like that. You have to break it down and make it more concrete, and then figure out what people can be doing. But also, there’s a lot of just day-to-day management of how can we make people motivated and productive, but also make sure they can work together, and just traditional management stuff.
Rob Wiblin: So it sounded like, for the first two, the main thing was that you had a good understanding of current ML technology. You could actually be able to go in and potentially think up experiments and run experiments. Are there any other concrete skills that you require? Or what would be the typical background of someone who you would be really excited to get an application from?
Jan Leike: There’s a lot of different backgrounds that are applicable here. Machine learning PhDs have been the traditional way people get into the field, especially if you want to do something more researchy. I don’t think you need that at all. And in fact, if you’re thinking about starting a PhD now, I don’t know if you’ll have that much time. You should just go work on the problem now.
For research engineers, I think the kind of background is, maybe you’ve worked in a STEM field, and you’re like, “I’m going to stop doing that. I’m going to take six months and just reimplement a bunch of ML papers and learn a bunch that way.” Or somebody who works at a tech company doing other machine learning engineering-related things, and now wants to switch to alignment. I think that’s a really good profile.
And I also want to stress this: most people we are trying to hire haven’t worked on alignment before, just because the people who have been working on alignment before, there’s so few of them. And also, I think the core expertise that you will need is machine learning skills. And there’s a bunch of things you should know about alignment, but you can also learn them once you’re here, or you can catch up along the way. And I think that’s fine.
Rob Wiblin: On the research manager role, I guess you’re looking for somewhat different skills there, that someone might have more management experience. Being a good researcher and being a good manager are not the same thing. These things absolutely can come apart. So would you be looking for a particular kind of person for the manager role?
Jan Leike: Yeah. And I think they can be anticorrelated, which is unfortunate.
Rob Wiblin: I think they might be sometimes. Yeah.
Jan Leike: Yeah. But ideally, you would have managed before. I think there’s different ways it could go. There’s scenarios where you split up your responsibilities between a tech or a research lead and a manager, and the manager takes on more of the responsibilities of management, and the tech lead is more setting a direction for the team and making sure the technical stuff is happening that needs to happen. But in that configuration, they have to get along really well, and they have to really be on the same page to effectively divide these responsibilities. In particular, I think the manager still should have a really detailed understanding about what we’re trying to do.
But ideally, we’d want to have someone who just can do both roles in one. So the background would be, I don’t know, you’ve led a research team at some other company, or in some kind of other branch of machine learning. Or you’ve been a manager before in some other domain, and then you switch to being an IC — “IC” means individual contributor — on some kind of large language model project, say. Or there’s also a path where maybe you’re a postdoc somewhere, and you have a small research team that you’re working with day to day, and it’s very coding heavy, and you’re running lots of experiments with language models or reinforcement learning or something like that.
I think these are all possible profiles, but it’s kind of hard to know what exactly. I think the bigger filter is just more that you should actually really care about the problems that we’re trying to solve, and you need to be really good at coding. You need to be really good at machine learning.
Rob Wiblin: As I understand it, one of the impressive and difficult things that OpenAI has had to work on is just getting the chips and getting the compute to work well and efficiently. I think these are enormous aggregations of compute, and the engineering of getting it to work is not at all straightforward, and getting it to work for ML purposes specifically adds its own complications. Are you hiring people to do that engineering side of things?
Jan Leike: OpenAI definitely is. Mostly on the Superalignment team, what we’ll be dealing with is more being a consumer of the infrastructure that runs these large-scale experiments. People on Superalignment need to be comfortable debugging these large distributed systems — because if we’re doing a fine-tuning run on GPT-4, it is such a system; it’s not easy to debug. But we don’t have to build the large language model infrastructure because it already exists, and other people are working on that.
Rob Wiblin: What does the application process look like?
Jan Leike: Yes. So it’s very simple. You go on openai.com/careers, and you scroll down, and you’ll find the roles that have “Superalignment” in the title. You click on it, and then you submit your CV and say why you want to work on this. And that’s it. And then we’ll see it.
Rob Wiblin: Are there any further steps to the process?
Jan Leike: The general interview process that we follow is: there’s a tech screening, and there’s an intro chat with someone from the team, and there’s an on-site process — where I think there’s two to four coding or ML interviews and a culture fit interview. But depending on the job or your background, it might look slightly differently.
Rob Wiblin: Are you kind of expecting to maybe hire 20 people and then only keep 10 of them in the long run, or is it more you’re going to try to hire people who, mostly, you expect to work out?
Jan Leike: We want to really invest in the researchers that we’re hiring.
Rob Wiblin: So it’s more of the second one.
Jan Leike: Yeah.
Rob Wiblin: I imagine the bar is reasonably high for getting hired. Is there a way of communicating what the bar kind of is? I know people could be both overconfident and underconfident, and it could be quite bad if someone would be really good, but they don’t feel like they’re such a badass that they should necessarily get a role like this. So if there’s any kind of more explicit way of communicating who should apply, that could be useful.
Jan Leike: I mean, maybe the most important thing is: If you’re in doubt, please apply.
Rob Wiblin: The cost of a false negative is higher than the cost of a false positive.
Jan Leike: Exactly.
Rob Wiblin: You’ve slightly already done this earlier in the interview, but do you want to just directly make the pitch for why amazing people should apply to work with you on the Superalignment team?
Jan Leike: Yeah. In short, I think this is one of the most important problems. We really have to get this right. It’s not optional. We want to do really ambitious things. We’ve set ourselves the goal to actually solve it in four years. We are serious about that. So if you want to work in a team of highly motivated, talented people who are really trying to solve ambitious problems and have a lot of resources to do so, this is the place to go. I think also we are at the state of the art of the technology, and OpenAI is really backing us at what we want to do. So I think we have as good a shot at the problem as anyone else, if not more. And I think we should just really do it and really go for it. And you could make that happen, and that’ll be really exciting.
Rob Wiblin: Do you also need any non-machine learning and non-research people on that team? There’s always operations, communications, legal, these other groups — or maybe for that, you’ll just have to apply the OpenAI in general rather than the Superalignment team specifically?
Jan Leike: Yeah. That’s right. And I’m generally also just really excited to have more people who really care about the alignment problem, who really care about the future of AI going well, just apply to OpenAI in whatever role — just help us make that future a reality. And there’s a lot of people at OpenAI who really care about this, but the more people who care about the problems, the important problems, I think the better.
Rob Wiblin: Yeah. So many policy issues have come up through the conversation. I know that there are some really amazing people on the policy team over at OpenAI.
Jan Leike: That’s right. I can name some other teams. So I think the Policy Research team is doing really excellent work on dangerous capabilities evaluations, and actually trying to get agreements about when should we all stop. And there’s the Safety Systems team, that actually tries to improve alignment and safety of models we have right now — like making the refusals better, fixing jailbreaking, improving monitoring — all of these problems, they’re really important. And for some listeners who might be more sceptical about the long-run problems that we have to solve, and want to do something that has impact right now, these are great teams to join, and I’m excited for what they’re doing.
And then, of course, there’s a lot of other teams at OpenAI that are doing important work, just improving RLHF, improving ChatGPT, all of this legal, communications, recruiting. There’s a lot of things to do. We are focusing on trying to figure out how to align superintelligence. But as we’ve discussed, it’s not the only thing we need.
Rob Wiblin: Yeah. If someone were reluctant to apply because they were scared that getting involved might enhance capabilities, and they were someone who thought that speeding up capabilities research was a bad thing, what would you say to them?
Jan Leike: I mean, if you don’t want to do that, don’t apply to the Capabilities team.
Rob Wiblin: Yeah. Fair enough. So yeah, I guess just the obvious thing is, it sounds like working on the Superalignment team is not going to meaningfully contribute to capabilities progress on any kind of global level?
Jan Leike: I don’t want to promise that nothing we’ll do will have any capabilities impact. And as we mentioned earlier, I think some of the biggest alignment wins will also have some of these effects. I think that’s just real and unavoidable.
I think also in the EA community specifically, there’s a lot of hesitation around, “If I get into ML or if I do an ML engineering job somewhere, I might accelerate timelines a little bit, and it’ll be so bad if I did that.” And I think that kind of reasoning really underestimates the career capital growth and the skills growth that you would get by just doing some of these jobs for a while while you’re skilling up, and then you can switch to alignment later. I think in general there’s so many people working on capabilities that one more or less won’t make it go that much faster. But there’s not that many people in alignment. So as one person working on alignment, you can actually make a much larger difference.
Rob Wiblin: Yeah. As we always do when this topic comes up, I’ll link to our article, “If you want to reduce AI risk, should you take roles that advance AI capabilities?.” There we have responses from a wide range of people who we ask this question to, who do have something of a range of views. But I think the reasoning that you’ve given there — that just your proportional increase in capabilities research that you would make would very small relative to the proportional increase in alignment research that you would make, plus all of the benefits that you get from skilling up personally and then being able to use those skills later in your career — it seems pretty clear to me, in this case at least.
What are the distinctive things about OpenAI’s culture that people should be aware of going in? Is there a particular character that really thrives there?
Jan Leike: I think we generally want to be really welcoming to all kinds of different people, and all kinds of different characters — you know, everyone. I think we just need a lot of diversity of thought on how to go about this problem. And many people have said this before: there’s also so many non-machine learning aspects to this problem. And so especially if you’re somebody who has a nontraditional background and switched into ML, or has specifically an origin story that is non-typical, I think that’s super valuable.
In general, I care a lot about having a team culture that is really warm and friendly and inclusive, but also creates a lot of psychological safety for people to voice spicy takes on some of the things that we’re doing, or our approach in general. And we would love for you to contribute to that. We need to collaborate to solve the problem, and it’s not just like, who can get the credit or something. This problem just needs to get solved.
Rob Wiblin: If a really talented person wanted to switch into working on technical alignment, but for some reason it was impossible for them to go join you on the Superalignment team, is there anywhere else that you’d be really excited for them to apply that’s not at OpenAI?
Jan Leike: Yeah. There’s other AI labs that I think are doing a good job, really cool work. You know, Google DeepMind or Anthropic, and there’s other academic labs that are really doing cool stuff at Berkeley or at Stanford or in Oxford. I would consider applying to those. Also, it’s always very sad when we have to turn down really talented people. But also we are a small team; we can’t hire everyone. And sometimes people aren’t quite ready, and it’s good to focus on more skill building and career capital investment. I think that’s also a really valid strategy. I think all in all, probably people that go through our pipeline generally underestimate how valuable it is to take a research engineering job at another company and skill up and learn a bunch of things, and there’s a lot of opportunities to do that.
Rob Wiblin: Yeah. Just on practical questions, is it possible to work remotely? And can you sponsor visas for people who aren’t US citizens?
Jan Leike: We definitely sponsor visas. Remote work is generally not encouraged because almost the entire team is in San Francisco. We go into the office at least three times a week, and it’s just so much easier to collaborate. So if you can do that, that would be really good.
Rob Wiblin: Are there any other points that you want to make before we push on?
Jan Leike: Yeah. Thank you so much for letting me pitch these roles here. I’m really excited for more people who really care about this problem, really care about the future to go well, and making sure humanity manages this transition into a post-AGI world. And thank you for doing this.
Jan’s favourite science fiction books [02:46:30]
Rob Wiblin: All right. We’ve really gone over time. I’ve been keeping you for a long while, and I’m sure you have a lot of stuff to do setting up this whole project. Maybe a final question before we go is: Do you have a favourite piece of science fiction?
Jan Leike: I really like the Greg Egan books. A lot of these are really old, like Permutation City was one of my favourites. A lot of the ideas that he plays with felt really out there at the time, I’m sure. But now, it just seems somewhat striking a lot closer to home in a whole bunch of ways, and you can kind of feel more and more of the weird sci-fi ideas become reality. But also I actually like that he tries to paint a positive view of what society could look like in the long run.
Rob Wiblin: Yeah. Whatever you said, I was going to ask: Is your life weirder or less weird than what is portrayed in that piece of science fiction? I actually don’t know about Permutation City, but maybe could you quickly tell us what it’s about, and whether it’s weirder than your own situation in this world?
Jan Leike: Definitely, it’s so much more weird than my life. So Permutation City is a book that plays with the idea of uploading, of having digital copies of humans, and living in a mathematical universe, and what are the implications of that, and virtual humans can rewrite their own code. And a lot of things like that we can’t do yet. Maybe in some ways, AI can do it, or maybe in the near future, or medium future, AI could rewrite parts of its own neural network or something, if we make interpretability progress. I mean, I don’t know. It’s very out there in science fiction, right? That’s what makes it so cool.
Rob Wiblin: Yeah. I don’t know. I do feel like sometimes we’re living through a science fiction novel.
Jan Leike: Oh, this is nothing. It’s going to get so much weirder.
Rob Wiblin: Yeah. All right. Well, we have that to look forward to in the 2030s or 2040s.
Jan Leike: I don’t know exactly how it’s going to go, but I promise you it’ll be weird by today’s standards.
Rob Wiblin: Yeah. Well, best of luck with the project. I really look forward to seeing how it comes along. My guest today has been Jan Leike. Thanks so much for coming on The 80,000 Hours Podcast, Jan.
Jan Leike: Thank you so much for having me.
Rob’s outro [02:49:05]
Rob Wiblin: Before we go I just wanted to remind you all that 80,000 Hours does offer one-on-one advising to help people figure out how to have a bigger impact with their career, and we’re particularly excited to speak to people who want to pursue the sorts of careers Jan has been talking about in this episode.
You can apply to talk to someone here at 80000hours.org/advising.
One new feature the advising team is building out is a system for recommending advisees to employers for specific opportunities.
It’s likely that you’re not always able to stay up to date on new organisations and openings in your area of interest, because, well, how could you? So it’s handy to have the one-on-one team support you by keeping track of relevant opportunities that might come up, and giving you a boost by affirmatively recommending you when there seems to be a great fit based on what you discussed. Note, this is an opt-in aspect of the service, so if you just want the advice, that’s still there for you.
Of course, we still can’t advise everyone who applies, and sometimes we even say no to people because they already have a sensible plan and it’s not clear what we can add.
But still, I’d definitely encourage you to apply for career advising if you’re considering trying to have a bigger impact in your career, or figuring out how you can build career capital so you’re in a better position to do good in future.
Again, the address is 80000hours.org/advising.
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
The audio engineering team is led by Ben Cordell, with mastering and technical editing by Milo McGuire and Simon Monsour.
Additional content editing by Luisa Rodriguez and Katy Moore, who also puts together full transcripts and an extensive collection of links to learn more — those are available on our site.
Thanks for joining, talk to you again soon.