PhD or programming? Fast paths into aligning AI as a machine learning engineer, according to ML engineers Catherine Olsson & Daniel Ziegler
By Robert Wiblin and Keiran Harris · Published November 2nd, 2018
If you are a talented software engineer, the state of the questions right now is that some of them are just ready to throw engineers on. And so if you haven’t just tried applying to the position that you want, just try. Just see. You might actually be ready for it.
Catherine Olsson
After dropping out of his ML PhD at Stanford, Daniel Ziegler needed to decide what to do next. He’d always enjoyed building stuff and wanted to help shape the development of AI, so he thought a research engineering position at an org dedicated to aligning AI with human interests could be his best option.
He decided to apply to OpenAI, spent 6 weeks preparing for the interview, and actually landed the job. His PhD, by contrast, might have taken 6 years. Daniel thinks this highly accelerated career path may be possible for many others.
On today’s episode Daniel is joined by Catherine Olsson, who has also worked at OpenAI, and left her computational neuroscience PhD to become a research engineer at Google Brain. They share this piece of advice for those interested in this career path: just dive in. If you’re trying to get good at something, just start doing that thing, and figure out that way what’s necessary to be able to do it well.
To go with this episode, Catherine has even written a simple step-by-step guide to help others copy her and Daniel’s success.
Daniel thinks the key for him was nailing the job interview.
OpenAI needed him to be able to demonstrate the ability to do the kind of stuff he’d be working on day-to-day. So his approach was to take a list of 50 key deep reinforcement learning papers, read one or two a day, and pick a handful to actually reproduce. He spent a bunch of time coding in Python and TensorFlow, sometimes 12 hours a day, trying to debug and tune things until they were actually working.
Daniel emphasizes that the most important thing was to practice exactly those things that he knew he needed to be able to do. He also received an offer from the Machine Intelligence Research Institute, and so he had the opportunity to decide between two organisations focused on the global problem that most concerns him.
Daniel’s path might seem unusual, but both he and Catherine expect it can be replicated by others. If they’re right, it could greatly increase our ability to quickly get new people into ML roles in which they can make a difference.
Catherine says that her move from OpenAI to an ML research team at Google now allows her to bring a different set of skills to the table. Technical AI safety is a multifaceted area of research, and the many sub-questions in areas such as reward learning, robustness, and interpretability all need to be answered to maximize the probability that AI development goes well for humanity.
Today’s episode combines the expertise of two pioneers and is a key resource for anyone wanting to follow in their footsteps. We cover:
- What is the field of AI safety? How could your projects contribute?
- What are OpenAI and Google Brain doing?
- Why would one decide to work on AI?
- The pros and cons of ML PhDs
- Do you learn more on the job, or while doing a PhD?
- Why did Daniel think OpenAI had the best approach? What did that mean?
- Controversial issues within ML
- What are some of the problems that are ready for software engineers?
- What’s required to be a good ML engineer? Is replicating papers a good way of determining suitability?
- What fraction of software developers could make similar transitions?
- How in-demand are research engineers?
- The development of Dota 2 bots
- What’s the organisational structure of ML groups? Are there similarities to an academic lab?
- The fluidity of roles in ML
- Do research scientists have more influence on the vision of an org?
- What’s the value of working in orgs not specifically focused on safety?
- Has learning more made you more or less worried about the future?
- The value of AI policy work
- Advice for people considering 23andMe
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours Podcast is produced by Keiran Harris.
Highlights
Catherine Olsson: “AI safety” is not one thing. It’s definitely not one field. If anything, it’s a community of people who like to use the phrase “AI safety” to describe what they’re interested in. But if you look at what different groups or different people are working on, they’re very very different fields of endeavor. So you have groups that are trying to take deep reinforcement learning and introduce a human feedback element so that you can learn human preferences in a deep RL system. That’s one research agenda.
… MIRI has folks working on decision theory. If we understood decision theory better, then we would know better what a good system should be like. Okay, decision theory theorem proving is just categorically a completely different type of work from deep reinforcement learning. You’ve got groups that are working on… so like my group, for example, working on robustness in machine learning systems. How do we know that they’ve learned the thing that we wanted them to learn? Also a completely different field of endeavor.
And it’s very important to keep in mind if you’re looking for a, quote, career in AI safety: what exactly is it that you think is gonna be important for the trajectory you think the world is gonna be on? And then what are the particular subskills that it’s gonna take? Because it’s not a monolith at all. There’s many many different groups taking many many different approaches, and the skills you need are gonna be extraordinarily different depending on the path.
Daniel Ziegler: Normally in the reinforcement learning paradigm, you have some agent acting in some environment, so it might be playing a video game or it might be controlling a robot, it may be a real robot, maybe a simulated robot, and it’s trying to achieve some sort of well-defined goal that’s assumed to be specified as part of the environment. So in a video game, that might be the score. In robotics tasks, it might be something like run as far as you can in 10 seconds, and something that’s a hard-coded function that’s easily specified as part of the environment.
For a lot of more interesting, real-world applications, that’s not really going to work. It’s too difficult to just write down the reward function that tells you exactly how well you’re doing, because there’s just too many things to take into account. The Safety Team said, okay, let’s relax this assumption and instead of assuming that the reward function is built into the environment, we’ll actually try to learn the reward function based on human feedback.
In one of the environments, which was a little simulated robotics task where you have this little hopping agent, just like a big leg basically, we gave humans these examples of what the leg was currently doing. We gave them two examples, one on the left and one on the right, and then the human had to decide which of those was doing a better job, according to whatever the human thought the job should be. So one thing we got the little hopper to do is to do a backflip. It turns out, it’s actually pretty tricky to write down a hard-coded reward function for how to do a backflip, but if you just show a human a few hundred times, is this a better backflip or is this a better backflip, and then have the system learn from that what the human is trying to aim for, that actually works a lot better.
So the idea is, now instead of having to write down a hard-coded reward function, we can just learn that from human oversight. So now what we’re trying to do is take that idea and take some other kinds of bigger mechanisms for learning from human feedback and apply real, natural language to that. So we’re building agents which can speak in natural language themselves and maybe take natural language feedback, and trying to scale those up and move in the direction of solving more real tasks.
Catherine Olsson: I think the best way to figure out what’s going on is just to dive in. In fact, I’m directly referencing a post by Nate Soares, called Dive In, which I love and recommend, that if you have an extremely concrete plan of how you’re going to contribute that has actionable and trackable steps, you’re going to start getting data from the world about your plan a lot sooner than if you have some unreachable or nebulous plan. I would encourage anyone who’s interested in this sort of thing to look for the smallest step that you can take that brings you just a little closer. If you’re currently a software engineer and you can take a statistics class and maybe do some data science in your current role, by all means do that. Take just one step closer to something in the space of machine learning.
If you can just do software engineering at an organization that does ML, if you take that role, you’ve just got your face in the data in a much more concrete and tangible way. I think, particularly folks who are coming at this topic from an EA angle, maybe you’ve read Superintelligence, whatever your first intro was, those abstractions or motivating examples are quite far removed from the actual work that’s being done and the types of systems that are being deployed today. I think starting to bridge that conceptual gap is one of the best things that you can do for yourself if you’re interested in starting to contribute.
Daniel Ziegler: Yes, and I would say, try just diving in all the way if you can. Like I said, when I was preparing for the OpenAI interviews, I went straight to just implementing a bunch of deep reinforcement learning algorithms as very nearly my first serious project in machine learning, and obviously there were things along the way where I had to shore up on some of the machine learning basics and some probability and statistics and linear algebra and so forth, but by doing it in sort of a depth-first manner, like where I just went right for it and then saw as I went what I needed to do, I was able to be a lot more efficient about it and also just actually practice the thing that I wanted to be doing.
Articles, books, and other media discussed in the show
- Please let us know how we’ve helped you: fill out our 2018 annual impact survey
- Catherine Olsson’s guide: Concrete next steps for transitioning into ML Engineering for AI Safety
- The Effective Altruism Grants program
- Deep reinforcement learning from human preferences by Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei
- Dive in by Nate Soares
- Critch on career advice for junior AI-x-risk-concerned researchers by Rob Bensinger
- OpenAI/baselines on GitHub
- Key Papers in Deep RL by Josh Achiam
- The Structure of Scientific Revolutions by Thomas S. Kuhn
- fast.ai course
- Ought
- 23andMe DNA testing service
- A short introduction to RL terminology, kinds of algorithms, and basic theory.
- An essay about how to grow into an RL research role.
- A well-documented code repo of short, standalone implementations of: Vanilla Policy Gradient (VPG), Trust Region Policy Optimization (TRPO), Proximal Policy Optimization (PPO), Deep Deterministic Policy Gradient (DDPG), Twin Delayed DDPG (TD3), and Soft Actor-Critic (SAC).
- And a few exercises to serve as warm-ups.
Transcript
Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where each week we have an unusually in-depth conversation about the world’s most pressing problems and how you can use your career to solve them. I’m Rob Wiblin, Director of Research at 80,000 Hours.
Today’s episode is going to be incredibly useful for anyone who has considered working on the AI alignment problem, or artificial intelligence or machine learning more generally.
It also explains ways to greatly speed up your career and sidestep doing a PhD, which I expect can be generalised to some other areas as well.
It comes with a written step-by-step guide written by Catherine Olsson explaining how you can transfer into a research engineering role in machine learning as soon as possible. We’ll link to that in the show notes and blog post attached to the show.
If you’d like to follow that guide and retrain to get an ML engineering position, but don’t have the cash in the bank to pay your living expenses while you do so, the Effective Altruism Grants program may be able to fund you whenever it next opens. We’ll stick up a link to their site as well.
If you have limited interest in AI, ML, PhDs or software engineering you may find this episode goes too far in the weeds for you.
But I expect you’d nonetheless be interested in earlier episodes that introduced this topic like #3 – Dr Dario Amodei on OpenAI and how AI will change the world for good and ill, #31 – Prof Dafoe on defusing the political & economic risks posed by existing AI capabilities and #44 – Dr Paul Christiano on how we’ll hand the future off to AI, & solving the alignment problem.
I also have a really important appeal to make to you all. We’re getting close to the end of the year, which is when 80,000 Hours tries to figure out whether all the things we’ve been working on throughout the year have actually been of use to anyone.
We need to hear from people how we’ve helped them so that we know we shouldn’t just give up and get different jobs. And our donors need to know we’ve changed people’s careers so they know to keep funding us. If we couldn’t find those stories, 80,000 Hours wouldn’t exist.
So if anything 80,000 Hours has done, including this podcast, our website or our coaching has changed your career plans or otherwise helped you, please go to 80000hours.org/survey and take a few minutes to let us know how.
That’s 80000hours.org/survey.
OK here’s Daniel and Catherine.
Robert Wiblin: Today I’m speaking with Daniel Ziegler and Catherine Olsson. Daniel and Catherine both studied at MIT, with Daniel majoring in computer science and Catherine double majoring in computer science and brain and cognitive science. Daniel started an ML PhD at Stanford, but left and is now a research engineer at OpenAI. Catherine started a computational neuroscience PhD and then left with a master’s degree to join OpenAI as a software engineer. And now she’s at Google Brain as a research software engineer. Thanks for coming on the podcast, both of you.
Catherine Olsson: Thanks so much.
Daniel Ziegler: Yeah, excited to be here.
Robert Wiblin: So in today’s episode we hope to get more information about how technically inclined people can move their careers into the direction of aligning AI with human interest. But first, can you each describe what you’re doing now and why you think it’s important work? Maybe Catherine first.
Catherine Olsson: Sure. So right now at Google Brain, I’m a research software engineer as you said. I’m working on Ian Goodfellow’s team, which works on adversarial machine learning. And that includes situations where there’s a real adversary that’s trying to cause your machine learning model to make a mistake, or there’s like a contrived adversary as part of the training process, as in a generative adversarial network that’s part of the process of making the network learn better.
Daniel Ziegler: Yeah. So I work as a research engineer on the safety team at OpenAI, and basically, my job is to take ideas that some of the researchers there have for training aligned AI systems and implementing concrete prototypes and experimenting with those prototypes and seeing what actually matters and how we can make them work better. We’re working on a bunch of techniques for getting a whole bunch of human feedback and using that to train AI systems to do the right thing.
Robert Wiblin: How do you think those products will ultimately contribute to AI safety in the broader picture?
Catherine Olsson: From my perspective, many paths to powerful AI systems will involve machine learning. I think many folks would hope that we could just construct by hand a system where we knew exactly how every piece works, but that’s not how the current deep learning paradigm is proceeding. Instead, you throw the system at a lot of data, it learns whatever it learns, and the thing that it has learned, in current systems and in systems we can foresee, is not exactly what we wish it would have learned.
Catherine Olsson: So current systems like classifiers are the ones that adversarial examples are typically demonstrated on, where you can make very small perturbations to the image and get the classifier to produce a completely different label than what a human would put on that image. So this is just evidence that the systems that we have today are not learning the types of concepts that humans would hope that they learn, and they’re not learning robust concepts. If systems that are anything like this are gonna be in powerful systems, we will have to iron out these issues first. That’s one narrative I have for how this sort of work might contribute.
Catherine Olsson: The sort of problems that we’re working on in a research sense in this field are still ironing out early kinks in the process. It’s very unlikely that any real-world system is gonna get thrown off by someone going and changing every pixel a tiny amount. But it is a really valuable toy problem for us to be honing our formalisms and our abstractions in this sort of a research domain.
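To make the “small perturbation” idea concrete, here is a minimal sketch of the fast gradient sign method, one standard attack from this literature: each pixel is nudged slightly in whichever direction increases the classifier’s loss. The model, labels, and step size below are placeholder assumptions, not code from Catherine’s team.

```python
import tensorflow as tf

def fgsm_perturb(model, x, y_true, epsilon=0.01):
    """Fast Gradient Sign Method: nudge each pixel slightly in the
    direction that increases the classification loss."""
    x = tf.convert_to_tensor(x)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    with tf.GradientTape() as tape:
        tape.watch(x)
        logits = model(x)                      # model is any image classifier (assumption)
        loss = loss_fn(y_true, logits)
    grad = tape.gradient(loss, x)              # d(loss)/d(pixels)
    x_adv = x + epsilon * tf.sign(grad)        # tiny, worst-case step per pixel
    return tf.clip_by_value(x_adv, 0.0, 1.0)   # keep pixels in a valid range
```

Despite the perturbation being nearly invisible to a human, the classifier’s label on `x_adv` often changes completely, which is exactly the kind of non-robust behavior Catherine describes.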
Daniel Ziegler: Yeah. I like to frame the problem of deploying a powerful ML system in a way that’s actually safe and actually beneficial like… we can kind of break it up into two parts, where the first part is just giving it the right objective at all. So that’s like optimizing for the right thing. And the second part is trying to optimize for that objective robustly. And optimizing for that robustly includes things like being robust to adversarial examples, or optimization pressure from other parts of the system. It includes things like being safe during the training process, like with safe exploration in reinforcement learning.
Daniel Ziegler: We’re definitely really interested in all those kinds of robustness things, but so far on the safety team, an even more central focus has been this thing about giving our AI systems the right objective. We’re definitely not very optimistic about trying to write down some perfect utility function for our AI systems or some perfect reward function for our AI systems directly, right? It seems like there’s too many things that we care about and whatever we try to specify directly for interesting tasks is gonna have loopholes in it that are gonna be exploited by powerful systems.
Daniel Ziegler: Our plan, really, is to come up with mechanisms for collecting a whole bunch of human demonstrations and human feedback and using them in a way to direct the training process of a powerful AI system to do the right thing. If we define the right objective and we train for it in a robust fashion, really we should be done. We should have a system which does what people want and so in some sense, that’s the whole problem. Maybe it’s kind of a broad view of safety, of saying if you have a powerful system that’s acting in the world and has a lot of influence, for that system to act safely, it’s going to need to understand quite a bit about human values.
Daniel Ziegler: You know, maybe it doesn’t have to have a perfect understanding, maybe sometimes it can act sort of cautiously, but it’s going to need to understand a lot and we want to do that as well as we can.
Robert Wiblin: Yeah, so listeners might know a little bit about OpenAI already, cause we spoke to Dario Amodei last year. But I imagine things have shifted a bit cause the organization’s only a couple of years old and it’s a pretty fast moving space. So what’s the broader picture of what OpenAI is doing?
Daniel Ziegler: Yeah, I mean, I’m pretty new to the organization. I’ve only been here for about four months, but OpenAI’s got a lot of different projects going on. There’s a lot of individual teams. Some of them are teams focused on one big project like the Dota team developing OpenAI Five, which has recently had some exciting matches. There’s the OpenAI robotics team, which recently had a release with Dactyl, a robotic hand. But then there’s also other teams like the multi agent team or the safety team, which have taken more of a portfolio approach, where there’s a bunch of smaller projects that one to two researchers are working on.
Daniel Ziegler: Most of OpenAI is really focused on doing a bunch of research and a bunch of large scale engineering projects to try to push forward AI. And the safety team is trying to harness that and harness other advances in the field, and trying to stay at the cutting edge there and apply those techniques to systems to make aligned AI.
Robert Wiblin: Yeah. And on the other hand, I know remarkably little about Google Brain. How’s it different from DeepMind and has there ever been a consideration of putting them together? Is there a difference of focus and what’s Brain’s main projects?
Catherine Olsson: Google Brain is part of Google, which is under the Alphabet umbrella. DeepMind is also under Alphabet, but they’re separate in many ways, and there’s a lot of collaboration as well. DeepMind’s motto is something along the lines of “Solve intelligence, use it to solve everything else.” Brain does not have quite an AGI or general intelligence focus, and you can see that in the types of research that’s being done. The work that I’ve liked that’s come out of Brain includes machine translation systems, for example, which are being deployed in things that we’re seeing today.
Daniel Ziegler: Yeah, and OpenAI’s mission is more AGI focused like DeepMind, so OpenAI’s mission explicitly is to build safe artificial general intelligence. I’m really excited that safety is part of the mission statement there.
Robert Wiblin: Yeah. So Brain is more focused on specific products that Google can develop, like the translation.
Catherine Olsson: Well, the translation work is basic research. Brain is not a product-focused team; it’s doing basic research that more holistically covers the space of things that machine learning could be applied to, as opposed to specific general-purpose agents, which I think is the narrower focus that places like OpenAI and DeepMind have. The things Brain is producing are not focused on general-purpose agents of that highly specific kind. It’s anything that uses machine learning or learned representations to produce useful stuff.
Catherine Olsson: Not product per se, but basic research that more spans the space of types of tasks that machine learning can be applied to.
Robert Wiblin: Can you say a bit of why you decided to work on artificial intelligence?
Catherine Olsson: For me it’s sort of an interesting story that when I was in my PhD program, I… for all of my life wanted to study something at the intersection of minds and machines. I thought systems that process information are very interesting, and we’ve got some inside our heads and some that we can program. Something in this space would be really cool to work on, and so I joined the PhD program because I wasn’t done with these questions yet, and I was very excited about the lab that I joined and the work going on in that department. But I found it kind of solitary and kind of slow paced. And so I was looking to go back in the software engineering direction towards the sort of stuff I’d done in undergrad with the computer science degree that I’d finished.
Catherine Olsson: I actually expected that I would have to go with something way more boring to get back to software engineering. I was like, “Oh, I’m gonna go do data science at a company I don’t care about, but at least it’ll be more collaborative and faster paced and I’ll be happier.” Then I reached out to Greg Brockman, actually, who also went to MIT. And I knew him not particularly well, but well enough that I could say, “Hey I hear you’ve got this company called OpenAI. Do you have any jobs?” And he was like, “Actually yes. We’re hiring software engineers.” So it was really fantastic timing for me.
Catherine Olsson: At that point I was thinking that machine learning or deep learning, particularly, was on the rise, and so it was a good time to jump on that bandwagon, just for ordinary reasons of career prestige and high pay. You know, I had a lot of friends who were concerned about long term impacts from artificial intelligence and I’m like, “Well, let me just go to a place where people are talking about this and just sort of see what’s up.” And you know, if I’m gonna work in this space, I’d rather work in a place that, as Daniel said, has the beneficial impacts of humanity top of mind.
Catherine Olsson: My impression after ending up at OpenAI is that there’s really just not that many people thinking extremely seriously about the long term. That sort of inspired me to keep that on my radar as one of the things that I can be steering my career towards.
Daniel Ziegler: Yeah, I would say I really came to AI and AI safety in particular through effective altruism. I had been doing computer science all my life and was studying computer science in undergrad, and when it came towards the end of my undergrad and I was thinking what kinds of careers could I actually use this for that would actually have an impact in the world… You know, I’d been hearing more and more about AI safety over the years and just became increasingly convinced that AI was gonna be a big deal, very plausibly gonna have a huge impact on the world one way or another in the not that distant future. So it seemed like a natural transition.
Daniel Ziegler: I mean, I didn’t have that much ML experience, but I did have a lot of computer science background in general, so it seemed like it should be a very manageable jump, so I decided to try to go for it.
Robert Wiblin: So what’s the path that took you to working at OpenAI? I guess most people who graduate with computer science say… You know, I wouldn’t imagine that they could quickly go into a job at OpenAI, but you managed to do it.
Daniel Ziegler: Yeah, so the funny thing is that originally I decided that I would try to get an ML PhD, to try to do AI safety research in academia. So I applied to a bunch of top PhD programs in ML and I got into Stanford despite the fact that I didn’t really have any ML experience. I did do some pretty strong undergrad research, but that was in formal verification, so like I helped prove a file system correct, proved that it will never lose your data and stuff. But that really had nothing to do with machine learning, so it was definitely a pretty big leap.
Daniel Ziegler: I sort of figured hey, if they accepted me, might as well give it a shot. Yeah, and when I ended up at Stanford, there were a couple of things that came together that actually made it so that I went on leave after just two weeks there. What happened was like, I was sort of starting to have second thoughts anyway, because I just noticed that my ML background really was a bit lacking and I would have to do a lot of catch-up. That was gonna be pretty hard work to get to the point to where I could actually do useful research. Maybe more importantly, I just noticed that I didn’t really have my own vision for how to do AI safety work, AI safety research that I thought was actually gonna be useful.
Daniel Ziegler: I definitely thought I was gonna be able to find incremental things that definitely rhymed with AI safety and seemed kind of useful, but nothing that I thought was really gonna make a big difference in the end. And on top of all that, I actually got a brief medical scare, where I downloaded 23andMe’s raw data and uploaded it to another service to analyze it, which was not the smartest move, and it told me that I have this rare genetic heart condition, hypertrophic cardiomyopathy. That was literally right as I was starting my PhD at Stanford, and I was already stressed out about the situation.
Daniel Ziegler: In hindsight, I took it far more seriously than I should have. It was pretty likely to be a false alarm, and after a bunch of tests, I figured out, yes, it was a false alarm. But at the time, with all this coming together, I was like, alright. I’m gonna step back and take some time to reevaluate, and so that’s what I did.
Robert Wiblin: I guess this is a huge aside, but do you have any advice for people that are doing 23andMe at this point? I mean, I’ve considered doing it, but this is one of the reservations I’ve had, that I’m gonna discover lots of terrible things that are in my future and that I’m not gonna be able to do anything about.
Daniel Ziegler: Yeah, yeah, yeah. Well, I think you can do it responsibly. 23andMe lets you choose whether you want to even see some of the more life-impacting things, like whether you’re at high risk for Alzheimer’s or whatever. You can just turn that off and not even see it. Also, I went another step beyond that and downloaded all of the raw data and uploaded it to another service, without any FDA approval or anything for any of those tests. And the thing to remember there is that it’s a huge multiple hypothesis testing problem. Because you have thousands of SNPs, single nucleotide polymorphisms, and each one of them maybe has 99.9-plus percent accuracy. But out of thousands of them, you’re pretty likely to get a false positive somewhere that says you have something terrible.
Daniel Ziegler: So if you do take the step of downloading the raw data that way, you should remember that if something comes up, it’s only a very slight suggestion that something is actually wrong. Maybe it’s worth following up on, but you definitely should not take it too seriously.
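A quick back-of-the-envelope calculation shows why the multiple hypothesis testing point bites; the numbers below are illustrative assumptions, not figures from Daniel’s actual report.

```python
# Even if each individual variant call is 99.9% accurate, checking thousands
# of them makes at least one false positive very likely.
per_call_accuracy = 0.999
n_variants_checked = 5_000  # hypothetical number of variants a third-party report scans

p_at_least_one_error = 1 - per_call_accuracy ** n_variants_checked
print(f"P(at least one erroneous call) = {p_at_least_one_error:.2f}")  # ~0.99
```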
Robert Wiblin: Yeah. Okay, so returning to the PhD. I know quite a lot of people who have applied to do ML PhDs. Sounds like you went into it with perhaps not quite enough background in ML, like it was going to be a lot of work to keep up. Do you think… does that generalize to other people? Should other people try to do masters or some other study first?
Daniel Ziegler: Yeah. I mean, I think that probably it would be smart to start a few years earlier and… to get into a top PhD program, you kind of need to have done some research already, and gotten some publications, or at least done research that’s got you really good letters of recommendation from your advisors. And it would be a lot more convenient if that was just in machine learning already. So if you have that head start and can start doing machine learning research right away, that’s definitely gonna be a much easier path.
Daniel Ziegler: I think what I did is certainly doable. People do make those kinds of transitions. But you should definitely expect it to be a little bit more difficult.
Catherine Olsson: Can I shove in with some more PhD rants?
Robert Wiblin: Definitely.
Catherine Olsson: There’s a lot of advice about doing PhDs out there. I have just some meta advice, which is that that advice often comes without reasoning. It’s a barrage of: read lots of papers! Also talk to lots of people. Make sure that you’re publishing, but only in the good conferences. Pick a really good advisor, you know, who has all the good advisor qualities. There’s a lot of advice that’s not really grounded in any particular goal, and when I turned up to do a PhD, I was just trying to follow all of the advice in a kind of undirected way. I just wanted to study stuff in this area.
Catherine Olsson: I now think a PhD is gonna go a lot better if you have a pretty clear goal, even if it changes, right? What’s the saying? Plans are useless, planning is essential? I think I’ve mangled that a little bit, but come in with something you’re trying to do.
Catherine Olsson: A PhD is a little like a hacker space, where if you’ve got a project in mind, even if it’s kind of a toy project that you’re gonna throw out, you can come in and start using those resources towards something. Whereas if you turn up at hacker space and just start plugging stuff together, you’re not gonna produce anything. And PhDs in general are often particularly tough if you don’t have experience with yourself of how to motivate yourself in an unstructured environment. I don’t tend to think of that as like, oh, there’s people who can do it and people who can’t. It’s like, have you got the life experience under your belt, whatever it takes for you to know how you work best when you aren’t given that structure by someone else.
Catherine Olsson: I would recommend that folks consider doing something other than going straight from undergrad to PhD, because the undergrad experience is so scaffolded and structured, where you’re assigned tasks, and the PhD experience is so unscaffolded and unstructured, that anything you can do to give yourself some time to feel out, “What am I like when I’m working on something where no one has told me what the steps are?” is valuable. Feel that out in other, lower-stakes contexts, whether that’s side projects or a job where your manager lets you spend 20% of your time on less structured stuff.
Catherine Olsson: Then once you feel like you’ve got your own motivational house in order, you can jump into something as motivationally fraught as a PhD with the experience you need to build your own motivational scaffolding.
Robert Wiblin: Yeah. So you left after two years, was it?
Catherine Olsson: Three years, yeah. I would’ve left after two but unlike Stanford, NYU is not as lenient with taking leaves, and so I spent a whole other year trying to make it work for me, which… it didn’t, but I tried.
Robert Wiblin: Yeah, do you wanna talk about that?
Catherine Olsson: Yeah. So, as I said, my experience coming in was that there were topics that I found interesting and wanted to work on, but I didn’t have a specific goal like, “I wanna discover this thing about how the brain works in order to achieve this outcome.” It’s interesting. For a long time, I was unhappy, but I didn’t know why. Everything on the surface seemed to check out: good lab, good project, getting the skills that I need. Why am I unhappy? And ultimately I realized that I’m allowed to just leave cause I’m unhappy. I don’t need to know why.
Catherine Olsson: I do now have hypotheses that I think are pretty solid. As I said, for example, comparatively solitary work and comparatively slow feedback cycles, which led me to something like software engineering which is more group or team based a lot of the time and has faster feedback cycles. But also I think not having a particular thing in the world that I was trying to move or do or cause, I think, led me to feel ultimately, what of this is gonna matter? And the pure raw curiosity wasn’t driving me as much. And I think some folks can succeed motivated just off the scientific curiosity of, “How does this work? I just wanna figure it out.” But for me, that has to be just one piece of the large puzzle.
Daniel Ziegler: Yeah, and I think I sort of was afraid that something kind of similar was gonna happen to me. Where I sort of wanted to work on AI safety in the abstract, but I didn’t have a particular research agenda that I had come up with that I was really burning to try to execute on and specific questions that I really wanted to understand. So without that, I think it would’ve taken a while until I had actually found something that I thought was valuable.
Catherine Olsson: So can I get on a soapbox about that? Like, one of my favorite soapboxes to get on, but I’m gonna… I’m gonna get on.
Daniel Ziegler: Okay.
Robert Wiblin: Love soapboxes on this show, so go for it.
Catherine Olsson: So the thing you’re pointing out is that you wanted to work on AI safety in the abstract, but you didn’t have a particular question that you wanted to work on. I see this a lot in people that I talk to that are interested in AI safety, and my sort of party line is that “AI safety” is not one thing. It’s definitely not one field. If anything, it’s a community of people who like to use the phrase “AI safety” to describe what they’re interested in. But if you look at what different groups or different people are working on, they’re very very different fields of endeavor. So you have groups that are trying to take deep reinforcement learning and introduce a human feedback element so that you can learn human preferences in a deep RL system. That’s one research agenda. There are many ways that that could end up in a future system.
Catherine Olsson: Another agenda is, for example, MIRI has folks working on decision theory. If we understood decision theory better, then we would know better what a good system should be like. Okay, decision theory theorem proving is just categorically a completely different type of work from deep reinforcement learning. You’ve got groups that are working on… so like my group, for example, working on robustness in machine learning systems. How do we know that they’ve learned the thing that we wanted them to learn? Also a completely different field of endeavor.
Catherine Olsson: And it’s very important to keep in mind if you’re looking for a, quote, career in AI safety: what exactly is it that you think is gonna be important for the trajectory you think the world is gonna be on? And then what are the particular subskills that it’s gonna take? Because it’s not a monolith at all. There’s many many different groups taking many many different approaches, and the skills you need are gonna be extraordinarily different depending on the path.
Daniel Ziegler: Yeah. And one thing I wanna say, and I wonder if you agree with this, is to some extent, it is possible to off-load your… especially if you’re going for more of a research engineering position, it is possible to off-load some of your opinions onto other people. Like I sort of looked out at all the different people doing different kinds of AI safety work, and basically, my call was that what the OpenAI safety team was doing was to me seemed like the most promising and the most valuable approach. And I did have to make that decision myself, and understand what I thought the trade-offs were between different approaches, but I didn’t have to come up with the approach myself. And I was able to piggyback on other people’s thinking about this space, and join this team with an existing research agenda that I could just help out with. That, I think, was really useful. That said, the more you can have your own opinions and your own ideas about AI safety, the better, for sure.
Catherine Olsson: I’ll answer your question, but I wanna push you a bit.
Daniel Ziegler: Uh-huh.
Catherine Olsson: Approach to what? You thought it was the best approach to what, exactly?
Daniel Ziegler: Yeah, I think that’s absolutely the right question to ask. I think it’s definitely a mistake to come in from the EA space and be like, “Oh, AI safety, there’s this thing.” Which I think is exactly what you’re pointing at. So to try to be more specific about how I think of what I’m trying to solve, is basically, I expect that at some point in the future… and it could be in not all that much time, we’ll have powerful AI systems that exceed human performance on many kinds of cognitive tasks.
Catherine Olsson: And sorry, you mean some sort of autonomous agent-like system?
Daniel Ziegler: So it doesn’t necessarily have to be agent-like systems. It could be systems that are just recommending steps for humans to take or whatever. But they might be systems that are taking in a huge amount of information or doing a whole bunch of reasoning on their own in a way that’s really difficult for humans to oversee. And I think there’s gonna be really powerful economic incentives and strategic incentives for various actors to deploy powerful systems like this.
Daniel Ziegler: So I don’t want to end up in a world where there’s this vacuum where there’s this powerful incentive to deploy systems that are used to help make important decisions or maybe are acting autonomously themselves, but we don’t know exactly how to specify what they should do in an effective way. So that’s really why I’m super excited to work on some of the stuff that the OpenAI safety team’s working on, which I would definitely wanna explain a little more about.
Robert Wiblin: Almost the core of the conversation is that the path from doing and finishing a PhD to then going into one of these roles is pretty clear, and we’ve got lots of coverage of that. But I think people are a lot less clear about how you get into really useful research roles in AI safety, broadly construed, without finishing a PhD.
Robert Wiblin: So you left the PhD and then what happened next? What did you decide to do and why?
Daniel Ziegler: I basically just spent a couple of months taking some time off and thinking about what I wanted to do. I did a couple of little projects, like I helped out a little bit with a research project at CHAI, the Center for Human-compatible AI at Berkeley. I thought a little bit about how could you try to use prediction markets to try to understand what’s gonna happen with AI. But yeah, then I decided, I think, that like… I’ve always enjoyed software engineering and I like building stuff, so I thought that trying to work in a research engineering position for an AI safety org could be a really good path for me. So I decided to apply to OpenAI and to MIRI, and I spent basically a solid month and a half just preparing for the OpenAI interview.
Daniel Ziegler: What I did was, Josh Achiam who was one of the researchers on the OpenAI safety team, has a list of fifty or so deep reinforcement learning papers, like the key papers in deep RL. And I just went through this list together with a housemate of mine, and we read one or two papers a day, and picked a handful of them to implement and to actually try to reproduce. So I spent a bunch of time coding in Python and TensorFlow, coding up these deep reinforcement learning papers and trying to debug things and trying to tune things until they were actually working reasonably well.
Daniel Ziegler: I think generally the advice is like, if you’re trying to get good at something, just do that thing and then see what’s necessary to be able to do well at that. So I just jumped into that, and then I applied to OpenAI, got that job. I also went through MIRI’s interview process, and they did end up giving me the offer as well. And then I spent some time trying to decide and ended up thinking OpenAI was the place to go.
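For a sense of the shape of the work Daniel describes, here is a deliberately tiny sketch of a REINFORCE-style policy gradient loop on a toy bandit problem. It is an illustration only, not a reproduction of any paper on Josh Achiam’s list; real reproductions involve neural-network policies in frameworks like TensorFlow, plus a lot of the debugging and tuning Daniel mentions.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.1, 0.4, 0.6, 0.9])   # hypothetical expected reward per arm
logits = np.zeros(4)                          # policy parameters (softmax over arms)
lr, baseline = 0.1, 0.0

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for _ in range(2000):
    probs = softmax(logits)
    a = rng.choice(4, p=probs)                # sample an action from the policy
    r = rng.normal(true_means[a], 0.1)        # sample a noisy reward
    baseline += 0.01 * (r - baseline)         # running baseline to reduce variance
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0                     # d log pi(a) / d logits for a softmax policy
    logits += lr * (r - baseline) * grad_log_pi  # REINFORCE update

print("final policy:", np.round(softmax(logits), 2))  # should concentrate on the best arm
```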
Robert Wiblin: So it seems like there was a huge potential shortcut here, cause you could’ve spent, what, four to seven years doing the PhD –
Daniel Ziegler: Absolutely.
Robert Wiblin: And instead you spent six weeks reading papers. What do you make of that?
Daniel Ziegler: Yeah, yeah. Certainly, I think I would’ve hoped to do useful research, AI safety research, during my PhD as well. I think that if I hadn’t been able to do that, I would’ve been pretty unhappy spending all that time just sort of in preparation for hopefully doing something useful. Even so, I think this was a much more effective path than I could’ve taken otherwise.
Catherine Olsson: I’ll emphasize that I also see that the path might be shorter than you think as a theme, at least right now, in 2018, in the landscape, where as I mentioned there’s many different corners or different approaches or groups working on something that could be construed as AI safety or could be relevant to AI safety. And some of these corners, like Dario’s team at OpenAI, are just hiring engineers. Similarly, MIRI is just hiring straight up software engineers. Now, it’s not the case that every problem that needs to get solved in order to deploy powerful systems safely is at that point, but some of them are. And so if you are a talented software engineer and you’d like to start working on it, the state of the questions right now is that some of them are just ready to throw engineers on. And so if you haven’t just tried applying to the position that you want, just try. Just see. You might actually be ready for it.
Catherine Olsson: There are many total unsolved wide open pre-paradigm questions that you can’t just throw a software engineer on. And those need a different sort of approach, so it’s valuable for folks who think these questions are important to consider the full spectrum of problems that we need to solve and see which of those can I best contribute to. And some of them, we can just throw software engineers on, and if that’s what you wanna do, go do that. And some of them need highly productive researcher effort to crack open pre-paradigm corners of the space, and that’s just a very different kind of thinking. It’s a very different kind of work. But all these kinds are currently useful.
Robert Wiblin: Yeah. What are some of the problems that are ready for software engineers?
Daniel Ziegler: Yeah, so I mean, on the OpenAI safety team, we just decided to open up a position for just a generalist software engineer that doesn’t necessarily have to have any ML experience. And the idea there is that we’re getting big enough now, we’re like six people, and with a couple of new people joining us soon, that we do have a bunch of tasks where a software engineer can help us a lot. Things like making a website to collect human training data or optimizing various parts of our machine learning workflow to make things run faster, to manage all the experiments that we’re running. So basically both things that are directly part of the experiments and things that will help on the workflow for the researchers and the research engineers. And that’ll all be very useful.
Robert Wiblin: Catherine, it sounded like you had some other things in mind as well?
Catherine Olsson: Yeah, so I was also gonna point out that in the field of adversarial examples, which is in my opinion a research field that has a current paradigm… and I sort of mean paradigm in the Kuhnian “Structure of Scientific Revolutions” sense, where there’s a set of questions and there’s a set of tools, and there’s a community of researchers that agree that if you apply those tools to those questions then you yield something that we all recognize as progress. So there’s a crank to turn, and there are many many researchers who are now working on adversarial examples turning that crank.
Catherine Olsson: And so that’s a case where you don’t have to necessarily know how to go into a totally uncharted research field and find a foothold in the middle of nothing. If you’re a competent researcher or research engineer, there’s work you can do right now on attacks and defenses in the current adversarial examples paradigm.
Catherine Olsson: And I think that’s not quite just ordinary software engineering. That’s like, if you know TensorFlow and you can read your way through papers, you don’t have to be at a PI, Principal Investigator, level, like a researcher who’s already got their PhD. You can be an early-stage PhD student and sort of jump on some of these open problems in adversarial examples and push that forward.
Catherine Olsson: Now in my view, I think that field is gonna need to move on to the next paradigm, right? Like, the “make small changes to images” paradigm is currently still fruitful, but it’s starting to sort of show cracks in that paradigm. That’s not a particularly realistic threat model in the real world. What could we move to that’s a better threat model? Or the theory of adversarial examples, also just starting to get opened up.
Catherine Olsson: And so I think this is a regime where if you’re a research engineer or early stage researcher, there are these sort of juicy hand holds that’ll help us move this paradigm into the next phase or the next paradigm to come, where we’re starting to really understand what’s going on with these problems and then how that might relate to systems that actually get deployed in the real world.
Robert Wiblin: So just to back up, you left your PhD, and then what happened?
Catherine Olsson: Right. So as I was considering leaving, that was the spring of 2016. That’s when I reached out to OpenAI, among other places I’d reached out to, and said like, “Hey, do you have a job for me?” And OpenAI said, “Oh actually, here, do a trial with us. Spend your afternoons and evenings, throw together this API for this tool that we’ve built.” And that was a really fun way for me to just see what that work would be like. And that went well and I said like, “Okay, well I’m about to go on vacation, but when I get back, happy to do more trials.”
Catherine Olsson: So I moved to San Francisco with the hope that that would work out, but also sort of considering other job opportunities. And that did end up working out and I was able to jump over to OpenAI then. But it was a bit of a leap of faith, like, “Well, I’m gonna go to San Francisco. There’s a lot of jobs that I like out there and one of them I’ve already had a successful initial trial, and there’s other opportunities I could consider as well.”
Robert Wiblin: Yeah. So I guess for both of you, what do you think it was that made OpenAI interested in hiring you?
Daniel Ziegler: I think for me it was mostly just, you know, doing really well in the interview. I mean, once you get your foot in the door, once you can convince them that you’re reasonably likely to be capable of doing their kind of work based on your background, it’s really just a matter of demonstrating in the interviews that you’re able to quickly code a bunch of machine-learning, reinforcement-learning code and get it to work and do the kind of stuff that you’re going to do day to day on the job.
Catherine Olsson: Different organizations are looking for very different skill sets or personality types. OpenAI, when I joined, was very small. I was really excited about jumping in on the ground floor of something that was still forming itself, and I think I brought that demeanor to the table as well as just the technical chops of like, yep, you can throw me in on coding something that I’ve never seen before and I’ll be able to do it. For any given organization that you might be looking at applying to, great questions to ask, and this is generic career advice, are things like: what’s valued here? What sort of person thrives here? It’s going to be very different for different places. Now that I’m at Google, I feel like it’s a different set of skills that I’m bringing to the table. It’s not like a totally new, changing culture. It’s actually quite an established culture, but now the types of skills that I’m bringing to the table include working on a team to figure out what this team needs. What project could I pick up to fill in the gap here? What skills can I develop that are underrepresented among this group of eight to 10 of us who are working on similar problems?
Robert Wiblin: So, you work on that computer game thing, I’m guessing, at the time, in 2016-17 at OpenAI?
Catherine Olsson: Yes. I was working on the Universe Project.
Robert Wiblin: That’s it.
Catherine Olsson: Exactly.
Robert Wiblin: It’s not called a computer game.
Catherine Olsson: Well, then I worked on the other computer game thing, namely the DotA 2 project.
Robert Wiblin: And that was wound down, I think, in 2017, because… the first one, Universe, people just found out it was too hard and they thought, oh, well, we’ll have to do this…
Catherine Olsson: Yes. So Universe was a project to allow any game that you can run in a Docker container, like any game that you can run on a computer basically, to be an environment for reinforcement learning research. It was a highly ambitious goal. The hope was to unlock many more environments. The quantity of the environments would drive research progress.
Catherine Olsson: My take on why it didn’t ultimately get taken up as a set of environments by researchers: one reason is that they ran in real time. The ability to run faster than real time is currently a major driver of progress in reinforcement learning. Another is that the environments were not all that high quality, so you don’t necessarily know exactly what training signal your agent is being fed, whereas current work proceeds by operating in a small number of environments that researchers know very intimately, which was not the vision of that project.
Catherine Olsson: I still think something like that is a really useful target for transfer learning or generalization in agents, but it wasn’t the correct next step for the research community to be tackling.
Robert Wiblin: Okay, so people have moved on to DotA 2. This is Defense of the Ancients.
Catherine Olsson: Yes.
Robert Wiblin: It’s a computer game that AI’s are learning to play.
Catherine Olsson: Right. So, DotA 2 is a very popular multiplayer computer game, five players on a side. It requires a lot of teamwork, communication, long-range strategy, et cetera. When I joined that project, it was first starting up. I was working on the evaluation system. So how do we know how good this agent is? We came up with a system for playing the agents against one another and ranking them, so you can see the progress over time. It’s a multi-agent, or player versus player, setup, so you don’t have just one score, like how good is my classifier or how good is my agent against the level. Your agent is always tied against itself as it’s doing self-play. So turning that, while it’s always just tied, into a meaningful measure of progress involved loading past and later versions of the agent, playing them against one another, and then looking at how that win rate changed over time.
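A minimal sketch of the kind of checkpoint-versus-checkpoint evaluation Catherine describes might look like the following; `play_match` is a hypothetical stand-in for actually running a game between two saved agents, and the overall shape is an assumption, not OpenAI’s actual evaluation code.

```python
import numpy as np

def checkpoint_win_rates(checkpoints, play_match, games_per_pair=20):
    """Play saved training checkpoints against one another.
    play_match(a, b) should return 1 if agent a wins, else 0."""
    n = len(checkpoints)
    win_rate = np.full((n, n), 0.5)           # an agent against itself is roughly a coin flip
    for i in range(n):
        for j in range(i + 1, n):
            wins = sum(play_match(checkpoints[i], checkpoints[j])
                       for _ in range(games_per_pair))
            win_rate[i, j] = wins / games_per_pair
            win_rate[j, i] = 1.0 - win_rate[i, j]
    return win_rate   # if training is working, later checkpoints beat earlier ones
```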
Robert Wiblin: Okay, so, Daniel, now that you’re working at OpenAI, what is the main agenda there that you’re contributing to?
Daniel Ziegler: Yes, so maybe I will start by describing some of the past work that the safety team did when I wasn’t there yet. There was a NIPS paper last year called, Deep Reinforcement Learning from Human Preferences. The idea there was, normally in the reinforcement learning paradigm, you have some agent acting in some environment, so it might be playing a video game or it might be controlling a robot, it may be a real robot, maybe a simulated robot, and it’s trying to achieve some sort of well-defined goal that’s assumed to be specified as part of the environment. So in a video game, that might be the score. In robotics tasks, it might be something like run as far as you can in 10 seconds, and something that’s a hard-coded function that’s easily specified as part of the environment.
Daniel Ziegler: For a lot of more interesting, real-world applications, that’s not really going to work. It’s too difficult to just write down the reward function that tells you exactly how well you’re doing, because there’s just too many things to take into account. The Safety Team said, okay, let’s relax this assumption and instead of assuming that the reward function is built into the environment, we’ll actually try to learn the reward function based on human feedback.
Daniel Ziegler: So in one of the environments, which was a little simulated robotics task where you have this little hopping agent, just like a big leg basically, we gave humans these examples of what the leg was currently doing. We gave them two examples, one on the left and one on the right, and then the human had to decide which of those was doing a better job, according to whatever the human thought the job should be. So one thing we got the little hopper to do is to do a backflip. It turns out, it’s actually pretty tricky to write down a hard-coded reward function for how to do a backflip, but if you just show a human a few hundred times, is this a better backflip or is this a better backflip, and then have the system learn from that what the human is trying to aim for, that actually works a lot better.
Daniel Ziegler: So the idea is, now instead of having to write down a hard-coded reward function, we can just learn that from human oversight. So now what we’re trying to do is take that idea and take some other kinds of bigger mechanisms for learning from human feedback and apply real, natural language to that. So we’re building agents which can speak in natural language themselves and maybe take natural language feedback, and trying to scale those up and move in the direction of solving more real tasks.
Daniel Ziegler: In the past, we’ve just solved all these really small tasks, but they really are toy tasks. We don’t actually care about making agents do back flips. We care about solving actually interesting problems and working with natural language is a step on the way to that.
Robert Wiblin: It’s understanding feedback that people say? What do you mean?
Daniel Ziegler: Yes.
Robert Wiblin: That sounds pretty wild.
Daniel Ziegler: Yes. I mean, obviously, we’re not at the point where it really works with full human natural language and actually understands what you’re saying, although natural language models in machine learning are starting to get surprisingly good. You can train them to the point where they are saying things which sound like pretty reasonable text. It’s an ambitious step to try to start working with those things and see how well we can make our alignment ideas work with natural language. The sooner we start, the more time we’ll have to make that work. We basically want to be on the cutting edge as soon as we can.
Robert Wiblin: So at the moment…
Daniel Ziegler: You know, we’re not, we’re definitely starting pretty simple. We don’t have anything really groundbreaking yet, but that’s the direction we’re moving in.
Robert Wiblin: So, I’ve also interviewed Paul Christiano who’s working at OpenAI. He was talking about this amplification, what’s the name of this approach that OpenAI has been pioneering lately?
Daniel Ziegler: Yes. The idea with amplification, so going back to human feedback where we have humans evaluating the outcomes of an AI system acting and trying to compare which outcomes are better, we think that’s a step forward, but there’s still a lot of limitations with that because, for a lot of interesting applications, it’s going to be really hard for a human to understand what’s going on well enough to oversee this big, complicated process that might be going on. So humans might, at some point, lose the ability to directly evaluate everything that’s coming out of an AI system.
Daniel Ziegler: Yes. The idea with amplification (and we also have a related idea around debate, from Geoffrey Irving, another researcher on the safety team) is basically this: imagine we’re trying to build some kind of powerful question-answering system. So you give it some kind of question, say, how good is this transit system design, or where should I go on vacation, or something like that. We want to train a system that gives good answers to this kind of thing.
Daniel Ziegler: So let’s take amplification. The idea with amplification is you take some question that you want answered, and one place you can start is to be like, alright, let’s let a human think about that question for 10 minutes and see what they come up with. So you can try to train a system with a bunch of examples of questions, where you give the human some question. You let them think about it for 10 minutes and then they give some answer. You train a system to try to imitate humans on that.
Daniel Ziegler: To be clear, it’s out of reach of current ML systems, but in principle that is a well-defined ML problem. That wouldn’t be that exciting, because if we managed to do that, we’d only have built a system which can imitate a small amount of human reasoning, but not do anything more capable than that.
Daniel Ziegler: So then the idea is, alright, why don’t we let that person ask some sub-questions as they’re thinking. They get some question like, how good is this transit system design. Then they can break it up into a couple of smaller questions, like how much is it going to cost to build, and what’s the economic benefit going to be. Those sub-questions then get answered by another human who also gets to think for 10 minutes, and that human can in turn ask more sub-questions. So you get this giant tree of little humans, or human-like things, thinking for just 10 minutes apiece, but potentially you can do a bunch of really interesting reasoning in this gigantic tree. The idea, of course, is that we’re not actually going to have this giant tree of humans doing this really weird computation. We’re actually going to train ML systems to imitate the individual humans in this tree, and we’re also going to train ML systems to try to predict the output of the entire process, so it goes straight from a question to an answer that potentially involved a whole bunch of reasoning.
Daniel Ziegler: The hope is that – I guess the idea behind this kind of system is that we can try to figure out what humans would have said to a particular question if they had been able to think for thousands of years and spawn a bunch of copies themselves and have this big, gigantic deliberation process and then finally come up with an answer. Since we can’t actually do that, we’re going to try to make an ML system do that for us.
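The recursive structure Daniel describes can be sketched in a few lines. This is my own simplification of the amplification idea, not OpenAI’s implementation; `decompose` and `combine` are hypothetical stand-ins for what a human (or a model trained to imitate one) would do in their ten minutes:

```python
# Minimal sketch (a simplification, not OpenAI's code) of the amplification
# idea: each "human" only does a small amount of work on a question plus the
# answers to its sub-questions; the tree of such calls does the heavy lifting.

def decompose(question):
    """Hypothetical: break a question into sub-questions, or [] if it's simple."""
    if question == "How good is this transit system design?":
        return ["How much will it cost to build?",
                "What will the economic benefit be?"]
    return []


def combine(question, sub_answers):
    """Hypothetical: produce an answer given answers to the sub-questions."""
    if sub_answers:
        return f"Answer to '{question}' given {sub_answers}"
    return f"Best quick answer to '{question}'"


def amplify(question, depth=2):
    """Recursively answer a question with a bounded tree of short deliberations."""
    if depth == 0:
        return combine(question, [])
    sub_answers = [amplify(q, depth - 1) for q in decompose(question)]
    return combine(question, sub_answers)


print(amplify("How good is this transit system design?"))
# The distillation step (not shown) would train an ML model to map the
# top-level question straight to the tree's final answer.
```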
Robert Wiblin: What kind of safety work is there at Google Brain? I guess it’s not the key focus there as much, but…
Catherine Olsson: Well, so, yes. Again, this comes back to what makes AI safety, AI safety. If you view it as a community of people who refer to their work with that term, there’s really not much happening at Google Brain. If you zoom out and ask what the goal is, then one potential way to view the goal of AI safety is to expand the palette of types of outcomes that we can get from AI. If all we can do is build systems that maximize a single number and do so in ways that we can neither inspect nor modify nor have any assurances about, that’s an extremely limited palette, and only quite dismal outcomes will come from that. We’d like to expand that palette to be able to specify things that are harder to specify. We’d like to have more assurances that it’s going to go well, more ways to inspect that, more ways to modify that, etc.
Catherine Olsson: Chris Olah, who actually just announced that he’s moving to OpenAI, has been doing a lot of fantastic interpretability work. There are other groups within Google, for example the PAIR group, People + AI Research, that are also doing interpretability work. The goal there is to be able to inspect and understand what systems are doing, and if it appears from your inspection that a system is going to do something you’d like it not to, that’s one avenue for intervening. As I mentioned, robustness and adversarial examples are definitely a focus. There’s a broader security and privacy umbrella, which is broadly looking into how we can make sure that systems, classifiers or others, are not revealing data about users that we don’t want revealed, or manipulable by adversaries in ways we don’t want them to be. That’s more assurance that these systems are behaving the way you’d like them to. There’s a lot of fairness work that Google is focusing on. Fairness is an interesting one, where I think it’s not obvious to many people that a system that’s producing unfair outcomes for people is not aligned with our values.
Catherine Olsson: So there is extremely concrete and practical evidence that we already have systems that are not aligned with human values, and if we just crank up the power, deploy them in more contexts, give them more decision-making power, more autonomy, that’s not going away. There are many different angles on this. Another is the work on learning from human feedback. Some of the music and art generation teams are looking into how to incorporate human preferences for what sort of music or art they’d like to generate. That doesn’t appear on the surface to be “safety,” but, again, I sort of view it under the lens of expanding the palette of types of outcomes that we can use AI to achieve as opposed to just “Hammer on this number. Whoops, it didn’t correspond to what we wanted. Oh no.”
Robert Wiblin: So what is the organizational structure that ML groups have? Like, what are the different roles that you fill and that other people fill?
Catherine Olsson: So it’s different at different groups. If you’re applying to a specific place, always make sure to ask them, what exactly is this role going to do? Very broadly speaking, to try and generalize across groups, a research scientist is someone who has more experience leading research projects of their own or selecting their own research questions, and is directing that sort of program. A research engineer has more focus on implementation in some sense, so that could be scaling things up, or it could be quick, memory-efficient implementations, whatever that might be. Different people come in with different skill sets, so which title you end up in is often just an approximation to what your strengths are when entering.
Catherine Olsson: In my role, there’s no restriction on what I can or can’t do. I’m going to do better at projects that leverage the skills that I have, which are not currently those of independently picking a research direction and leading it on my own, although I am working on those, because I think they would better unlock the set of projects I could work on. Many organizations have a more clear-cut structure, where research engineers might get assigned to a particular project that needs whatever skills they come in with, but I think it’s often more useful to look at what skills a given person has as opposed to what title they end up with.
Daniel Ziegler: Yes. At OpenAI, certainly on the Safety Team, it’s a pretty fluid distinction as well. I think that, given the state of ML right now and the fact that it’s such an empirical field, maybe 75% of what an ML researcher will do is exactly the same kinds of things that an ML engineer is going to do, like implementing new machine-learning models and running a bunch of experiments with them, tuning them, fixing bugs, and seeing what works well. Those are all things that I do basically on a day-to-day basis. Then what a research scientist is going to do on top of that is be really up to date with the literature and come up with a bunch of new ideas for what direction to move in.
Daniel Ziegler: Right now, basically on the Safety Team, for the most part, on a high level, I am implementing ideas that some of the research scientists have come up with, but when it comes to the smaller decisions like, how can we try to make this ML algorithm work better, what’s the next experiment to run, sort of day-to-day decisions like that, I am definitely making those decisions myself a lot of the time and could fluidly move into setting more of the high-level vision as well if I wanted to.
Catherine Olsson: For myself, I’ve been doing a mix this past year of working on my own project that I came up with, with help from my manager and other folks contributing to that project too, especially in the last stages after the prototypes had started working, and also jumping in on other people’s projects. One skill that I’ve tried to develop is being able to leap into something where I don’t have very much context, spin up quickly, and contribute critically, so that if colleagues of mine are up against a deadline and need some help in that final mile, I can jump in and be useful even though I haven’t had months and months of context.
Robert Wiblin: So I’m mostly familiar with biology labs. How much does it look like that? A typical lab would have one PI or one lab leader, and then a bunch of PhD students and a bunch of other people doing research under them in that kind of hierarchy. Is that kind of similar, or do you have a research scientist who is always overseeing what the engineers are doing?
Catherine Olsson: I don’t think an academic model is a particularly good fit, largely because there are many more research scientists. It’s not quite so much a PI and grad student type model; research scientists and engineers of all levels will collaborate in very organic groups with mixes of skill sets. The person with the title of research scientist also has a manager, who is probably also a research scientist, who they will then go to for advice. I don’t think it’s quite as strict as, once you’re a PI, you lead your lab and tell your grad students what to do. It’s much more fluid. I think that’s actually a strong benefit. In industry labs there isn’t as much pressure to be the PI of the such-and-such lab. You do have to demonstrate that you are good at leadership and leading independent projects, etc., but there is a lot more flexibility to collaborate in groups of whatever composition a particular project is well suited for.
Daniel Ziegler: I think it varies a whole bunch. A lot of the projects the safety team has worked on have been like one researcher, or maybe one and a half researchers, working on something that’s their own idea, and now, with starting to use more natural language, we’re starting to have a slightly bigger team, so there are four-ish people working on that, which is new for the safety team. There are also a bunch of really big teams at OpenAI, like DotA and robotics, that are just big engineering efforts with lots of people working on them. I think it sort of spans the spectrum.
Robert Wiblin: Yes, so it sounds like the roles are a lot more fluid, perhaps, than what I was imagining: people kind of shift between doing different tasks just based on where they can contribute the most to the project as it stands.
Catherine Olsson: Right. Exactly. It’s just very dependent on what a particular project needs, what skills you have, what skills are within reach for you to develop next. It’s a much more organic process, I think.
Robert Wiblin: Yes. I guess I’ve heard people debating how valuable it is to get someone hired at OpenAI as a research scientist versus a research engineer. It sounds like that might almost just be a confused question, because it’s going to depend so much on the person. In fact, there isn’t a clear barrier.
Catherine Olsson: There’s no clear barrier. Again, it just comes back to skills. I think that question, how valuable is it for a person to be in role A versus role B, really comes down to what skills are implied by the person getting hired as A or B. I think folks who are able to come up with new research agendas that are well targeted at problems that need solving are, of course, incredibly valuable, and so are folks who can jump on existing agendas and push them forward. They’re valuable in different ways. I think as the set of questions under the safety umbrella evolves and matures, different skills will be needed in different parts of the problem. That’s something I was pointing at before: right now there are corners of the space that you don’t even need any machine learning experience at all to contribute to. There are other parts of the space that would be difficult to get a handhold on unless you’ve already demonstrated that you can productively do novel research in unexplored domains. These are very different skills that are needed.
Daniel Ziegler: Yes, and I think that, right now, on the OpenAI safety team, we’re at a point where we’re trying to scale things up and we can absolutely use more engineering effort. So when I joined, I was the first research engineer. It seemed pretty clear that the work I was doing was being taken very directly off the hands of some of the people with “Research Scientist” in their title, letting people like Paul Christiano and Geoffrey Irving think more on a conceptual level about how they wanted to build their alignment schemes. So both from the perspective of driving experimentation faster with more concrete prototypes of the schemes, and from the perspective of giving people more time to think on a more abstract level, I think it was really valuable to have me there as a research engineer.
Robert Wiblin: Yes. Do you think there’s a difference in replaceability between, you know, people who have whatever skills come with having finished a PhD and those who don’t? If they hadn’t been able to hire you, what would the next best alternative have been? Someone who’s significantly worse, just not making a hire at all, or someone who’s only marginally less effective? Perhaps more generally, for some people it’s easier to find a second-best candidate than for others?
Daniel Ziegler: Yes. I do think it’s going to vary a lot. Right now on the safety team, both roles seem very hard to replace. I think if I hadn’t been hired, that would have just been one fewer research engineer for at least the next half year or so, although as we continue to grow and reach a bigger hiring pool, that might change to some extent. I think the same thing is true, and probably even more so, for people who are able to contribute more of their own agenda. I think that’s something we can always use more of, and there will be room for that for a long time.
Catherine Olsson: I’d additionally like to emphasize management skill. That’s something that I think hasn’t come up here yet, but a lot of research teams need folks who are both technically competent and good managers. So it’s not just, can you write TensorFlow code versus come up with ideas; it’s also, can you build out a team or manage a project. Those are also incredibly valuable skills. If you’ve got some of one and some of the other, that’ll also go a long way.
Robert Wiblin: That’s kind of a potentially unique combination that’s very hard to hire for otherwise.
Catherine Olsson: And there are additional pieces, like people who are excited about thinking about the strategic vision of what an organization or a team should do, or where humanity is going with all of this. If you’re willing to engage both at the level of debugging TensorFlow and at the level of more policy-like questions, that’s another difficult-to-find combination.
Robert Wiblin: Do you think either of you are learning more on this job than you would have if you’d done the PhD, or continue the PhD?
Catherine Olsson: Absolutely.
Daniel Ziegler: Yes I think so.
Robert Wiblin: It seems that way.
Catherine Olsson: I think the best way to figure out what’s going on is just to dive in. In fact, I’m directly referencing a post by Nate Soares, called Dive In, which I love and recommend, that if you have an extremely concrete plan of how you’re going to contribute that has actionable and trackable steps, you’re going to start getting data from the world about your plan a lot sooner than if you have some unreachable or nebulous plan. I would encourage anyone who’s interested in this sort of thing to look for the smallest step that you can take that brings you just a little closer. If you’re currently a software engineer and you can take a statistics class and maybe do some data science in your current role, by all means do that. Take just one step closer to something in the space of machine learning.
Catherine Olsson: If you can do software engineering at an organization that does ML, then by taking that role you’ve got your face in the data in a much more concrete and tangible way. I think particularly for folks who are coming at this topic from an EA angle, maybe you’ve read Superintelligence or whatever your first intro was, those abstractions or motivating examples are quite far removed from the actual work that’s being done and the types of systems that are being deployed today. I think starting to bridge that conceptual gap is one of the best things that you can do for yourself if you’re interested in starting to contribute.
Daniel Ziegler: Yes, and I would say, try just diving in all the way if you can. Like I said, when I was preparing for the OpenAI interviews, I went straight to implementing a bunch of deep reinforcement learning algorithms as very nearly my first serious project in machine learning. Obviously there were things along the way where I had to shore up on some of the machine learning basics, and some probability and statistics and linear algebra and so forth, but by doing it in a depth-first manner, where I just went right for it and then saw as I went what I needed, I was able to be a lot more efficient about it and also actually practice the thing that I wanted to be doing.
Catherine Olsson: Yes, I definitely second that. Anything that you can do that’s hands-on and tractable is going to get you a lot farther. One mistake I see people make is getting very intimidated by these very long reading lists. There are many reading lists out there for how to get started in AI safety, “read these 12 books”, and people are like, oh my God. People fall into the trap of saying, oh, I’m going to learn reinforcement learning. I’d say, swap that out. Rather than “learn X”, try to “learn to do X”: I’m going to learn how to implement DQN on Atari. Great, now you can tell if you’re on track for that, whereas if you’re trying to “learn RL”, you have no way to know whether you’ve learned it yet. If there’s something you’re trying to do, then you can tell whether you can do that thing yet or not.
Robert Wiblin: So another question I’ve heard people discuss is, inasmuch as you’re trying to get an organization to take safety considerations more seriously, it might be the case that research scientists have more influence over the ideas, the culture, the priorities. Do you think that’s true, or is it just that anyone there who is contributing and has good ideas can get them taken up?
Daniel Ziegler: I think, for the most part, it is the latter. I think that – I feel like I’m in a position where if I have opinions on things or ideas for where the work should go, I will at least get listened to. I think it wouldn’t be that different if I was a research scientist. The more people in an organization that are thinking about safety and really have the long term in mind, the better. I think that even if they don’t have direct decision-making influence, just by being there and talking to people, you can make sure that an organization is moving in the right direction.
Catherine Olsson: I’d like to break down what it means to “take safety seriously”. That has two quite different pieces right now. One is, as a research-producing organization or institution, to focus on research that will do what? It will improve robustness or shape our ability to inspect or provide feedback, whatever it is. So there’s one sense of “take seriously” as in “prioritize research with a certain flavor”. Then there’s another of being cognizant of the effects on humanity of anything that you actually deploy. Those are very, very different: what’s being researched and what’s being deployed, and “taking it seriously” looks very different in those cases. Both are extremely valuable, and I think from the research standpoint, in either of these roles you have some chance to choose what you work on, and you can choose to put the firepower of your output behind the projects that you think are most valuable in any role that you’re in.
Catherine Olsson: On the question of what’s actually getting deployed, at a place like Google, even if you’re just a Google software engineer on Cloud or something, you can be feeding into the organization’s procedures around choosing what’s good to build and what’s good to deploy. That’s actually accessible to people who are totally outside the scope of research – at the edge of what’s getting deployed to real users. I think that’s also valuable, though it’s quite different from the research side.
Robert Wiblin: Yes. To relay a conversation I’ve heard: is there much value in being someone who works on capabilities within these organizations, where it’s not clear that you’re having either of these effects directly? You’re not part of a safety-focused team, nor do you seem to have that much influence over deployment per se, but you’re still part of this broader, important organization that could be influential in the development of machine learning, and you have a particular view about how safety should be given a lot of importance relative to just speeding things up.
Robert Wiblin: Do you have any perspective having been in these organizations of how useful that is?
Daniel Ziegler: Yes, I kind of want to repeat what I said earlier: the more people in an organization thinking about these issues, the better. One great thing about OpenAI is that, throughout the company, I think the safety team is really well respected. It’s a really strong team and it has a bunch of strong people on it. It probably helps that we’re doing legitimate-seeming ML research, but I think having people outside the nominal safety team also caring about these issues can help that situation come about. If you’re at a place where the people working specifically on safety-related things don’t have as much respect as other parts of the organization, that could help. That said, I think it does seem significantly more valuable to actually be working on the stuff directly.
Catherine Olsson: I would say that if your goal is to have some positive impact on the world, you need some plan for what exactly that’s going to look like. Simply being part of an organization and abstractly caring about something is not an impact plan. There are many ways that could go that could have a positive impact, like getting involved in projects that don’t seem directly relevant to the stuff you care most about, as a way to get related experience, if you think there’s better mentorship on that project. That definitely works. You could work your way up the ladder of influence in the organization, and then hope to shape how decision-making in the organization happens, in general, at the high level, and steer the organization towards what you view as more safety-promoting or positive-outcome-promoting decision-making structures.
Catherine Olsson: Now, I’d hope that if you want to do that, you actually know something about which decision-making structures in an organization go better or worse. This is a case where simply standing nearby and caring is not going to have any particular impact one way or the other. But if you become an expert in organizational decision-making and what causes organizations to make safe versus unsafe decisions, and then put yourself in a position of power, that sounds fantastic. I would love every organization to have people with actual expertise about organizational decision-making shaping those decisions. I think simply being aware that there is a problem is not even remotely the same thing as having concrete and particular skills that will cause outcomes to be better.
Robert Wiblin: Okay, so dealing with a slightly different topic now, what does your day look like? How much time do you spend, I guess, in the office? How is that roughly divided between different kinds of tasks?
Daniel Ziegler: Because I care about what I’m doing, I tend to spend maybe ten hours a day in the office, although I’m currently taking weekends off. But, yeah, I mostly work on coding up new stuff in Python, in TensorFlow, new machine learning code; running that code in a whole bunch of experiments for different tasks or different hyperparameter settings to see how well it’s working; and then, based on the results of those experiments, going back and tweaking something to make it better, fixing bugs, and those kinds of things. Some of the time, I’ll be working on something that’s completely new.
Daniel Ziegler: We’ll be solving a task we haven’t approached yet or adding some new twist to the machine learning problem, or we’ll just have some existing benchmark that we’re trying to do better at, and it’s just a matter of making things run faster or improving the training process to make it more effective. So it’s both being able to do new things and doing better at existing tasks. It’s a lot of back and forth. Sometimes I try to have two things I’m working on at the same time, so I can code on one and fire off the experiments, and while those experiments are running, go back to the other thing. It’s a little bit multitask-y, and something I’m still figuring out how to deal with, but, yeah, that’s the bulk of what I do.
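For readers unfamiliar with that rhythm, here is a minimal sketch of what “firing off a bunch of experiments” can look like. Everything here is illustrative: `train.py` and its flags are hypothetical, and in a real setup the commands would typically go to a cluster scheduler rather than be printed.

```python
# Illustrative sketch of launching a small hyperparameter sweep.
# `train.py` and its command-line flags are hypothetical stand-ins.
import itertools

grid = {
    "lr": [1e-4, 3e-4, 1e-3],
    "batch_size": [32, 64],
}

for lr, batch_size in itertools.product(grid["lr"], grid["batch_size"]):
    run_name = f"run_lr{lr}_bs{batch_size}"
    cmd = [
        "python", "train.py",
        "--lr", str(lr),
        "--batch-size", str(batch_size),
        "--run-name", run_name,
    ]
    # In practice you would hand this to subprocess or your cluster's job
    # submission tool; printing keeps the sketch self-contained.
    print(" ".join(cmd))
```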
Catherine Olsson: Similarly, I at least strive to spend the bulk of my uninterrupted time coding, because that’s the most valuable time for code. Any given project looks different at different phases of its life cycle. I also try to have two projects: one that might be a little more “just put in the hours”, where you know what you have to build and you just have to build it, and another that might be more experimental, where you’re throwing together prototypes, reading things that are related, or talking to people about those ideas.
Catherine Olsson: I also spend maybe 20% of my time on what I think of as more in the vein of policy or outreach about what our team is doing and why that’s important, both internally and externally. Some of that is within Google, working with groups that are looking at how we implement the security and privacy principles, or what sort of trainings we might need. Or going to external workshops that are looking for the perspective of folks who know how adversarial examples or ML security fits into that picture.
Robert Wiblin: Have either of you considered working at DeepMind or other organizations and how would one decide which organization to go to if you had a choice?
Catherine Olsson: Personally, I don’t want to move to London right now.
Daniel Ziegler: Same.
Catherine Olsson: So, I’m not inclined to go to DeepMind, but I hear great things about them if you’d like to go to London.
Daniel Ziegler: Yeah, I know a couple of people on the safety team at DeepMind, at the ML security team at DeepMind. I think they’re definitely doing valuable work as well, but I think you should really just look at some of the work that these teams have been doing and see what seems the most valuable to you and also the most exciting for you to work on yourself, then see if you can handle moving to wherever the thing is.
Catherine Olsson: I also want to emphasize again that not everything that’s good to do is labeled AI safety. If you restrict yourself to things that are called a safety team, you’re going to miss a lot of important problems and a lot of good mentorship that’s not under that umbrella. There’s also the flip side: it’s easy to do something that just rhymes with the problems you care about but isn’t actually going to contribute. That can be an okay way to get some related experience. Right, so, one thing to highlight is that many research orgs do some sort of residency or fellowship program.
Catherine Olsson: The Google AI Residency is one of the more well-known ones, but there are many year-long programs like this that’ll spin people up on research. Don’t restrict yourself just to the ones at the orgs that immediately come to mind, because there are plenty of these sorts of fellowship or training programs. Similarly, for any given research question that you find interesting, I’m sure there are many groups tackling it from different angles. So if you have some question in mind, you can go from there rather than just trying to key off whether something is called a safety team or not.
Robert Wiblin: Yeah, are there any specific projects that you think are underrated or people aren’t sufficiently aware of?
Daniel Ziegler: One organization I want to plug is Ought, run by Andreas Stuhlmüller, which is very related to some of the stuff we’re working on on the safety team at OpenAI, but it’s trying to do human experiments around amplification in particular. It’s running experiments to see how it works when you try to break up a question into smaller sub-questions and only have any individual person think for a small amount of time, but have the thing as a whole involve a whole bunch of thinking. I think they’ve been looking to hire research engineers for a while. That might be a place you haven’t heard of. Yeah, the output of their work will be super useful for informing how we want to continue doing research on the safety team.
Catherine Olsson: Yeah, I think one area that people often overlook is verification of neural nets. That’s happening in a lot of different places, but, yeah, I think verification of neural nets is underappreciated. There’s been some early progress. It doesn’t quite work at scale yet, but it could. That would be a fantastic tool to have in our toolkit.
Robert Wiblin: Where do you think you might go next after your current position or project?
Catherine Olsson: Good question. The place that I’m currently trying to go with my career is a combination of having good technical chops, like implementation skills, some nose for research direction, and also management skills. I’d like to be able to empower teams to work better. I’m doing as much of that as I can in my current role, but I definitely would like to move towards a management role in the future.
Daniel Ziegler: I’d really like to move in a direction where I’m doing more research-y work and coming up with my own ideas, my own agenda. I think those ideas would be really useful in their own right, but also just having that kind of big-picture vision would help me make better low-level decisions when I’m doing engineering-type work as well. I’ve noticed that it’s really important to understand what the goal actually is and what we’re trying to aim for when we’re developing a system. I can have an alright idea of that and make some calls of my own, but the better my understanding is there, the more effectively I can work on my own, and the more effectively I can make the right decisions about what’s the right tweak to try next, what’s the number I should actually be trying to optimize, and so forth.
Robert Wiblin: What are the biggest disagreements between people who are interested in alignment in machine learning? Did you have any view on it? Are there any controversial questions that people have heated conversations about?
Catherine Olsson: The primary disagreement that comes to mind is not so much a disagreement you’d have a heated conversation about as a difference in vision: what should we build? If you start from the classic AI safety thought experiments, they basically boil down to, if we were to build some sort of long-range, goal-planning agent with high autonomy that can reason consequentially and is optimizing for a single objective, that will go badly. Yeah, but no one’s about to build one of those. What should we build instead?
Catherine Olsson: I think that negative space is actually a much richer space than people realize. Sure, there’s one thing that would be really bad to build, but what else? What, of the many other things, could we or should we build? I think that leads to a lot of difference in what you see in different agendas. A system like Paul’s amplification project is not necessarily going to go around the world or go around your house and clean your dishes. It’s just not that kind of system. Should we have agents that wash dishes? What should they be like? Should we have agents that make important decisions in major world powers? What should those be like?
Catherine Olsson: Is that even a good idea at all? Should we just make sure we never build those? I think these differences in vision for the future are actually pretty substantial in people’s motivations, in terms of what they work on and what they envision building. And I think they often go unsaid; we sweep all this under the rug of, oh, well, we all just want to build something such that it goes well for everyone. The negative space of “goes poorly” is actually huge.
Daniel Ziegler: Yeah, I would totally agree with that. On the safety team, we’re definitely not trying to build a monolithic superintelligence which is going to rule the world. We’re hoping for a much more gradual transition, where we build systems that can enhance our capabilities and give us useful advice, and that slowly take us to a world where more and more things are being optimized with AI systems somewhere in the loop, but do that in a safe manner and without suddenly transforming the world.
Daniel Ziegler: In general, there’s not that much of a reason to deploy things that are actual agents acting in their own right in the world. Although, I think you get a lot of safety problems even if you just have systems that are giving humans advice. If humans don’t understand why decisions are being made or why a certain thing is being recommended, they could still end up causing a lot of damage even if the AI system isn’t actually acting on its own.
Catherine Olsson: I’ll also point out in a similar vein that I think there are some non-disagreements that appear to be disagreements just because of differences in framing. So, from the community that would call itself the AI safety community, there’s a focus on AGI as an example of an extremely transformative technology, which people think would go badly by default. From other communities within ML, there’s more of a focus on intelligence amplification: “Let’s broaden the scope away from these terminator-like, autonomous, powerful systems, and focus on other ways that automated decision-making can make people’s lives better.”
Catherine Olsson: These are extraordinarily compatible views. They’re almost the same view, that excessive focus on powerful, autonomous systems is going to go badly. Right, that’s exactly what both communities believe, but there are differences in emphasis and differences in language that obscure fundamental commonalities in vision, I think, of what would be good to do, right? Where ought we go as humanity? What would it look like for us to thrive?
Robert Wiblin: Has learning more and more made you more or less worried about how the future is going to go, specifically about ML, but maybe more broadly?
Catherine Olsson: Initially, when I first moved into the field, definitely more. I came from a place of “some people seem worried, I wonder what that’s about” to “wow, there are so many ways this could go badly, different scales and types and scopes and trajectories of badly, but many, many different ways for it to go badly.” I’d actually say that’s still the core of my motivation. People talk about existential risk, but ordinary catastrophes are entirely possible here too. Geopolitical strife over systems that are perceived as strong, even if they’re not, could be quite bad, all the way down to ordinary surveillance, which is pretty goddamn bad.
Catherine Olsson: There are many, many different ways that things could go badly if sufficient care isn’t taken, and that gets me worried across that spectrum. Now, there’s of course reason to then say, well, okay, these are of different scopes or scales. I think on that level, my thoughts are still evolving in terms of what trajectory humanity is likely to be on and how we can steer away from the worst blunders. But I think it’s becoming more and more clear to me that making things solidly good is actually quite hard, and there are many different kinds of badly we could end up in.
Daniel Ziegler: Yeah, I think I followed a somewhat similar trajectory, where in the beginning, when not a lot of people were thinking about it, AI safety seemed like this fringe idea that maybe seemed a little bit crazy, but learning a lot more arguments and a lot more concrete ways that things could go badly definitely made me more worried. One thing that’s made me a lot more optimistic is that I am pretty excited about a good chunk of the work that is being done, including on the OpenAI safety team. I think we do have some approaches that actually have a chance of making the situation a lot better.
Robert Wiblin: Yeah, do you want to talk anymore about the kind of AI policy and strategy side of things? We’ve had conversations with Miles Brundage and Allan Dafoe. Could you imagine going into those kinds of questions at some point in the future? Or, are you very enthusiastic for other people to do it?
Daniel Ziegler: Definitely enthusiastic for other people to do it. Including at OpenAI, actually, Miles Brundage just joined OpenAI as well to work on all those kinds of questions there. Yeah, it’s something that I want to think about for myself and get a better understanding of. I don’t think it’s something I’m going to make my primary focus, but I do think that technical directions absolutely need to be informed by broader policy and governance thinking. It’s also the case that they sort of trade off against each other to some extent. The more we can enable good global coordination and have strong international institutions for safely developing and employing AI, the easier that makes the technical problem. We’ll get more time to solve safety problems and won’t have to think about being in some kind of race dynamic. Yeah, just in general, the easier it’ll make the situation.
Catherine Olsson: I’m really glad that folks like Miles and Allan Dafoe are taking a very sophisticated approach to these questions. Any system that actually gets deployed in the real world is going to get deployed in a sociocultural context with all of its own complexities and nuance. I think it’s really important for those of us who work on the technical side of things to remember that context is extremely complicated. My training is not in international geopolitics, yet the work that I do has those implications.
Catherine Olsson: I think it’s important to remember how important the whole rest of the world with all of its sociocultural complexity is in terms of the context for the technical work that’s getting done. Whether I might go work on that sort of stuff in the future or continue down the path of doing technical research, I think it’s important to me to stay in touch with “what are researchers on the political or sociological or ethical side of things saying?” about the real world impact of these technologies.
Robert Wiblin: What would you say to people who are skeptical about the tractability of working on AI safety at this point?
Daniel Ziegler: I think it’s definitely correct to be worried about tractability, but I think it’s not a question that can be answered completely in the abstract. You really want to look at some of the ideas that are out there for trying to make the future of AI more beneficial and think about how useful those seem. I think that your opinion can definitely vary on that, but people do have a lot of ideas, so it’s worth looking at them. I also want to say that I think the common argument against tractability is something like: we don’t know what really powerful ML systems are going to look like, so it’s really hard to work on them now.
Daniel Ziegler: But, first of all, I’d say it’s not that crazy to suppose that something like the current set of deep learning techniques could be extended to make some really, really powerful systems that have a lot of impact. We keep seeing more and more problems getting solved by our current techniques, which are still really stupid in a lot of ways, but, yet, they seem to be enough to do a lot of things. It’s also the case that the kinds of things we work on in the safety team are pretty general. It doesn’t actually matter that it’s deep neural networks that are in our systems.
Daniel Ziegler: Any kind of black-box system that has some kind of loss function and a bunch of training data, we could just plug into our schemes. If there are a bunch of advances and people pretty radically change the way they do machine learning, as long as it still is machine learning in some sense, I think what we’re doing today will still mostly apply. Of course, some of the specific engineering details that we’re trying out now aren’t going to be relevant anymore, but I think that’s okay.
Daniel Ziegler: I think it’s really important to do really concrete experiments with prototype ideas when we can, just to get some exposure to real data and have a real feedback loop that shows us which parts of these schemes are important and which things don’t seem to matter that much, at least at current scales. So there are lots of useful empirical things we can try. But that doesn’t mean that everything’s going to get thrown out the window when a bunch of advancements happen.
Catherine Olsson: One thing I’ll emphasize on the topic of tractability, which I mentioned earlier, is the idea of a scientific paradigm as a happy marriage between a set of questions and a set of tools that produce demonstrable progress on those questions. I think in the context of a happy and functioning paradigm, there’s plenty of tractable work to do for people with those skills. When it comes to how to take a pre-paradigm question and bring about the first paradigm for it, I don’t think humanity knows how to do that reliably. So, that’s pretty intractable.
Catherine Olsson: Some people seem to be able to do it, but I think keeping that sort of distinction separate is important. Also, not everything that currently is operating as a paradigm is necessarily pointed right at the crux of the problem. Every abstraction that humanity has used has been in some ways incomplete, and yet useful. I think striking that balance between what are the useful abstractions that we can make technical or theoretical progress on, that yet are close enough to the types of real things that we want to see or cause in the world is maybe just the whole problem of science.
Robert Wiblin: Okay, so, let’s move on to the more specialized question for people who are really considering taking action and potentially copying your example or doing something similar. I guess a lot of the lessons that we might be looking at here have kind of already come up somewhat indirectly by looking at your experiences, but let’s make them super concrete and flesh them out even more. What do you think is required to be a good ML engineer? Are there any concrete signs that people can look at to say, yes, I could do this, or, no, I should try to do something else?
Catherine Olsson: Could I actually just not answer your question, and say the first thing that came to mind based on what you said there? One thing folks should keep in mind when looking at my experience, and I think many successful people’s experience, is that there was a huge component of luck. Don’t discount that, right? If I had decided to quit grad school in a different year, OpenAI would not have been hiring rapidly.
Catherine Olsson: But, it just so happens that the moment when I decided to change tracks was the moment where OpenAI needed to hire people with my skill set. That kind of opportunity doesn’t come up all the time, and my life would have looked different if it hadn’t. So, if you’re finding that you’re struggling, it might not be that you lack the skills. It might be that window of opportunity hasn’t come around for you. So, don’t get discouraged, particularly if you’re comparing yourself to folks who you perceive as successful. Often, there was a large component of happenstance.
Catherine Olsson: Of course, you can take steps to have those opportunities come to you more often. You can talk to people who are important in the field, or write blog posts that catch people’s eye, or go viral on Twitter, or otherwise bring some spotlight of attention to yourself, but I don’t want to downplay the pure happenstance in people’s trajectories: what companies happen to be hiring for, or what past experiences people might have had.
Daniel Ziegler: Yeah, I definitely agree with that.
Robert Wiblin: So, setting that aside, setting aside that caveat-
Catherine Olsson: I just want that to be somewhere in there.
Robert Wiblin: No, that makes some sense. You can’t only look at the… well, we’d have to look at other people who took your strategy and the full distribution of outcomes around them, rather than just sampling from the top. You don’t want to select on the Y axis, but hopefully there’s still some wisdom we can get. Perhaps you can look more broadly at other people you know in the field, what choices they’ve made and how things have gone, rather than just your own experiences. Are there any indications of things that are a good sign about your prospects?
Daniel Ziegler: Yeah, I think there are a couple of sorts of skills that you’re going to need to have. You do need to be a pretty good software engineer. You do need to have some understanding of machine learning and statistics, at least by the time you actually do the job. But, honestly, I feel like the best thing to do is try it. You can spend a couple of days just trying to replicate some machine learning paper. You’ll notice pretty quickly whether it’s really painful or really, really difficult for you, or whether it’s something that you can imagine doing. It’s always frustrating, even for people who are very good at it; working with ML can just be a pretty big pain. Part of that is having the right degree of frustration tolerance, but all these things are things you can find out by giving it a go.
Catherine Olsson: The types of software skills that you need to work on machine learning are similar to, or overlapping with, but not identical to the skills that make a software engineer who’s writing production code good at what they do. You need iteration speed. Tom Brown, a coworker of mine and one of the more successful research engineers I know, was previously a startup engineer building web startups, where iteration speed is the primary driver. If you can iterate quickly, even if it seems to you to be in an unrelated technical or coding domain, that’s actually quite a good sign. Also, the kind of good code that you need to write is not necessarily exhaustively tested so much as clear and quick to read and verify that it’s doing what you want. These are very different definitions of good. I think machine learning code, at least research code, is a lot more about being nimble than it is about being exhaustive.
Daniel Ziegler: Yeah, it’s really hard to test machine learning code well. Debugging it can be quite difficult, and the best you can do is make it really easy to read, so you can verify that it’s actually doing the right thing, and then collect a bunch of metrics on how it’s performing to try to get some signal on whether things roughly look the way that they should. But you’re not going to be able to write tests or verify it the way you could other kinds of software.
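As a small illustration of the “metrics as sanity checks” habit Daniel describes, here is a toy training loop (my own example, not anyone’s production code) that logs loss, gradient norm, and accuracy so you can eyeball whether they move the way they should:

```python
# Toy logistic regression on synthetic data, logging sanity-check metrics:
# loss should fall, gradient norm should shrink, accuracy should beat chance.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
true_w = rng.normal(size=5)
y = (X @ true_w + 0.1 * rng.normal(size=256) > 0).astype(float)

w = np.zeros(5)
lr = 0.5
for step in range(200):
    logits = X @ w
    probs = 1.0 / (1.0 + np.exp(-logits))
    loss = -np.mean(y * np.log(probs + 1e-8) + (1 - y) * np.log(1 - probs + 1e-8))
    grad = X.T @ (probs - y) / len(y)
    w -= lr * grad
    if step % 50 == 0:
        # Metrics to eyeball rather than unit-test assertions.
        acc = np.mean((probs > 0.5) == y)
        print(f"step={step} loss={loss:.3f} "
              f"grad_norm={np.linalg.norm(grad):.3f} acc={acc:.2f}")
```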
Robert Wiblin: What fraction of people working as software developers more generally do you think could make transitions into similar roles to the ones that you’re doing?
Catherine Olsson: I’m going to go out on a limb and say most, to be honest. I think one barrier that many software engineers face in transitioning to an ML software role is the time, freedom, flexibility, et cetera, to do that retraining. It does take a while. That’s part of why I suggest that folks who can’t take the time off try to start gaining machine learning skills in their current role or their current organization.
Catherine Olsson: Maybe they don’t have the runway, or they have other obligations. But there are ways to start seeing how machine learning fits for you if you’re already in a software role, because many organizations want to be using more machine learning. If you go to your manager and say, “Hey, I want to retrain into machine learning so that I can apply it to our such-and-such pipeline,” I bet many managers will say, “Oh, yeah, great. Go take these trainings. Go learn this stuff.”
Catherine Olsson: I think even if you find it hard to imagine taking three months off and studying machine learning full time for those months, you might be able to find a way to work that into your current role. I think looking for those opportunities could help more people find that sort of path. But, if your current situation doesn’t give you that flexibility, again, I have a “don’t blame yourself” thing here. If you don’t have the luck, or if you don’t have the flexibility, transitioning into machine learning does take some of that flexibility that not everyone has the privilege to have.
Robert Wiblin: I think a common thing among software engineers or software developers listening to this will be that they feel underconfident. I see this a lot: people who are too scared to make this transition. I guess for that reason, I’m looking for concrete things that a substantial fraction of listeners might have that would say, yes, this is really a sign that you could do this and there’s a good chance of success. Is there anything, I don’t know, even academic results that most people will have had at some point, that’s an indication of capability?
Daniel Ziegler: It’s hard to say. I actually felt quite underconfident, basically until I got the job offers from MIRI and OpenAI. I really was not sure that I was going to be able to make this transition well and land in a position that I thought was really valuable. I definitely think I got pretty lucky there, and, yeah, I absolutely wasn’t sure. I was in a pretty privileged position to be able to spend a couple of months trying it. I do think it was a good sign that I had a strong computer science background in general, and I had some research experience, even in a different field, that had gone pretty well. Those certainly are good signs, but they’re not entirely necessary, either.
Catherine Olsson: I’d just like to emphasize that machine learning is not that hard. It sounds really intimidating. It’s just not that hard. I’ll also make a plug for the fast.ai course. There are a lot of MOOCs or online courses out there, but that one seems to be particularly well tuned for getting people hands-on experience and hitting the ground running quickly. For folks who are already software engineers, that is often a good intro, rather than jumping in through the math. There’s also an emphasis on giving folks common-sense understanding: ways to build in sanity checks, understand what’s going on, or work incrementally. I think the advice to try to re-implement papers can often tangle people up, because the incremental path is not necessarily clear: how do I start, and how do I know that I’m making progress? So if that feels like a barrier to you, then going to one of these courses that gives you that scaffolding for working incrementally toward something could be helpful.
Daniel Ziegler: One thing I also wanted to mention is that Josh Achiam, one of the researchers on the safety team, like I mentioned, is actually working on a collection of resources for spinning up in deep reinforcement learning in particular, which is relevant to at least what the OpenAI safety team does. I think we’ll be able to link to at least one of those documents, the list of key papers in deep reinforcement learning. Then I think he’s also going to have some more resources giving advice, some example code, and a more gradual path into actually being able to do deep reinforcement learning research or engineering.
Catherine Olsson: One piece of advice that I would strongly encourage folks to do if they’re strongly considering moving into this kind of path is write out your plan of what you’re going to do, then show it to someone who works in one of these roles. Be like, “Would this plan get me where I want to go?” Rather than just embarking blindly and hoping that you’re doing the right set of things. As I mentioned earlier, I think folks often charge into these enormous reading lists and feel like they need to read these 12 different books or get good at all of these different fields before they can even enter.
Catherine Olsson: I think if you try and write down your plan, then show it to someone, they’ll be like, “You don’t need to do half this, or two thirds of this is not on the critical path.” The critical path is much shorter than you might think, but if you’re outside that role, you don’t necessarily have the insight of which pieces are more or less essential. So, yeah. Whatever job you’re trying to get, I’m sure people there would give you some feedback if you’re like, “If I study these things, or if I can do these tasks, am I on the right track?”
Daniel Ziegler: Yeah, and I think it’s the case that the things you end up needing to use are actually a pretty limited set. People might have the feeling that they need to go study all of statistics or anything like that, but on a day-to-day basis I’m just drawing on a very small handful of things: the basics of probability, random variables, expected value, variance, bias and unbiased estimators, things like that. It doesn’t go that much beyond that.
Daniel Ziegler: You should have a pretty good understanding of linear algebra, but you don’t need to know what all the different possible decompositions are, stuff like that. You need to know enough multivariable calculus to be able to take gradients, and maybe Lagrange multipliers will be useful sometimes, but you don’t need to be able to do complicated surface integrals. There are some prerequisites even just on the fundamentals, but you actually end up needing a pretty limited subset.
Catherine Olsson: I’d say that a conceptual understanding of a shallow set of mathematical tools will take you a lot further than a mechanical understanding of a deeper set. If you know what a gradient really is, that’s way more important than being able to calculate one, because TensorFlow will calculate gradients for you. But if you’re stuck with a problem where you’re like, “Gosh, my generator isn’t learning when the discriminator has taken more steps. Why might that be?”, then you need to be able to ask, “What gradient do I need to inspect to know where the information has stopped flowing? Are some of these units saturated?” If you don’t know what a gradient is, it’s not going to occur to you to say, “Oh, if I measure this one gradient, then I can diagnose the issue.” Being able to debug in this conceptually fluent way is a lot more important than having a big tool kit of random disconnected facts.
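To make that concrete, here is a minimal sketch, in the TensorFlow 1.x style in use at the time of this episode, of the kind of gradient inspection Catherine is describing. The toy generator, its stand-in loss, and all the variable names are illustrative assumptions, not code from any particular project.

```python
import numpy as np
import tensorflow as tf

# Toy "generator" network; in a real GAN the loss would come from a
# discriminator, but a stand-in loss is enough to show the inspection pattern.
tf.reset_default_graph()
noise = tf.placeholder(tf.float32, [None, 8], name="noise")
with tf.variable_scope("generator"):
    hidden = tf.layers.dense(noise, 32, activation=tf.nn.relu)
    fake = tf.layers.dense(hidden, 2)
gen_loss = tf.reduce_mean(tf.square(fake))

# One gradient tensor per generator variable. In general, a None entry means
# no gradient path exists at all, and a tiny norm suggests saturation or some
# other place where information has stopped flowing.
gen_vars = tf.trainable_variables(scope="generator")
grads = tf.gradients(gen_loss, gen_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    grad_values = sess.run(grads, {noise: np.random.randn(16, 8)})
    for var, grad in zip(gen_vars, grad_values):
        print(var.name, "gradient norm:", np.linalg.norm(grad))
```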
Daniel Ziegler: Yeah, absolutely.
Robert Wiblin: Yeah, do you want to list any of the other things that are on the core pathway that people should definitely be aware of, or things that people should essentially skip?
Catherine Olsson: One thing that I’ve enjoyed doing is looking at open source implementations of something that I’m trying to understand, and making sure that I understand why they made the choices they did in every line of the implementation. In fact, I’ve had more luck with this than working from the papers themselves, because papers often leave out really important tips and tricks. If you just implement what the paper said, you’re actually missing something important in how they got their results. Whereas if you go to GitHub, you can ask: well, why did they normalize it in this way? They didn’t say in the paper how they normalized, but actually that particular normalization was really crucial.
Catherine Olsson: So, going line by line and making sure you understand what particular choices they’re making can give you an index, a list of things you need to understand. You’re like, “Why are they using this distribution? What is this particular modeling choice doing for them? Why did they train the discriminator for more steps than the generator? What’s up with that? Do you always have to do that?” I think that kind of approach tells you what’s on the critical path to understanding this algorithm, as opposed to starting from a textbook and hoping that it’ll lead to something.
Daniel Ziegler: Yeah, I want to echo that. I’ve suggested re-implementing papers a couple of times already, but I think it’s important to keep in mind that it is really hard to do if you don’t have something more concrete to work with. So, when I was implementing deep RL papers, I did end up looking at the OpenAI Baselines implementations quite a bit. I think when Josh Achiam [01:33:09] releases his Spinning Up package, that will have even nicer, cleaner, more educational code that will be really useful for people to look at. That definitely makes it more feasible.
Daniel Ziegler: There’s just random stuff that people do to get their machine learning to work that ends up making a huge impact. When I was working on my PPO implementation, actually together with a housemate of mine, we realized we just couldn’t get it to perform nearly as well as the numbers from OpenAI Baselines. When we started looking at the differences, we realized that the exact way that OpenAI Baselines was pre-processing the input frames on Atari, converting them to grayscale, taking the max over pairs of consecutive frames, stacking the last four frames, all these random little details made a huge difference. An even bigger difference than the actual bugs in our code that we fixed.
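For readers wondering what those “random little details” look like in practice, here is an illustrative sketch of the kind of Atari frame preprocessing being described: a pixel-wise max over consecutive frames, grayscale conversion, resizing, and frame stacking. This is a reconstruction of the common recipe under stated assumptions, not the actual OpenAI Baselines code; the function names and the 84x84 size are assumptions.

```python
import numpy as np
import cv2  # opencv-python

def preprocess(frame, prev_frame):
    """One common Atari preprocessing recipe (illustrative; exact details
    vary between codebases and can matter a lot for final performance)."""
    # Pixel-wise max over two consecutive frames removes sprite flicker.
    maxed = np.maximum(frame, prev_frame)
    # Convert to grayscale and downsample.
    gray = cv2.cvtColor(maxed, cv2.COLOR_RGB2GRAY)
    return cv2.resize(gray, (84, 84), interpolation=cv2.INTER_AREA)

def stack_frames(processed_frames, k=4):
    """Stack the k most recent processed frames along the channel axis,
    giving the agent a short history to infer velocities from."""
    return np.stack(processed_frames[-k:], axis=-1)  # shape (84, 84, k)
```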
Daniel Ziegler: It is the case that if you want to actually get good performance, you have to just look at the tricks that other people are doing and draw from that.
Catherine Olsson: One thing I’ll say to drive home the point that different research areas take different skills: I worked on environments for reinforcement learning agents, so I know a lot about their properties, like what sort of frame rate or step rate they generally exhibit, and which algorithms are faster or slower. But I haven’t implemented PPO. I don’t actually know at that level of gory detail what the tips and tricks are. I know it at a high level, but it hasn’t been on my critical path to do that. Clearly, here’s at least one research engineer in machine learning who has not implemented PPO, and I’m doing just fine. It’s not been on the critical path of the projects that I need to do, but I know how I would go about learning it if I needed to.
Daniel Ziegler: Yeah. If you do want to take a path of re-implementing a bunch of ML papers, I think it’s not actually super critical exactly which set of papers that is. I chose deep reinforcement learning because that’s what Dario, the team lead of the safety team, suggested to me, and it’s some of what the safety team works with, but it probably would have been perfectly fine to replicate a whole bunch of papers about GANs. The important thing was that I learned what the workflow was like, learned TensorFlow, and figured out whether this was something I could do. Basically, I gained the skill of taking what’s written in a paper, or some abstract idea that someone’s come up with, and implementing it. I’m certainly not just working with reinforcement learning day to day now, and that’s absolutely fine.
Catherine Olsson: I also want to emphasize something that you mentioned earlier, [Daniel 01:35:37]: frustration tolerance. Machine learning is, as I said, not that hard, but it’s very frustrating sometimes, where things don’t work for mysterious reasons. Particularly for those of us who come from a software engineering background, we’re used to a style of debugging where you can trace the execution of the program step by step and figure out what went wrong. Whereas in machine learning, if the number comes out too big, there’s no single line at which it was the right size before that line and too big after that line.
Catherine Olsson: So, you need a different set of tools, which are learnable, but it takes persistence. So, if you’re finding that something just doesn’t work, don’t blame yourself. Blame machine learning for being a terrible debugging context. It’s not you. It’s not your fault. It just takes more persistence, maybe even more creativity than traditional software debugging does. If you’re finding that you have mysterious bugs, that’s not a sign that you’re bad at this.
Daniel Ziegler: Yeah, absolutely. My housemate and I spent a week debugging a random issue in our PPO implementation that ended up being because a particular operation in TensorFlow, the softmax_cross_entropy_with_logits function, just doesn’t propagate gradients backwards through one of its arguments. By the time we were coding this, the next version of TensorFlow had deprecated that version of the function and fixed the problem, but we just didn’t know what was going on.
Daniel Ziegler: We had to spend a lot of time isolating different things and looking at all our different metrics. We realized in this case it was our entropy bonus that wasn’t working at all because of this. Eventually, we cranked the entropy bonus all the way up, hoping the entropy would stop collapsing, but it didn’t do anything at all. That’s when we realized the problem must be there, but we spent a week with this bug.
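For the curious, this gotcha matches a documented TensorFlow quirk: the original softmax_cross_entropy_with_logits op only backpropagates into its logits argument and treats labels as a constant, while the later _v2 op backpropagates into both. If you compute a policy’s entropy by feeding the distribution in as its own labels, the logit gradient of the old op (softmax minus labels) collapses to zero, so the entropy bonus does nothing. The snippet below is an illustrative sketch of that check under those assumptions, not their actual PPO code.

```python
import tensorflow as tf

tf.reset_default_graph()
logits = tf.Variable([[1.0, 2.0, 0.5, -1.0]], name="logits")
probs = tf.nn.softmax(logits)

# Entropy written out by hand: gradients flow through every term.
entropy_manual = -tf.reduce_sum(probs * tf.nn.log_softmax(logits))

# Entropy via the cross-entropy op, feeding the distribution as its own labels.
# The original op treats `labels` as constant, so its logit gradient
# (softmax(logits) - labels) is identically zero here.
entropy_via_op = tf.reduce_sum(
    tf.nn.softmax_cross_entropy_with_logits(labels=probs, logits=logits))

grad_manual = tf.gradients(entropy_manual, [logits])[0]
grad_via_op = tf.gradients(entropy_via_op, [logits])[0]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print("manual entropy gradient:", sess.run(grad_manual))  # non-zero
    print("via old op:             ", sess.run(grad_via_op))  # all zeros
```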
Robert Wiblin: This makes absolutely no sense to me, but Catherine’s cracking up.
Catherine Olsson: So familiar. Even my sanity test, the simplest sanity test, doesn’t even work.
Daniel Ziegler: It can be pretty rough.
Catherine Olsson: Yeah, sanity tests like that have been crucial to my ability to make progress, by the way. Can this thing even memorize the training set? Let me just make sure that it even has enough capacity to do that. Okay, no. It didn’t even do that. I have a much worse problem. So, starting with that sort of … Okay, if I crank the bonus up all the way, I crank it up to 1,000, it did nothing. Okay, is it even-
Daniel Ziegler: Something’s not working.
Catherine Olsson: Is it even connected?
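Catherine’s “is it even connected?” check is worth spelling out, because it’s one of the cheapest sanity tests available: before debugging anything subtle, verify the model can memorize one tiny fixed batch. Here is an illustrative sketch with a made-up toy model and random data; the architecture, sizes, and step counts are assumptions, and the point is the pattern rather than the specifics.

```python
import numpy as np
import tensorflow as tf

# Toy classifier and a single small, fixed batch of random data.
tf.reset_default_graph()
x = tf.placeholder(tf.float32, [None, 10])
y = tf.placeholder(tf.int32, [None])
hidden = tf.layers.dense(x, 64, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, 3)
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

batch_x = np.random.randn(32, 10).astype(np.float32)
batch_y = np.random.randint(0, 3, size=32)

# If the loss can't be driven close to zero on one memorized batch, something
# is broken upstream of "not enough data" or "bad hyperparameters": wrong
# labels, a disconnected graph, gradients not flowing, and so on.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(2000):
        _, loss_value = sess.run([train_op, loss], {x: batch_x, y: batch_y})
    print("loss after trying to memorize one batch:", loss_value)
```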
Robert Wiblin: What kinds of people do you think should just suck it up and do the PhD for a whole long time versus people who should take more like your path?
Catherine Olsson: It’s not an either-or. I think my more pure software engineering role at OpenAI could have been a great jumping-off point for a PhD. If I had spent a year building environments for reinforcement learning, learned a bit about how RL research scientists think and what sort of problems they’re working on, and then said, “Okay, cool. I now want to switch from building environments for this research problem to doing the research,” and if I’d found a good mentor at a university, that would have been a fantastic way to move into a PhD. So, I don’t think it’s either-or.
Catherine Olsson: Right now, I have the level of research mentorship that I’m looking for in my current position. I don’t really have a motivation to shift over to the PhD, but if you’re at a place where you think the thing you really need is to work extensively under an advisor or mentor whose research intuition is incredibly sharp, and to learn what they know about how to do research, then a PhD is a great place to do that. If you think you could get by with just a year of that, try a residency or fellowship.
Catherine Olsson: If you think you want to just dive into the landscape as quickly as you can, maybe a software engineering role at one of these orgs is going to do that for you. I imagine if I were a software engineer working on TensorFlow right now, not even on particular algorithms, but actually just the machinery the algorithms run on, I would learn plenty about how modern deep learning works. That could give me a sense of which problems in that space would be something I’d want to devote a few years of a research career to.
Daniel Ziegler: Yeah, I definitely agree with that. I’m definitely much better equipped now to go back to a PhD than when I actually tried to go for one. The way I would decide whether to do that, in addition to what Catherine said, is whether I was at a point where I had my own ideas and research questions that I wanted to pursue, and whether I thought academia was the right place for them. With that kind of vision in place, I think PhDs can make a lot of sense.
Catherine Olsson: Again, I’ll emphasize the “academia as hackerspace” metaphor: if you’ve got a project that you’re really excited about, but you’re having trouble working on it as a side project, or your job doesn’t give you the flexibility to work on it, or you would like to take a year off to work on it but you don’t have the funding, there are many different vehicles. I think academia is one where you can just take that project and work on it for several years with computational resources and mentorship resources.
Catherine Olsson: Mentorship is another really crucial one. Everything I’m good at, I’m good at because there was someone I once looked up to who I could emulate in that skill. I think finding good mentorship is crucial, and you can do some of that in industry, some of that in academia, and some of that in fellowships. There are a lot of ways towards that. Particularly for folks whose motivation is something in the AI safety vicinity, it’s not necessary that you only work on safety-related problems. In fact, I think that’s harmful, because the cross-pollination from other technical areas that are more established is a really crucial source of inspiration, techniques, approaches, et cetera.
Catherine Olsson: If you limit yourself just to things that sound safety relevant, I think you’re going to miss out on all of that cross-pollination. If you were interested in doing a whole PhD in theoretical computer science that’s unrelated to AI safety, and then going to work on safety, that’s likely to go well for you, I think, because you’ve gained deep research intuition, how to do research in general, in a field where the research paths or trajectories are more clearly paved. Then all of those auxiliary intuitions that you’ve developed will find some way to transfer over.
Daniel Ziegler: Yeah, that’s exactly what [Paul Christiano 01:42:08] did, actually. He did a theoretical CS PhD, and then started working on AI safety research. I think maybe to summarize that, it’s pretty hard to learn how to work on AI safety and how to do research in general at the same time because AI safety is such an open field right now without really a very well developed paradigm. It can definitely make sense to do something else first, then apply that knowledge.
Catherine Olsson: Relatedly, there’s a blog post by Andrew Critch that I’ll reference here, pointing out that it can be harmful to feel pressured to contribute usefully to AI safety immediately. Because then you can accidentally trick yourself, or tell yourself a story that, oh yes, the thing I’m doing right now is extremely relevant to AI safety, and it’s not. I think it is and should be viewed as fine to spend time working on whatever it is you personally need to learn or develop in order to get the foothold that you need to contribute. It’s fine if you’re not contributing yet. I’ll emphasize, I don’t think I’ve contributed in any particularly impactful way yet, other than by being in conversations and, I think, contributing helpfully to those conversations. I don’t think my technical output has yet been that impactful, although it’s clear to me that it’s been the stuff I need to do to get myself on a path to having a substantial impact.
Robert Wiblin: So, Daniel, your story of spending six weeks reading papers and then getting a job seems particularly extreme. Obviously there’s some luck in this, being in the right place at the right time, but do you think other people could make that transition as quickly? It’d be incredible if we could fill out OpenAI with tons more research engineers, adding new ones every six weeks.
Daniel Ziegler: I don’t know. It’s hard to say. I don’t think that many people have tried it. I will say that my knowledge was definitely pretty shaky at the end of those six weeks. There were lots of ML basics that I sort of skated by on in the interviews without really understanding them. I think it also helped that I was able to do this with a housemate of mine. We literally spent 12 hours a day for some of the weeks just cranking out code, trying to debug our implementations, and reading papers. I was in a particularly good environment, and I would expect it by default to take somewhat longer. I just want to emphasize again: what was really important about what I did was that, as quickly as was feasible, I aimed to practice exactly those things that I knew I needed to be able to do. That worked well for me.
Catherine Olsson: There’s also an important component of learning on the job the things that you need. When I joined Google Brain, I didn’t really know TensorFlow. I knew a bit, but definitely not enough to do what they hired me to do. The attitude was: okay, learn TensorFlow, then do the stuff you have to do. We know from your background that you’re capable of that, so just do it. I think many jobs have that flavor: if you’ve demonstrated that you can learn stuff quickly and you’ve got the prerequisites, they’ll let you run with it. That’s another thing I’d point out about the strategy of learning machine learning as part of your current role. Clearly that doesn’t require already knowing it, because you’d go to your manager and be like, “Hey, can I learn this thing and then apply it to our products?” I think many places are open to that sort of thing.
Robert Wiblin: Yeah, I guess you probably knew personally some people who were in OpenAI, so they could give you some advice on what specifically you need to know.
Daniel Ziegler: Yeah, that definitely helped as well. It was definitely essential. Really, it was just that Dario sent me Josh’s document of key papers in deep RL and told me, “Oh, yeah. You should just read all these and implement some of them.” I just went and did that. Having that clear guideline definitely helped a lot.
Robert Wiblin: Hopefully, we’ll be able to stick up links to all of these things that we’re talking about here.
Catherine Olsson: Yeah, and again, I keep harping on this, but I’d emphasize that different agendas have different prerequisites. So reach out to the group that you would be trying to join and say, “In order to do what you do, what should I do?” Then do what they tell you. Actually do just that.
Robert Wiblin: It sounds like organizations in this space are fairly willing to hire people and then have them learn a lot of the skills that they need on the job. Is that fair to say?
Catherine Olsson: I think it depends on the group.
Daniel Ziegler: Yeah, it definitely depends. I think we are hiring for a general software engineer on the safety team, but for the research engineers that we’re hiring, you are going to be asked to code a bunch of TensorFlow in your interview and talk about machine learning and more or less know what you’re doing. So, definitely you do need to have a pretty good level of knowledge coming in.
Catherine Olsson: I think maybe the right framework here is that anyone who comes into one of these roles has lopsided skills. Any particular missing skill is not going to be the end of the world. I think this is true for anything within machine learning: maybe now people are starting to graduate from undergrad with degrees focused on machine learning, but until now, that really wasn’t true. So, anyone who ended up in this space came with a conglomeration of related background knowledge. For me, I had a bunch of research experience in computational neuroscience. Is that machine learning? Well, I was doing model fitting. I can tell you about the trade-offs between different kinds of Bayesian model fitting. Is that what I do now? No, but it means I had some statistical maturity, even though I hadn’t written TensorFlow. I’d written some Theano. Close enough. I think if you’ve got enough of the pieces, then you can pick up whatever you happen to be missing, and everyone’s going to be missing or strong at a different subset.
Daniel Ziegler: Part of it’s just proving to the organization that you’re interviewing for that you do have the capability to learn, to get really good at something. You have to be good at something that’s vaguely machine learning related, but it doesn’t matter that much if it’s exactly what you’re going to be doing on the job.
Catherine Olsson: In terms of interviews, I want to emphasize not to pretend to know anything you don’t know, because they’re just going to ask you about it, then you won’t know it. So, be really upfront about what your level of experience has been and what you want your interviewers to hold you accountable for versus what you don’t know.
Robert Wiblin: How many research engineers are needed by these kinds of organizations? Is it foreseeable that you’re going to be full up and just have hired lots of people soon? Or, is it just going to be a constant need for more and more for the foreseeable future?
Daniel Ziegler: On the safety team, I think we’re definitely planning to continue to hire. If we manage to hire fast enough, we might raise the bar even higher, but we’re planning to grow fast. I was the first engineer. I started in May. We just had a new engineer start two weeks ago, Jeff. Next week, we have another research engineer coming. We’re planning to basically continue that trajectory and continue to hire as quickly as we find good people. That may change at some point, but that’s how things look right now. I think it’s always going to depend on the particular team and project.
Catherine Olsson: Again, across the landscape of things under the safety umbrella, different problems are going to have different personnel needs, and that’s going to evolve over time. I think being in a position where you’re prepared to jump in on one of these problems if a position opens up is a great boon to the field. If a new problem suddenly needs five people hired to spin up a team, and there are five people who have learned the relevant background and are ready to jump in or waiting in the wings, that’s going to be fantastic for our ability to move quickly on these problems as fields start to open up. My team is bottlenecked on mentorship ability.
Catherine Olsson: There’s plenty more we could do, but the folks who already are skilled at doing this kind of work are spending as much time as they want to spend on helping, advising, and training. I’m trying to spin up as quickly as I can on helping, advising, and training others in this. But I think that just varies from corner to corner. I get the impression from talking to Dario that OpenAI also could use some management, project management, organization stuff. I think it’s both direct technical skill and the willingness to pass on that skill and train others that’ll allow us to turn the pipeline of people who want to contribute into folks who are able to get their hands on problems and contribute.
Daniel Ziegler: Yeah, now that the team is growing, that’s going to be absolutely essential.
Robert Wiblin: Are there any kind of intermediate organizations that you think would be good stepping stones for people who want to move from software development in general to working at your organizations?
Catherine Olsson: One thing I might say is that startups are often willing to take chances on people and give you a chance to just dive in. I think the fast paced experience of a startup can be really good training in how to iterate quickly and how to make sure that you’re making progress. Maybe consider working at a startup that’s doing something ML related as a way to move from a more structured traditional software engineering thing.
Robert Wiblin: You mentioned a [MOOC 01:51:03] earlier. That’s a good way to learn.
Catherine Olsson: Yeah, [fast.ai 01:51:05].
Robert Wiblin: Yeah, are there any more formal academic programs that you’ve heard on the grapevine are particularly good that you might want to draw attention to? PhDs, master’s, even undergrad, places where they do good courses?
Catherine Olsson: In terms of undergrad places, I think if you look at the list of places that are top in CS, that’s also likely to be a list of places that are top in ML, with some modifications that I don’t currently have a good enough picture to recommend. But one thing I’ll point out is that different undergrad programs teach AI or ML with a different focus. Some undergrad AI courses are still focused on good old-fashioned AI, rules-based systems, that type of thing. Some machine learning courses are still focused on logistic regression and [SVMs 01:51:49] and are not going to get you to deep learning.
Catherine Olsson: If your career goal is to contribute to deep learning, like a traditional ML or AI class or program may not prepare you for that. The top professors in any given subfield are going to be distributed very unevenly across universities, so if you’re looking at the master’s or PhD level, look for specific professors. There is a risk with that. I’ve often heard the advice that if you’re going to go into a PhD program make sure it’s some place that there’s at least two or three people that you’d be happy to work with. You might say, “How do that jive with actually just look for specific professors?”
Catherine Olsson: I’d say, if you’re applying to a place where there’s only one professor you’d be happy to work with, then if you don’t get to work with that person, leave. Just leave. That’s a fine plan, though. So if you’re going to a PhD program, you should do it one of those two ways. Either have a particular person in mind, and if it doesn’t work out, then the PhD program just didn’t work out for you. Or have a small collection of folks who would be able to provide you with the mentorship that you want, so that if one of them doesn’t work out, you can fall back on another.
Robert Wiblin: Do you have any good failure stories that people could learn from, people who’ve tried to get into this, and it just hasn’t worked out for one reason or another? You don’t have to name names, but …
Daniel Ziegler: Yeah, so, a couple of people did trials on the OpenAI safety team, and these were absolutely smart people and good software engineers, but they just realized that machine learning was too much of a pain for them and not what they wanted to be doing. So, they decided not to keep going there. I think that’s definitely an acceptable outcome. One of them works for MIRI now.
Catherine Olsson: I think the failure stories that I see most often are people who don’t have a sufficiently clear and targeted plan. I’ve been really harping on this piece that the critical path is probably shorter than you think, and folks who don’t have a highly specific target in mind, and just want to, quote, get good at ML and, quote, work on AI safety, will end up reading a bunch of stuff with no direction. I think having direction, even if it turns out to be a bad direction, will crystallize your preparation towards something concrete.
Catherine Olsson: That’s, I think, the biggest difference between folks who end up just reading a lot of stuff and not being prepared for any particular job, versus folks who have picked up a set of skills that makes for a good stepping stone. It is fine and good to read a lot for context. Just don’t confuse that with reading for job preparation. Having a broader context of what’s going on is incredibly valuable, but it’s not the same thing as developing the skills that’ll get you through the interview and into the role.
Robert Wiblin: I guess kind of here, my framing has perhaps been bad, that there’s research engineers, and then research scientists. Are there other roles, other labels that we should have in our head of different kinds of positions?
Catherine Olsson: I think focusing on what the skills involved are, maybe? I probably should have said this earlier, because I’ve found this framework useful in the past. I think there are approximately four different buckets of skill needed to do work in ML or deep learning that’s related to this stuff. One is ordinary software engineering, with all its subdivisions of course: building environments for reinforcement learning, building a pipeline for human demonstrators to submit feedback, building a dashboard for researchers to view their experiments, that type of work.
Catherine Olsson: Two is machine learning implementation: take an idea for an algorithm and code it up in TensorFlow, or debug why your TensorFlow implementation is not producing the results that you want. Three is ML research direction: choosing which problems or approaches are likely to be relevant next. Then four is ML theory: prove a bound on what type of learning performance we can get under these information-theoretic assumptions about the data set. If you’re a research engineer, you’re probably going to be doing a lot of number two, the ML engineering work, plus some ordinary software engineering just to get your saved trained agents onto the right file system.
Catherine Olsson: Then, you’re going to be doing some of three, of picking which direction are going to be productive. If you’re a research scientist, you’re more likely to have skills in those later buckets, like more theory skills and more research direction skills. I think those four categories are going to be a better guide than any particular title, because any title has a blend of those. Any problem you’re trying to tackle is going to need a different mix of those. If you’re trying to just scale up deep RL agents to run faster and more parallel, you probably don’t need any theory at all. Whereas, if you’re trying to prove some impossibility theorem about adversarial examples, you’re going to need a lot more theory.
Robert Wiblin: What do you think of the agent foundations agenda from MIRI? Could you imagine working on that yourself? Are there people who would be capable of going both in the MIRI direction and in the research engineering direction? Or is it mostly quite different skills?
Catherine Olsson: Well, I’ll point out that [Nate 01:56:45] himself started as a Google software engineer, then trained in the sorts of mathematics that the agent foundations agenda requires, so I think folks who are abstract and technical thinkers can often pick up a different skill set there. It is a quite different agenda. They’re doing mathematics research. I know they’re also hiring software engineers, but I don’t know how the software engineering they’re working on connects with their agent foundations stuff, if at all. You should ask them if you want to apply for that sort of thing. Personally, I think the agent foundations work is usefully ironing out glitches in humanity’s current models of what’s going on with intelligence. Depending on what you think humanity is going to do or build, and in what order, in terms of building AI, that may or may not end up on the critical path. But they’re clearly doing some cool mathematics, and if you like cool mathematics, you should consider doing it.
Daniel Ziegler: Yeah, I think that sounds about right. I think it’s absolutely appealing to try to take MIRI’s approach and have a formal foundation for what the heck agents are and how we can try to understand what their goals are and try to specify what their goals are, but at the same time, I think at least I made the decision to work on more concrete stuff that we can actually experiment with today and is more connected to the kinds of techniques that at least in the next couple decades are more likely to be applied.
Robert Wiblin: I’ve just got a quote in front of me in the notes from Christiano, where he says a good litmus test of whether someone could be a good research engineer is whether they can properly implement a particular model from a paper within some set number of hours. To what extent do you think that’s a good litmus test? It sounded earlier like you said it’s very unpredictable how long it might take to replicate something.
Daniel Ziegler: It’s a little hard to give a specific timeframe, because it depends so much on exactly what standard you’re aiming for. I guess I can say a little bit about what it was like when I was trying to do this for deep RL papers. I was working with my housemate, and we were doing a lot of pair programming, and we also had very different sleep schedules, so we probably had 16 hours of coding or debugging a day between us. With that level of work, we typically spent about two days on papers where we just wanted to get the basics working, and over a week on some of the papers where we were really trying to match the performance of Baselines or the paper and tune them a lot. But it’s hard to say. Depending on exactly how much code you look at, what level of performance you’re trying to achieve, and which thing you’re doing, the complexity of different papers varies a lot, so you’re going to get pretty different numbers. But maybe that’s a somewhat useful ballpark.
Catherine Olsson: I remember one of the engineers at OpenAI when I was there was working on a good, clear implementation of an algorithm. I forget what he was working on, but it took him a whole month to iron out all the bugs. So if you’re like, gosh, let me just do this cheap, quick test of implementing this paper, it’s not going to be a cheap, quick test. It might take you a whole month, and that’s not necessarily a bad sign. Really ironing out all of the bugs and details can take quite a long time.
Daniel Ziegler: Yeah, even after a week, we definitely weren’t actually at a point where we were quite matching the performance of our baselines or papers.
Catherine Olsson: That’s part of why I emphasize that learning this stuff takes time, either to do it as a side project and give up all your other hobbies for a month, or to quit your job if you’ve got the runway for it. It’s not the kind of thing that you can get a good signal on just with a weekend here and there. It takes more time and determination and patience than that.
Robert Wiblin: Are there any other pieces of advice that you’d like to give to perhaps someone who’s a software developer at Google who is considering making a transition to being an ML research engineer and just needs to know exactly how to go about it?
Daniel Ziegler: One piece of advice is to try to reach out to people who are currently doing the kinds of work that you think might be valuable, and ask them how they got there and what advice they have. I think it can seem impenetrable or confusing from the outside to figure out what kinds of work there even are, or what reasonable paths look like. But I think people are pretty willing to spend at least some time helping people who are new and pointing them in a more useful direction. I am, definitely. Email me, [email protected].
Catherine Olsson: If you’re at Google, you could do a 20% project. There’s a bunch of research teams within Google AI that would be open to that sort of thing.
Robert Wiblin: All right, we’ve been going for quite a few hours, and we’ve got to go get some Thai food, have dinner together. Are there any kind of final inspiring things you’d like to say to someone who’s listening and might potentially be on the fence about whether to really take action on what we’ve been talking about today?
Catherine Olsson: Yeah, I would encourage you to write down an extremely concrete plan. What exactly are you going to tackle? What steps are you going to take to tackle that? Then send it to someone who works at one of these orgs and ask, “Is this a reasonable plan?” I bet they’ll pass it back with some small edits, but say it’s basically a reasonable plan. Then you can just do that.
Daniel Ziegler: Yeah, that definitely sounds like a good plan.
Robert Wiblin: My guests today have been Catherine Olsson and Daniel Ziegler. Thanks for coming on the podcast, guys.
Catherine Olsson: Thanks so much.
Daniel Ziegler: Thanks Rob.
Robert Wiblin: Before you go, a reminder that to help with that final point, Catherine has written a guide for us called: Concrete next steps for transitioning to ML Engineering for AI Safety. Don’t feel intimidated, just read it and follow the steps it lays out.
A reminder too that if you’d be interested in retraining to get an ML engineering position, but couldn’t support yourself during the time it would take to do so, look out for when the Effective Altruism Grants program next opens as it may be able to fill the gap.
If you liked this episode, you may well want to listen to #23 – How to actually become an AI alignment researcher, according to Dr Jan Leike. That’s in addition to the other 3 episodes I mentioned in the intro.
And finally, to keep operating 80,000 Hours needs to find out how it has influenced people’s careers or otherwise helped them. If this show, our coaching, or any of the articles on our website have helped you have more social impact, please head to 80000hours.org/survey and spend a few minutes to let us know.
We put a lot of time into making this show and it really does make a big difference to hear from you.
The 80,000 Hours Podcast is produced by Keiran Harris.
Thanks for joining, talk to you in a week or two.