Nick Beckstead on how to spend billions of dollars preventing human extinction
By Robert Wiblin · Published October 11th, 2017
What if you were in a position to give away billions of dollars to improve the world? What would you do with it? This is the problem facing Program Officers at Open Philanthropy – people like Dr Nick Beckstead.
Following a PhD in philosophy, Nick works to figure out where money can do the most good. He’s been involved in major grants in a wide range of areas, including ending factory farming through technological innovation, safeguarding the world from advances in biotechnology and artificial intelligence, and spreading rational compassion.
This episode is a tour through some of the toughest questions ‘effective altruists’ face when figuring out how to best improve the world, including:
- Should we mostly try to help people currently alive, or future generations? Nick studied this question for years in his PhD thesis, On the Overwhelming Importance of Shaping the Far Future. (The first 31 minutes is a snappier version of my conversation with Toby Ord.)
- Is clean meat (aka in vitro meat) technologically feasible any time soon, or should we be looking for plant-based alternatives?
- To stop malaria, is it more cost-effective to use technology to eliminate mosquitos than to distribute bed nets?
- What are the greatest risks to human civilisation continuing?
- Should people who want to improve the future work for changes that will be very useful in a specific scenario, or just generally try to improve how well humanity makes decisions?
- What specific jobs should our listeners take in order for Nick to be able to spend more money in useful ways to improve the world?
- Should we expect the future to be better if the economy grows more quickly – or more slowly?
We also cover some more personal issues like:
- Nick’s top book recommendations.
- How he developed (what is in my view) exceptional judgement.
- How he made his toughest career decisions.
- Why he wants to see less dilettantism and more expertise in the effective altruism community.
Don’t miss it.
Highlights
Dr Beckstead’s view, after studying the topic in his philosophy PhD thesis, is that we should care about future generations about as much as the present generation. Because few people are trying to do things that specifically benefit future generations, there are many neglected and important things to fund in this area.
A distinction Nick developed is that when trying to improve the future people can go for targeted changes that will be very important in a narrow range of scenarios, or general changes that are useful in a broad range of cases. Over time, Nick has become less sceptical about targeted changes.
Some archetypal paths that Nick is particularly keen for more people to take include:
…[being] really interested in deep learning, very quantitatively oriented, caring about AI safety, and just generally crushing it in their study of that, I think that’s an archetype that’s really useful. I’d encourage that person to apply for the Google Brain Residency Program as a way of learning more about deep learning and getting into the field. I think it could go more quickly than going through a PhD. It’s a quick way into the industry.
The other category was AI strategy work. … They need to be very sharp, and they need to have a good judgment, and they need to be interested in thinking about how institutions and politics work. I would love to see more people getting jobs in the US government that could be relevant to AI and to other cause areas.
In biosecurity…I think there’s two paths; one of which is the side of more policy, and one, which is more like learning the science. Getting a PhD in some area of biology, perhaps focused on immunology, or vaccine R&D would be a natural place to go, or getting a PhD, or doing a fellowship at one of the places that do work on biosecurity, perhaps the Center for Health Security that Open Phil funds.
Another category would be jobs in the Effective Altruism Community. I don’t think there’s a super natural background for that, other than majoring in a serious discipline, and studying it seriously, doing well, and thinking about the issues that the Effective Altruist Community cares about and getting to know it, and debate it in person I think would be my advice for that category.
Nick’s top audiobook recommendations include The Better Angels of Our Nature by Steven Pinker, The Power Broker by Robert Caro, Moral Mazes by Robert Jackall, Steve Jobs by Walter Isaacson, Science in the Twentieth Century: A Social-Intellectual Survey by Steven Goldman (The Great Courses), The Moral Animal by Robert Wright, Surely You’re Joking, Mr. Feynman by Richard Feynman, with an honorable mention for the podcast EconTalk by Russ Roberts.
Articles, books, and other media discussed in the show
- Job vacancies to work with Nick at Open Philanthropy.
- On the Overwhelming Importance of Shaping the Far Future. 2013. PhD Thesis. Department of Philosophy, Rutgers University. Population ethics is discussed in chapter 4.
- Chris Meacham paper on population ethics and person affecting views: Person-affecting views and saturating counterpart relations
- Nick’s suggested audiobooks
- Calculator for whether it’s better to speed up or slow down growth: Differential technological development: Some early thinking
- Also discusses speeding up growth versus other approaches: Making sense of long-term indirect effects – Robert Wiblin, EA Global 2016
- Summary of Open Philanthropy’s views on various global catastrophic risks including a summary spreadsheet
- Nick Bostrom’s website and Robin Hanson’s blog
- Cause report on animal product alternatives
- Broad versus narrow approaches to shaping the long-term future
- Stubborn Attachments by Tyler Cowen.
- EconTalk episode ‘Tyler Cowen on Stubborn Attachments, Prosperity, and the Good Society’, that outlines the thesis of Stubborn Attachments.
- Grant for novel ways of eradicating malaria
Transcript
Hey podcast listeners, this is Robert Wiblin, director of research at 80,000 Hours.
I recorded this episode with Nick Beckstead at the Effective Altruism Leaders Forum in San Francisco last month. Nick is one of the smartest people I know, so I was glad to get a few hours with him.
If you want to learn how people heavily involved in the effective altruism community think through problems, this is a good place to start.
If you listened to my conversation with Toby Ord a month ago, you can avoid some repetition by skipping the first 31 minutes.
As always you can apply for free coaching if you want to work on any of the problems discussed in this episode. You can subscribe by searching for 80,000 Hours in your podcasting software, with 80,000 as a number. The blog post with this episode has a full transcript and links to articles discussed in the show.
And now I bring you Nick Beckstead.
Robert Wiblin: Today, I’m speaking with Nick Beckstead. Nick is a Program Officer for Open Philanthropy. Previously, he studied Mathematics and Philosophy, completed a PhD in Philosophy at Rutgers University, and worked as a research fellow at the Future of Humanity Institute at Oxford University. A lot of his research focuses on the importance of helping future generations and how he might best go about doing that.
Nick, it should also be said, happens to be my boss, in a sense, because he’s a trustee of Center for Effective Altruism, which is the umbrella organisation which 80,000 Hours is a part of. Thanks for coming on the podcast, Nick.
Nick Beckstead: Thanks for having me. It’s good to be here.
Robert Wiblin: I’m hoping to have a pretty lengthy and wide-ranging discussion covering lots of topics that you’re an expert on and some that we’ve spoken about over the last couple of years. First, what kind of research are you doing at Open Philanthropy now?
Nick Beckstead: Right now, my time is split mainly between two categories, one of which is supporting biology grant making at Open Phil. We have a couple of scientists that work with us on this, and also Claire Zabel is working on it with us. Then, the other major part of it is grant making to support the Effective Altruism community, including the part of the Effective Altruism community that’s particularly interested in existential risk. Those are the two main things that I’m spending my time on. Then, I’m occasionally involved in other aspects of thinking about Open Phil strategy and thinking a little bit about philosophical frameworks for allocating funds across causes.
Robert Wiblin: What kind of philosophical questions?
Nick Beckstead: I guess, if you’re starting from first principles on that, you might ask questions like, what ethical framework are you going to use to evaluate how good it would be if you accomplish goals associated with different causes? Then, different frameworks, especially with regards to questions about population ethics would result in different evaluations of accomplishing the different goals that correspond to Open Phil’s causes.
Then, there’s questions about how you’re handling moral uncertainty, because you could assign probabilities to all those different moral frameworks. It becomes a question of what you do, given conclusions about what would be best according to each of these frameworks, and what probabilities you assign to all the frameworks, and how that gets turned into a decision about what to do. There’s philosophical debates about that kind of thing.
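(To make that kind of aggregation concrete, here is a minimal illustrative sketch. The frameworks, credences, and scores are invented for the example; they are not Open Phil’s actual figures or method.)

```python
# Illustrative only: score "how good would accomplishing this cause's goals be?"
# under several moral frameworks, then take an expectation over one's credence
# in each framework. All numbers are made up for the example.

credences = {
    "total_view": 0.5,        # credence that the total view is correct
    "person_affecting": 0.3,  # credence in a person-affecting view
    "time_discounted": 0.2,   # credence in a time-discounted view
}

# How good each cause looks *according to* each framework (arbitrary common units;
# whether such a common scale even exists is itself philosophically contested).
scores = {
    "existential risk reduction": {"total_view": 100, "person_affecting": 5, "time_discounted": 10},
    "global health":              {"total_view": 20,  "person_affecting": 20, "time_discounted": 20},
}

def expected_choiceworthiness(cause: str) -> float:
    return sum(credences[f] * scores[cause][f] for f in credences)

for cause in scores:
    print(cause, expected_choiceworthiness(cause))
```

The expectation step itself is trivial; the contested part, as Nick notes, is whether the verdicts of different frameworks can be put on a common scale at all.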
Robert Wiblin: Okay. We’ll come back to some of those questions in a minute, but first, what kinds of grants have you suggested at Open Philanthropy? Do you have a sense of how that’s gone so far?
Nick Beckstead: I think we mostly don’t know how the grants have gone so far, because almost all the grants that I’ve recommended and that have been made happened over the last year and a half. A lot of what we’re doing is funding scientific research or funding growth of the Effective Altruist Community. Many of those things don’t have very obvious short-term payoffs. I think we’ll probably be able to say more about most of these things in a couple of years, but right now, for the most part, I can’t really tally up many of the grants in terms of objective wins and losses.
If we were going to add up the grants made so far, the biggest bets from the science programme have been an investment in Target Malaria, which is an organisation that’s doing research aimed at developing gene drives for the elimination of malaria, provided that the strategy is agreed upon to be safe, and ethical, and approved by the communities that are affected and interested. An investment in Impossible Foods, which is an organisation that’s developing alternatives to animal products, and a grant to Ed Boyden’s lab at MIT. Those have been some of the biggest bets from the science programmes so far.
Robert Wiblin: What does Ed Boyden work on?
Nick Beckstead: This grant is supporting work on expansion microscopy and some techniques basically for getting better imaging and measurement of the state of the brain. It’s a neuroscience grant.
Robert Wiblin: Right, right, right. We’ll work through a bunch of those different focus areas later on, but a large focus of the grants that Open Philanthropy is making, and that you’re focused on in particular, is trying to improve the long-term future of humanity as a whole. You actually wrote your philosophy thesis on this; its title was On the Overwhelming Importance of Shaping the Far Future. It’s available to read online. Why should we worry so much about the long term?
Nick Beckstead: The reason we should think so much about the long term, I guess, if I was going to boil that down a lot: if you’re adding up the well-being or the utility of all the beings that ever might live, then you think about how likely it is that civilisation, or human-influenced civilisation of some sort, will be around for various periods of time, how large it might be, and how much utility there would be for each person at each part of time. You just are adding things up. That’s where almost all of the potential value is, is in the distant future.
It seems that there are some things that we can do now, particularly in terms of understanding and mitigating potential global catastrophic risk that have the potential to shape basically how large and good that future is. If you just zoom out a little bit and think about us as a species, we’ve been around for a couple of hundred thousand years so far. We’re on this planet that’s going to be habitable for several hundred million years. We’re in this universe that’s going to have stars burning for billions or possibly trillions of years, depending on how many of the stars you’re thinking about and exactly who you’re asking.
There’s just an overwhelming amount of potential value at stake if you think about the possible ways that that could play out, which, I think, on one hand includes our species not realising its potential, and maybe dying out too early if we don’t do everything right. I think it also, realistically, on the opposite end, includes capturing almost all of the possible value and building the best possible future with that giant expanse of resources in space and time.
Robert Wiblin: What are the implications of this perspective? I guess, one that you’ve mentioned is that we want to reduce global catastrophic risk. We don’t want to die out, because then we couldn’t do anything with the road ahead. Are there any other things that we should be thinking about?
Nick Beckstead: Yeah, I think that’s the most obvious implication. I think another possible implication, a framing I like to put on this problem, is to say: we have this giant amount of future resources, and suppose we’ve already avoided the global catastrophic risks; we’re going to reach a point where we go out and use all of them. Then, the question is, how exactly are they going to be used, and what is going to determine that? I think, at that point, you’re hoping that some wise choices are made at some point along the way, when we’re making decisions about how all of these things are used, while it’s still within reasonable range of planning and thinking about it.
I guess, the question would be, you might factor it out as, “How could you change our situation, so that better choices are made at critical junctures about important questions that might shape the long-term future?” You could factor that into things that improve individual judgment and decision making, things that improve collective judgment and decision making, and things that affect the share of cultural influence and the values of those who are making these crucial choices.
I think it’s a lot fuzzier when you start talking about exactly what you do about this, but I think that things like enhancing the growth and changing the character of the Effective Altruist movement is good. I think things like Philip Tetlock’s work trying to build on that, popularise it, do trials of things like that, and have those things be more incorporated into society is plausibly good.
I think that just having things like smarter people, perhaps a better education system. I don’t know. There’s a lot of possibilities. I’m a lot less opinionated on exactly what the best route forward is within that whole sphere of things, because I think a lot of it is more debatable. It’s a lot more robust and straightforward to think about the case for, “Well, if there’s a big global catastrophe, either we might be wiped out or it might really mess up how the thing plays out.”
Robert Wiblin: Presumably, you didn’t give that brief argument, and then the entire philosophy profession changed its mind and decided that shaping the far future is overwhelmingly important, and abandoned all of their other research projects. What kinds of subtleties did you explore in the thesis? What kind of objections and responses are there?
Nick Beckstead: Yeah.
Robert Wiblin: I’m sorry to cast your mind back perhaps five years to your PhD defense but let’s see what you can …
Nick Beckstead: Yeah. One big subtlety is to do with the difference in value between a future filled with something really good, a lot of people with lives that are good and have a lot of meaning in them, and a world that’s more empty. I think there’s a big set of philosophical questions about what framework to use for assigning value to those different things, in a subset of moral philosophy called population ethics.
The space of answers to that that is considered, I think, ranges from maybe the simplest view, which would just be, we add up all the utility. We just take a utilitarian approach. We list all of the people that exist in the outcome, we say how well their life is going, and we add it all up. Then there’s an opposing view, its opposite, which is called the person-affecting view. I would say the spirit of this view is to say, “Well, here’s some set of people, like maybe the people that exist right now or the people who are definitely going to exist regardless of what we do,” or something like that.
We classify those people as the main people. Then, we count all those other people that don’t have to exist, but might exist in the future depending on what we do. We call them the extra people. Say, “Let’s just add up the utilities of the main people,” or “Maybe let’s add up the utilities of the main people, and place some very secondary weight on the utilities of the extra people.”
Then, there’s a family of views that you could call views of diminishing marginal value, where they would say something like, “Well, it’s good for there to be some extra people, but beyond a certain point, they have less and less additional value per extra person you add.”
To get some sense of how you would be applying these kinds of frameworks in some real way, you could imagine going back to some point in the history of the world and, say, imagine that some country had just sunk into the ocean a hundred years ago. Let’s call it country X. You could count the harm done in a number of ways.
I think if you’re an economist or something, maybe what you would do, if you’re counting that up, would be to count the number of people that died when it sank into the ocean, assign a value to each of their lives, and say, “All right, that was the harm done from this event.” Maybe you’d also count some of the harm done in lost gains to the rest of the world by not being able to trade with them or profit from their innovations and things like that.
The total view would do it a different way. In my example, we’d consider all the people who would have existed if country X hadn’t sunk into the ocean, and we could add up the value of all of their lives as well under the same framework. Then, you could have some kind of intermediate approach, depending on your view about the diminishing value, if you want to just take some in-between answer.
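(As a toy illustration of how these frameworks diverge on the country-X example, here is a minimal sketch. All of the figures, and the particular diminishing-value function, are invented for the example.)

```python
import math

# Toy comparison of population-ethics views on the "country X sank" example.
# All numbers are invented for illustration.

deaths = 1_000_000            # people killed when country X sank
foregone_people = 50_000_000  # people who would otherwise have come to exist later
value_per_life = 1.0          # arbitrary units of lifetime well-being per person

# Total view: count the deaths *and* everyone who now never exists.
total_view_harm = (deaths + foregone_people) * value_per_life

# Strict person-affecting view: only the "main" people who actually existed count.
person_affecting_harm = deaths * value_per_life

# Diminishing marginal value: extra people count, but with sharply decreasing weight
# (a logarithm is used here purely as one example of a concave weighting).
diminishing_harm = deaths * value_per_life + math.log1p(foregone_people)

print(total_view_harm, person_affecting_harm, diminishing_harm)
```

Which of these answers looks right is exactly the kind of disagreement at issue.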
That’s one of the big main philosophical considerations: basically, which of these views you adopt. In my dissertation, I did something a bit fancier than this and said, “Well, maybe you don’t have to have exactly something like the total view. Maybe there are some other views that operate equivalently to it when adding up value in different periods of history.” I think maybe we don’t need to get into that particular subtlety unless you want to for purposes of this discussion.
I guess, that would be the first category of ways people would disagree with me: maybe they would adopt a person-affecting view and say, “If the world is destroyed, let’s count harm by adding up the deaths of all the people who died when the world is destroyed. Let’s not count up the harm in terms of the massive foregone astronomical future benefits.” That would be one category of disagreement.
Briefly, another category of disagreement would be time discounting. Some people would argue that benefits that occur more distantly in the future are intrinsically less valuable, and that we should have some exponential discount rate. If you do that, then unless you have benefits created over time growing at some faster-than-exponential rate, which is physically implausible, then-
Robert Wiblin: In the very long term.
Nick Beckstead: In the very long term, then almost all of the value of the future is going to be something that you could capture in the next, say, several hundred or several thousand years. That would be the other way somebody could … Those are probably the two most common ways someone could disagree with me.
Maybe a third most common way would be more of an empirical disagreement that’s like, well, this is all well and nice, but it’s so difficult to predict anything about the future that we should just do the same things we always thought we should have done before we ever thought about this set of arguments.
Robert Wiblin: My understanding is that the time discounting approach is not really accepted by almost any moral philosophers, or really anyone who’s thought about this kind of question from an ethical point of view. Is that right? It’s pretty unpopular.
Nick Beckstead: Yes, pretty much not accepted by moral philosophers. It is accepted by economists sometimes who have thought about it. I’m not clear how much this is seen as a question that people really dig into and debate about in economics, but it usually is … When I’m getting this argument, it’s usually somebody who is influenced by the economics profession in some way. I’ve never really got this argument from philosophers.
Robert Wiblin: I suspect that economists who are putting this forward are misunderstanding, or perhaps answering a different question than the one that you are. They’re, perhaps, discounting the value of capital rather than the value of directly morally-valuable experience or something like that.
Nick Beckstead: Yes. Yes, I agree with that. It’s confusing because there is importance in using discount rates for that kind of thing. I view them as something that’s intended to be and functions efficiently as a heuristic approximation for doing the normal utilitarian calculation, especially when you’re allocating between two goods of a relatively similar type over a period of decades, assuming nothing really crazy happens with the world.
I think the standard economic approach to discounting has good rationales in terms of thinking about other uses of a delayed investment, or accumulation of value from some asset just growing in the world or being reinvested in a company, or organisation, or country, or something like that. I think it goes a little bit crazy if it starts telling you that a billion years of utopia that happens a billion years from now is worth less than a penny or something.
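(For concreteness, the arithmetic behind that “worth less than a penny” point looks roughly like the sketch below; the 1% annual discount rate is just an example value, not one anyone in the conversation endorses.)

```python
import math

# Rough sketch of why a constant exponential discount rate trivialises the distant future.
discount_rate = 0.01          # example: 1% per year
years_away = 1_000_000_000    # the utopia starts a billion years from now

# Present-value weight on anything that far away: (1 + r)^(-t).
# Computed in log space because the number underflows ordinary floats.
log10_weight = -years_away * math.log10(1 + discount_rate)
print(f"discount factor is about 10^{log10_weight:,.0f}")
# The factor is on the order of 10^(-4,300,000): no plausible amount of future
# value survives multiplication by a number that small.
```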
Robert Wiblin: Let’s take that one off the table. I agree. That doesn’t strike me as that plausible. Sure, definitely. Let’s take that one off the table, but the other two are a little bit trickier. There is disagreement in philosophy about whether we should embrace the person-affecting view or not. To be honest, I’ve never really heard a coherent explanation of the person-affecting view, and how exactly you would define who is included as the baseline people and who are the extra people. Perhaps, that’s my fault rather than the philosophy’s fault. Why don’t you personally place that much credence on the person-affecting view, if indeed you don’t?
Nick Beckstead: Let’s see. I’m going through the mental motion of going back to my dissertation, and thinking about the chapter where I discussed person-affecting views. A sloganised intuition behind the person-affecting view is we’re in favor of making people happy, not making happy people. I think someone could arrive at this by … I think there’s a number of different types of intuitions that feed into this.
One kind of intuition is, who is the beneficiary of this action? Suppose, we consider the world where we don’t have a big utopia in the distant future, and we compare that with a world where we do have a utopia in the distant future. We don’t have this utopia, and you can imagine an exercise where we say like, “Raise your hand if this negatively affected you.” No one raises their hand because anyone who could have raised their hand doesn’t exist. They only exist in this other possible world. There’s, who’s the beneficiary of this? Nobody really. What’s so bad about us not having this big utopia?
Robert Wiblin: What’s the problem now?
Nick Beckstead: What’s the problem now? I guess, what’s the answer to that question? I guess, I would try to poke that intuition by offering a parallel type of problem for someone to think about. Imagine a world which is the inverse of this: instead of considering some great utopia that could have been created, suppose there is some great hell that we averted. We’d have had some large number of people having terrible lives, and we managed to avert it. We say, “Great, who’s the beneficiary of this?” No one can raise their hand, similarly.
There’s a structurally similar argument. Yet, I think few people would find it a terribly compelling thought to say, “Well, since no one raised their hand, there’s really no utility in averting that hell. Let’s assign it zero value, that accomplishment.” I think the rhetoric associated with this fails to capture the intuition behind it. They’re like, “I’m in favor of making people happy, not making happy people.”
Another thing that’s interesting about this is it reveals an asymmetry in the intuitions people have. Now, we’re talking less at the level of “I’ve got a philosophical rationale for this,” and more just reflecting on thoughts about cases. I think many people would think of a case where you’re considering creating an extra life, and saying, “Okay. Would it have been good to create this extra life?” A lot of people have an intuition that’s like, “I’m okay with it if we don’t create another happy person. It’s just not that big of a deal”; whereas, I think, everyone pretty much agrees that if you’re causing some person to exist to have a horrible life, that’s bad. There’s some kind of asymmetry there.
I think one of the puzzles in this literature and philosophy is trying to explain that asymmetry. I think I might be rambling a little bit. What was the original question here?
Robert Wiblin: I guess, it’s almost irrelevant. My question was, why don’t you personally accept the person-affecting view? Were you convinced through these kinds of philosophical thought experiments, or is it more of an intuitive judgment that you just didn’t see the appeal of the person-affecting view?
Nick Beckstead: Yeah. The methodology that I’d like to use for this, I guess, is you could ask yourself three kinds of questions. One, do the implications of this view seem intuitive and natural? Two, is there a good philosophical rationale for why we should only be attending to the interests of the main people and not the extra people that we could create, and particularly only the extra people whose lives are good and not the extra people whose lives are bad?
Three, we run the different available views on these questions against a gamut of philosophical thought experiments and tally up the damage taken by every view, and also try to think about which of these views might be caused by some bias in the way of processing the case.
Robert Wiblin: For example, a partiality for ourselves, or partiality for people who we know, and things like that.
Nick Beckstead: Sure. Then, we could ask, which of these views is taking the most damage? I guess, I would say we can go through more; I started going through one of them. I haven’t found the philosophical rationales for this view very compelling. I think this view has a couple of cases that I think of as the most compelling arguments for it, but it has other costs, in terms of intuitive counterexamples, that are larger than those of the other views, like the total view, on this point. I guess, I don’t think it’s winning on any of those fronts. I don’t know. We could try and drill down on those, but that’s the high-level answer.
Robert Wiblin: Sure. I guess, let’s maybe not drill down on those right now, because we could just link to the chapter in your thesis.
Nick Beckstead: Sure.
Robert Wiblin: I imagine one of them is the famous non-identity problem that Derek Parfit identified in Reasons and Persons.
Nick Beckstead: Yeah, that would be a good one. I think one thing that that illustrates is there’s something natural about this thought like, “Yeah, better to help people that exist than cause there to be extra happy people.” I think what the non-identity problem illustrates is basically that it’s very difficult to formally specify something that preserves this intuition, and also says plausible things about all the cases that people can imagine. That’s a bit of a puzzle for person-affecting views.
Robert Wiblin: Sure. I mean, I know that-
Nick Beckstead: I should say there’s maybe a bit of an underrated paper by Chris Meacham that’s an attempt to solve this problem, that maybe you could link to. I think I’ve thought less about that one, and have certain objections to it, but I don’t really discuss it in my dissertation.
Robert Wiblin: Another challenge for me in accepting the person-affecting view would be that I don’t think the idea that I am the same person as I was when I was a child, or that I will be when I’m a little older, really makes that much sense. That’s another idea that Derek Parfit explored in Reasons and Persons. My properties would be different, and it’s not clear why the continuity between who I am today and who I am in 20 years’ time really means that I’m the same person in a morally relevant sense.
Nick Beckstead: A lot of the arguments in the dissertation already appeared in the philosophical literature, and it’s more of a review and summing up, taking damage counts for all the views. One kind of argument I hammer on a bit more is that if you really accepted this person-affecting view, it seems like it has implausible implications for thinking about the value of preventing the destruction of the world, which is really the main question that I want the framework to give plausible answers to for the purposes of this discussion.
If you said, “All right, how good would it be if we prevented the destruction of the world?”, let’s consider all the future beings and forget about all the current beings for a moment. On this strict person-affecting view, taking it in this asymmetric way, it seems like what you would end up saying is, “Well, if we cause there to be all these future beings, and there’s a utopia, then that’s going to have basically zero value, because all those beings are extra, so we’re not going to add them up in our grand utility calculus.”
There’d be some probability that things turned out badly for the future beings, or some probability that they turned out badly for a fraction of the future beings, in which case, it seems like you’d be ending up with an argument that’s like, “Well, it would be negative if that happens.” Then, if you consider this as a gamble, and you say, “Well, this can’t be good, but it could be bad,” then it’s like automatically bad to save the world, at least counting all these future benefits, leaving aside the benefits to the current people. That just seems like it can’t be the right framework for thinking about this problem.
Robert Wiblin: Okay. We’ve talked about discounting. We talked about the person-affecting view. The third objection is that even if it would be good to make the future better, you can’t really do that, or maybe you can, but only by making the present better. What do you think about that just briefly?
Nick Beckstead: Yeah. I guess, my first objection to this would be that there are a number of possible global catastrophic risks that seem like they could affect whether, and how, the long-term future plays out. The person making this argument would essentially be saying that there’s nothing that we can do about any of those global catastrophic risks, or that the extent to which we can affect them is so small that it’s really, really not worth considering.
I think, this person is essentially saying like, “Well, there’s nothing we can really do to reduce the risk of nuclear war. There’s nothing we can really do to reduce the risk of an asteroid hitting the earth. There’s nothing we can really do about potential risks from advanced AI. There’s nothing we can do in pandemic preparedness that would reduce the probability of a doomsday pandemic happening.”
Robert Wiblin: It’s not even possible that you could think of other problems that we haven’t yet listed where you could make an impact.
Nick Beckstead: I think, in any of these cases, you might say it’s very small probability, but I think this doesn’t seem like a particularly plausible suggestion that there’s nothing you can do about any of these things. It might feel more like as an individual that there’s nothing you can do about any of these things. I think one reframing of it that I could offer would be like, “Well, do you think many individuals doing something about it could collectively make some difference on it?”
It depends on what kind of unit you want to think of yourself as. Say, if you thought of the Effective Altruist Community as a group of thousands of people who are trying to do something about one of these problems, it seems like not at all absurd to believe that if you have thousands of people trying to work on pandemic preparedness that they can improve pandemic preparedness, and make it more likely that if there was a doomsday bio catastrophe, we’d be more prepared for that in some way. We’d be more likely to detect it early and stop it. We would be more likely to be able to develop a vaccine quickly and deploy it more quickly.
I have limited sympathy with the view. I think my interlocutor might say something like, “These probabilities are super made up. It’s not really going to translate into anything,” but, I don’t know. If you want to represent the other side, you could give it a go.
Robert Wiblin: Well, obviously, I don’t agree with this view because I’m spending my career trying to get more people to spend their career reducing global catastrophic risks. I just think it’s implausible that having thousands of smart people working on difficult scientific or political problems just, in principle, cannot improve those problems because we just see throughout history that thousands of smart people working on difficult problems are just very frequently successful. They invent new things or they run campaigns to change policy.
The idea that we could be so pessimistic as to think there’s just virtually zero chance that a community of smart people who are trying to reduce global catastrophic risks could have any impact: I just don’t understand what the basis for it is. It seems like an extremely strong claim.
Nick Beckstead: Yeah. I don’t know. What are the kinds of objection might you have to this? Another objection you might have would be like, “Well, what you would really want to do if you wanted to make sure the long-term future turns out really well would be try to make sure that powerful countries have well-functioning institutions.” Maybe our democratic discourse is more civil and reasonable, or maybe we have better people in office. I could imagine somebody arguing like, “This is the most important thing to be doing regardless of what you believe about the set of considerations. Therefore, this whole discussion is irrelevant when deciding what is best to do.”
I think this is a plausible view. It would come down to differences of degree about how big are these global catastrophic risks? How much could we reduce them? How likely is it that we’ll end up with a good future under business as usual with no global catastrophe? I think that seems like something you could have more of reasonable …
Robert Wiblin: It’s more of a good question.
Nick Beckstead: Yeah. You could have more debate about. Maybe more of a taste-based question.
Robert Wiblin: Yeah. I’d like to discuss that later on. Maybe we’ll come back to it if we have time. Let’s move on from your thesis to talking about some concrete details of the specific global catastrophic risks that we face. As you said, the main thing that convinces you that we actually can do something about this is just looking at the details, and seeing that there’s all this work that can be done that seems like it would make a difference. Which global catastrophic risks do you think it’s most valuable to have extra people working on, or extra money going towards reducing?
Nick Beckstead: My basic framework for thinking about this question would be go through the list of global catastrophic risks and say, what’s the expected harm of this risk? Which ones are most likely to derail civilisation if I was assigning subjective probabilities to them based on what I know about them? Which of these risks are getting the most attention in terms of dollars and number of very talented people that are working on them? Which of these risks does it seem like there’s the most to do about them?
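(As a minimal sketch of the kind of scoring exercise this framework implies, here is an illustrative example. The factors mirror Nick’s questions, but every number and the weighting formula are invented, not Open Phil’s actual figures.)

```python
# Illustrative-only prioritisation of global catastrophic risks along the
# dimensions Nick lists: expected harm, existing attention, and tractability.
# All numbers are invented for the example.

risks = {
    #                      subjective P(derails civilisation), attention ($M/yr), tractability (0-1)
    "engineered pandemic": (0.02,   50.0,  0.4),
    "unaligned AI":        (0.03,   20.0,  0.3),
    "asteroid impact":     (0.0001, 100.0, 0.8),
}

def priority(p_derail: float, attention: float, tractability: float) -> float:
    # Higher expected harm and tractability raise priority; more existing attention lowers it.
    return p_derail * tractability / attention

for name, (p, a, t) in risks.items():
    print(f"{name}: {priority(p, a, t):.2e}")
```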
Open Phil has a blog post, probably a few years old now, that goes through and ranks all of the global catastrophic risks in a spreadsheet. I think I still mostly agree with that blog post. The output of that is that the risks Open Phil is prioritising are potential risks from advanced artificial intelligence, and biosecurity and pandemic preparedness. Those two risks are, in my opinion, scoring some of the highest in terms of likelihood of derailing civilisation. They also get very limited attention from the philanthropic community.
I think, Open Phil is the largest philanthropic foundation that’s funding work on either of those. They score pretty well in terms of how neglected they are. Then, I think, each of them, it’s not as easy to tell whether your work is going to turn out to be very useful on them, and how well you’re doing as it is with some other things like, say, malaria eradication or something like that.
Robert Wiblin: Or even asteroid detection.
Nick Beckstead: Or even asteroid detection. But it does seem like there are things where, given some plausible assumptions about the world (there being some chance of developing transformative AI in the next couple of decades, or some reasonable prospect of it being possible to engineer really devastating pandemics in the next couple of decades), preparation of various sorts seems like it could get you somewhere with this.
Robert Wiblin: What other problems do you think are most pressing to work on besides those two, from any cause area?
Nick Beckstead: What else is most pressing to work on apart from this? I would say it depends on who you’re asking this question to, in a way. I would answer this question differently if I’m advising the United States Government on what its priorities were versus if I were advising a young person who was trying to decide what to do with their career, and had EA inclinations.
For the latter category, I might say, “Well, apart from working in these areas, maybe working in the Effective Altruist Community could be quite good, and maybe working on better political judgment and decision making could be quite good.” If I were advising the US government, I’d have a different set of answers. I mostly think about the latter question because that’s who’s asking me.
Robert Wiblin: Open Philanthropy has a whole bunch of money that it’s trying to give away. On what problems do you most struggle to find people who can usefully put that money to work, and what kinds of problems are those?
Nick Beckstead: One category where I would say we’re really struggling to find people to do valuable work, and would like to have more people doing valuable work is the strategic aspect of potential risks from advanced AI. The way I would think about what risks we’re interested in and preparing for in artificial intelligence, I would say there’s basically two categories.
One of them is the AI alignment problem, or loss of control. There’s a scenario where you have a very powerful artificially intelligent system, or set of systems, where there’s a misalignment between the intentions of the people who’ve designed the system and some goal that the system itself is pursuing. I think people like Nick Bostrom have explained why that’s a potential risk, and why the harm could be quite large in certain types of cases, not today, but further down the line.
Then, the other kind of category is, maybe you maintain control of the system and it’s following the user’s intentions, but maybe the user’s intentions are not that good, or aren’t very aligned with what’s best for the world. There could be some plausible scenarios in which some group having an advantage in artificial intelligence could result in a concentration of power, and harm coming from that.
Then, there’s a bit of interaction between the two of these things. You can imagine scenarios where maybe it’s difficult to solve this alignment problem, and, at the same time, different people are worried about what other people will do if they are the ones who get a concentration of power. It seems like there’s a recipe for harm there.
In terms of solving this problem, the Effective Altruist Community has really focused most of its discussion so far on the technical aspect of this, which is: what are the principles or the technical specifications that could be used to design a system so that alignment is retained between what the system is doing and the intentions of its creators? Less has gone to something that’s more of a political or strategic problem, which is: what is the proposed way of proceeding, given that you have created some very powerful artificially intelligent systems, that everyone could agree to, that would likely solve both of these problems, and be acceptable to the main parties that need to be influenced, whether they’re companies or states?
Thinking through that problem is something that I would love to see more people in the Effective Altruist Community doing. That’s an example of something maybe that’s more detailed than you’re going for, but that’s one category.
Another category, I still think the technical side of this AI problem deserves a lot more attention than it gets. I would love to see people contributing to biosecurity and pandemic preparedness in a couple of different ways. On one hand, there are a lot of technical problems that could be solved.
Somebody who is a really good biologist and really thinks a lot about this problem could make a big difference in terms of getting us in a better position to rapidly deploy medical countermeasures, such as immunising the population more quickly than is currently possible, or getting a wider variety of broad-spectrum antivirals that could be used and deployed in the case of a pandemic. Or getting us tools that will make surveillance and diagnosis cheaper, more ubiquitous, more rapid. There’s a suite of technical issues there.
On the flip side of that, and this is less my area to think through and I have less to say about it than the science side of it, but it seems like there’s a lot that could be done on the policy side in terms of getting governments to pay more attention to this, and understand the biggest threats that they should be preparing for, improving surveillance systems, and making sure that the most crucial research is well funded. The career route there would be people learning about biosecurity and getting into the field. I think it would be a great area that a lot of Effective Altruists could contribute to, but it doesn’t really get that much attention from our community at this point.
Robert Wiblin: I spoke with Howie Lempel, who used to work at Open Philanthropy for about two and a half hours about this. If you’d like to hear more on the biosecurity, then we’ll stick up a link to that one.
Nick Beckstead: Yeah, it’s probably better to hear his version of that answer.
Robert Wiblin: You’ve also been heavily involved in the Effective Altruism community over the last five years, basically since its inception. You’re familiar, I guess, with both its pros and its cons. How would you like to see the EA community improve and develop?
Nick Beckstead: I think I would like to see more people being dedicated to some of these problems, and some of the other problems, in a full-time way, in a high-intensity way, with their careers, not just with their donations, and not just as a side project that they discuss on the internet and things like that. I mean really getting a little bit more full-time, and fully focused, and specialised on particular aspects of this, and thinking about where they can contribute.
One of my hobbies, a thing that I find really interesting to do, is to read about big accomplishments of humanity in the past, and read biographies of people who achieved great things. One of the things that’s come out of that for me, just thinking about how people have a lot of impact in the world, is that I think it’s really hard to have a home run as a spare-time venture, or as a personal side project.
I think I would love to see more people trying to ask themselves, “What piece of this could I go full time on? What piece of this could I become an expert on?” I think finding jobs on these problems in the Effective Altruism community, or finding jobs in the government advising people on how we should deal with all of these things, would be a big improvement on the extent to which people are currently emphasising things like earning to give. I think that would probably be my top ask for the EA community.
Robert Wiblin: A question that often comes up is whether Effective Altruism should aim to be a very broad movement that appeals to potentially hundreds of millions of people, and it helps them each to make a somewhat larger contribution, or whether it should be more, say, like an academic research group or an academic research community that has only perhaps thousands or tens of thousands of people involved, but then tries to get a lot of value out of each one of them, really get them to make intellectual advances that are very valuable for the world. What’s your thought on that, on the two options there?
Nick Beckstead: I guess, if I have to pick one, maybe I would pick the second option, but I might frame it a little bit differently, and I might say, “Let’s leave the first option open in the long run as well.” I guess, the way I see it right now is this community doesn’t have currently a scalable use of a lot of people. There’s some groups that have found efficient scalable uses of a lot of people, and they’re using them in different ways.
For example, if you look at something like Teach for America, they identified an area where, “Man, we could really use tons and tons of talented people. We’ll train them up in a specific problem, improving the US education system. Then, we’ll get tons of them to do that. Various of them will keep working on that. Some of them will understand the problems the US education system faces, and fix some of its policy aspects.” That’s very much a scalable use of people. It’s a very clear instruction, and a way that there’s an obvious role for everyone.
I think the Effective Altruist Community doesn’t have a scalable use of a lot of its highest value … There’s not really a scalable way to accomplish a lot of these highest-value objectives that’s standardised like that. The closest thing we have to that right now is you can earn to give, and you can donate to any of the causes that are most favored by the Effective Altruist Community. I would feel like the mass-movement version of it would be more compelling if we had in mind a really efficient and valuable scalable use of people, which I think is something we’ve figured out less.
I guess what I would say is, right now, I think we should figure out how to productively use all of the people who are interested in doing as much good as they can, and focus on filling a lot of the higher-value roles that we can think of that aren’t always so standardised or something. We don’t need 2,000 people to be working on AI strategy, or working on technical AI safety exactly. I would focus more on figuring out how we can best use the people that we have right now.
Another modification, I guess, to just picking the small group instead of the broad mass movement thingy: I don’t think it’s all about research. I think a lot of this is about implementation, and management, and operations, and running an organisation really well. It’s not just like four heads or something that are going to write weird research papers about the value of future lives or something like that. I think there’s a lot of ways for people to contribute. I think the relevant axis for me is more like, “Are you full-time dedicated and thinking about the problem in a sophisticated way?”, less than, “Is it academic, or research, or something like that?”
Robert Wiblin: In my experience, you have some of the best judgement of anyone I’ve met, which is one of the roles that you need in the community. You need some people to be coming up with new crazy ideas, being contrarian and getting people to think new thoughts. Then, you also need honest brokers who just consider all of the arguments on one side and all of the arguments on the other, and try to reach a balanced judgment that other people can trust. How do you think you’ve cultivated that over your life? Were you born this way, or is it a result of philosophical training, or something else?
Nick Beckstead: It’s a difficult question. Let me think about that for a second. I think that I’m unusually high on skepticism, and place an unusually high amount of value on authenticity in what I’m saying. If I’m saying something that I don’t quite know, or it’s a little bit off somehow, and I notice it, I’m running it through my head all the time and saying, “Is that exactly true, or is it more this other thing?” That might be a piece of it. I have a lot of skepticism, I think, just about established fields and ways of doing things, where people say, “This is a trustworthy way of thinking,” or, “This research methodology works.”
I think I don’t necessarily have default trust in the conventional wisdom of that sort until I’ve spent some time poking it, or unless there’s obvious use of the reasoning method in the world. If people are building rocket ships with some physics, then I’m likely to really give them the benefit of the doubt.
Those are some initial thoughts. I don’t feel like I know the answer to this question. I think I’m more just tempted to generate an answer, and didn’t really quite succeed at it.
Robert Wiblin: Do you feel like your judgment has gotten better over time?
Nick Beckstead: Definitely, it has. Maybe some tools that I feel like I’ve gotten some juice out of, learning philosophy and learning how to take something that’s written and be like, “Alright. What were the main claims in this thing? What were the arguments for it? Did the arguments have the structure of a valid argument? Which of these premises was the weakest?”
I think I’ve got a fair amount of juice out of learning that type of stuff. I think I got a fair amount of juice out of just thinking a lot about Bayesian epistemology, being like, “Alright, I’m going to be the kind of person that assigns credences to things and tries to act in accordance with my credences, and bet in accordance with them, and be willing to do that.” Maybe there’s something about that that is particularly useful. I think in the Effective Altruist Community, there are people that I’ve learned a lot from interacting with, and maybe there’s a piece of that there.
Robert Wiblin: That’s a good answer. I think, for quite a few years now, I’ve just been in the habit of giving probabilities to almost everything that comes up; whenever you’re thinking about a contentious issue, you just attach a credence to it. It’s hard for me to remember how I thought before that. How would you even deal with these issues? If you’re able to pick that up and just make it a habit, that’s something that I’d definitely recommend. I think if you get involved in the Effective Altruism community, it might be hard to avoid picking that one up. It’s somewhat contagious.
Nick Beckstead: Maybe just being in the mood of thinking like there are a bunch of ways that people rationalise and self-deceive, and trying to know about them, and trying to notice it if I’m doing it myself or if I’m saying something that’s not quite right because I have a side in an argument, and maybe I should take a step back and notice that I’m doing that, conceding ground inch by inch rather than saying, “The evidence is going in one way here. Maybe I should just follow that where it leads.”
Robert Wiblin: On your personal website, you have a list of books that influenced your thinking that you particularly recommend other people read. I’ve actually been working through them in the last few years. What are a few of those that you would like to mention here?
Nick Beckstead: My website lists audiobooks. It’s just books that I’ve been listening to over the last few years. They’re all different and interesting in different ways. Probably, if I had just listed books, full stop, I might have a different list: perhaps less idiosyncratic in some ways, and more idiosyncratic in some other ways. I’ll just restrict the answer to the audiobooks that I’ve been listening to over the last few years.
One of them that I really liked is The Better Angels of Our Nature by Steven Pinker. When I first saw this book, I thought I wouldn’t find it that exciting because of the subtitle. I can’t remember, is it Why Violence Has Declined or How Violence Has Declined? I was like, “Well, I already believe by default that things have been getting better and violence has been declining. I don’t find that hard to believe. What am I really going to get out of this book?”
I actually thought it had a really interesting blend of thinking. It did a number of things that I like. One, it took the macro historical perspective like where is the world going. Two, it had a nice blend of quantitative and qualitative data. Three, it had plausible and interesting speculation about the mechanisms of why that was happening.
It’s filled with these graphs that illustrate a lot of the main points. Also, it has these really interesting qualitative stories about, “Well, we used to torture people in these ways,” or, “People used to get in fights on the beach over women, and it was a macho and cool thing to do,” various things that you just know about but don’t exactly think about all the time. It weaves a nice and plausible story about how that all fits together. I thought that was a really enjoyable book.
Robert Wiblin: Pinker has a new book coming out, I think, in December or January called Enlightenment Now. I think I’m going to enjoy a lot. Perhaps it will be more of a cheerleading book than anything else, but I’m excited for that one.
Nick Beckstead: I’ll list another one and talk about it a little bit here. I’ve got a bunch of other ones, but The Power Broker by Robert Caro was a really interesting book. It’s all about this figure, Robert Moses, who came to power and basically became this very overpowered civil servant in the history of New York. It tells the story of how he did this, and how he eventually got some very large budget that he was working with. It’s like some sizeable fraction of all of New York City’s money was under the control of the authority that he was in charge of.
Robert Wiblin: This is an immense book. I think it’s a 60-hour long audiobook, and I’m 40 hours through. It has this extraordinary story at one point about how he managed to … He was running, I think, the Triborough Bridge Authority and various other statutory authorities that the city government had created.
He took a bunch of loans from bankers who wanted to lend money to construct the infrastructure. He put into the bond agreements that he would be leading the Triborough Bridge Authority, and then all of these other infrastructure authorities. It then became legally impossible to remove him, because it would be a violation of the agreements under which the money had been borrowed from the banks.
Of course, the politicians only noticed that he could do this once he had already done it. As a result, he was able to stay in this role by just rolling over these bonds, which always had a clause that he could never be fired. He was able to remain in control for decades.
Nick Beckstead: Right. I think that’s a really valuable and interesting book because I really enjoy certain types of biographies as micro histories. You learn all these things about how a political system works in one place or how an organisation works in one place. You can build up an inventory of these things over time that you know about.
Then, when people make interesting general claims about how things work, I like to test them against the micro histories that I know about, and be like, “Does that really fit with the life of Robert Moses?” or “Does that fit with what I learned about the life of Steve Jobs?” or “Does that really fit with what I learned from X, Y, Z compendium of people who influenced the world a lot and their mini biographies?” I think that can be very valuable. It’s an interesting piece of the world that I wouldn’t otherwise know a lot about, but it seems like it can be used to test a lot of these other general claims about how things work. You can think about it as you’re reading through.
Robert Wiblin: Do you want to give a third one?
Nick Beckstead: Yeah.
Robert Wiblin: One that’s been more divisive among my friends has been Moral Mazes by Robert Jackall. Do you want to quickly describe that?
Nick Beckstead: I got this recommendation from Aaron Swartz’s list of books that he liked. I would call it basically someone doing an ethnography of a couple of different corporations, describing what kinds of failure modes there were in these organisations, and where incentives would and wouldn’t be compatible. I really enjoyed the book for that purpose.
Let’s see. I’m trying to remember anecdotes from it. One kind of anecdote that’s fairly illustrative, and that I found interesting, is that there would be people responsible for manufacturing plants, and they’d be in these positions for a period of a few years. They would be judged on how well things were going while they were there at the plant.
They would have this expression called milking the plant, which would be a thing you could do where, basically, you would cut corners on maintenance of everything, trading away long-term value for short-term gains, and it wouldn’t really show up in the metrics that anyone was using to evaluate how well the managers of these plants were doing. Then, by the time there was a problem several years later, the manager would have been promoted or moved on to another role in the company. This was basically known to be a thing by a number of the people that he interviewed. I thought that was a very interesting thing.
Another thing I found very interesting, perhaps this is naïve and not that interesting to most people, but interesting to me as someone who hasn’t worked in a giant bureaucracy: people tended to talk about their work like, “I work for this person,” and less like, “I work for this company.” There would be these very transactional relationships where, for the person you work for, you’re making sure that they look good, gathering information for them, and feeding it to them. The person you work for is making a bet on you as an apprentice. If they rise within the bureaucracy, then they’ll bring you along into higher roles.
I just think it highlighted a lot of dysfunction that I don’t know where else I would know about unless I had lived through it or something. I found it really interesting for that reason.
Robert Wiblin: Yeah. I have worked in bureaucracies, not for that long, but I’ve really enjoyed the first few chapters. I was laughing along and nodding along to the various descriptions of people’s behavior, and also the strategy that you don’t often think about explicitly that explains why they’re behaving the way that they do.
Another anecdote that sticks out to me: it did a lot to explain the ideology that comes along with working in a corporation. Perhaps the moral hollowing out that comes with working in a corporation for a long time, where it’s very bad for your career to think too much in moral terms, and not enough about expedience, about what is expedient for the company and what’s going to advance your career.
There was a case where two people at this firm who had been having an affair were making out in the parking lot, and everyone saw it. Not a single person objected to the fact that they were cheating on their partners. They just thought that this was bad because it showed that they lacked the self-control necessary to do the dirty work of the company. This is what they would say in the private interviews with the ethnographer. He was somewhat struck that they just didn’t think of this as a moral issue at all. It was purely a matter of pragmatism.
Moving on, what is the path that took you to where you ended up now? What are some particularly good calls you think you’ve made as your career has progressed?
Nick Beckstead: Let’s see. I guess, this is a question of how far back you want to go. You could go all the way back to being an undergraduate and going to graduate school, and then all the way up to the present. Why don’t I do that? Why don’t I just start back from being an undergrad to grad school?
Robert Wiblin: When I was a young boy, I dreamed of working in a foundation. Making grants.
Nick Beckstead: Right. When I was an undergrad, I guess maybe one of the big first choices I made was: am I going to go to graduate school in Philosophy or Economics, or am I going to go and try to make a bunch of money, and do a proto earning-to-give type strategy? This is back in 2006 when I was making these choices. I had read Peter Singer as an undergraduate. I had some vague utilitarian guilt that I ought to be doing something useful for the world. I think I ended up really just going with the thing, of those, that I seemed most likely to be exceptional at.
I had more signs there. I majored in Philosophy and double majored in Math. I think I had some signs that I could possibly be a very good philosopher, and not really any signs that I was going to be some really excellent mathematician. I hadn’t really tried economics as much. I think I was more gripped by an interest in the philosophy questions. I was spending a ton of my time going through piles of books at the library that were in the philosophy section and less of it in any of the other ones.
Maybe as a personal interest thing and in some rough sense, back at that point, I was most interested in epistemology, and I had some feeling like, “Geez, figuring out good standards of reasoning in a general way, and not just in a ‘here’s how you do a particular statistical test’ type way, seems like a very valuable project.” The philosophers seemed like the people who were the best and most natural fit for carrying out a project like that, or at least the only people who seemed to be thinking about it that I had found at that point in my life.
Anyway, I decided to go to grad school in Philosophy. I went to Rutgers. Basically, that was the best programme I was accepted to. At the time, I think it was ranked top three in Philosophy in the US. It’s a good programme. Anyway, it’s an interesting question to think about which of those was the best choice. I guess I’m glad I didn’t do the proto earning-to-give thing. I’m not sure how it would have played out if I had gone and done Economics instead. Maybe it would be more of a debate.
I went to grad school. I was mostly thinking about epistemology. I think there were two things that made me end up changing and deciding, “Well, I should be studying some other things and thinking about other things in my life.” I was thinking a lot about ethics. I was having a lot of debates about consequentialism and utilitarianism with people in grad school. I felt like I wasn’t really being talked out of my broadly utilitarian view on things. I was feeling some cognitive dissonance about my life. I think I was like, “Are we really making a lot of progress in a practical way on how to reason better?”
I think, in a lot of ways, the questions that seemed most popular in philosophy weren’t making a lot of progress on that question either. A lot of the community seemed more interested in the analysis of what knowledge is, which didn’t seem particularly useful to me as an input into deciding exactly how to reason. There’s a community doing stuff on the foundations of Bayesianism. That stuff was heuristically very interesting to think about and apply, but a lot of the questions were difficult to get much traction on, and weren’t super actionable in terms of getting better at thinking about how to reason exactly.
Then, at the same time, I went and read this biography of Paul Farmer, who’s the founder of Partners in Health, and who heroically saved all of these lives by setting up an organisation that does that, and I was feeling a bit unexcited about my own future. I resolved, at that point, “I really should be living up more to my values. If I’m going to succeed at this, I’m going to need to find other people that are interested in thinking about this stuff.”
At that point in my life, I started looking for that, thinking maybe I should be thinking more about ethics. That’s the incremental change in research interest that seemed like it would have more of a chance of being relevant to the world, and I was thinking about what I could do in my spare time to make more of a contribution to things. It was around that period, around 2009, that I started getting introduced to various players in the early days of the Effective Altruist Community.
I went and read about some early plans for Giving What We Can before it was a thing. I was introduced to an early stage of GiveWell through Peter Singer when I went asking him for advice about what I should do with my life; I was taking a class with him. I was introduced to Nick Bostrom’s work just randomly from a colleague who said, “You might really like this paper on infinite ethics that I found on Nick Bostrom’s website.” They were indeed right. I really did like that paper, and I was looking through a bunch of his other things. I bumped into Robin Hanson. He came and spoke at a class I was taking at Princeton, and I started looking through his blog and thinking about things on there.
Then, I guess, the next stage was something like, “Maybe I should write a dissertation on something particularly relevant.” I was thinking through, at that point, arguments that I’d seen from Nick Bostrom about astronomical waste. I was like, “Okay. Well, maybe I could write a dissertation about this. It seems like there are a lot of points in this argument that are debatable that maybe I could shed some light on,” and I ended up doing that.
I decided that, of these communities in proto-EA land that I thought I could help, I had the most resonance at that time with the Giving What We Can crowd. I’d had a lot of conversations with Toby Ord and Will MacAskill at that point, and thought, “Maybe I could help Giving What We Can be more effective.” I got really involved with them, became a trustee of that organisation, and helped launch student groups, the first EA-branded ones in the US, and met a lot of the people that are in my current network through that.
To try and speed this up, so I don’t ramble on forever: the next things I did, while I was writing this dissertation, were to go and visit the folks at FHI for a summer and get to know them better, and to intern at GiveWell for a summer and get to know that group of people better. My first job coming out of that was as a research fellow at the Future of Humanity Institute. After being there for almost two years, I got an offer from Open Phil to come and work on things there, and I’ve been doing that for the last three years.
Robert Wiblin: While you were writing a thesis about the value of the very long-run future, you were also doing work that was focused on poverty reduction specifically. Was that a tension? Why did it take you a while to switch towards focusing on existential risk or catastrophic risk?
Nick Beckstead: It was a little bit of a tension. I mean, there were a couple of reasons. One reason was it still felt a bit crazy to me somehow to be placing some sort of bet with my life that is like, “Well, I’ll have this small probability of getting this huge number, getting this huge amount of good accomplished.” I was particularly relying on stuff that I hadn’t vetted thoroughly: views about the plausibility of AI, and space colonisation, and crazy transformative tech type things. It felt a little bit crazy, I guess, was one reason.
Then, the other reason was I had some hopes that promoting the growth of the Effective Altruist Community would eventually help a lot with existential risk, especially if it turned out to be a well-reasoned case. I think for that reason, I felt okay about what I was doing at that time. In retrospect, I think the EA community was maybe a little bit too hesitant to wear the weird on its sleeve. Maybe I’d prefer it if we had done more of that.
Robert Wiblin: In the past, when you’ve had somewhat close calls trying to decide what to do next with your career, you’ve often written quite lengthy documents weighing up all of the options in front of you. Do you think that was a good use of time and something that other people should do as well?
Nick Beckstead: I do think that was a good use of time. I think there were maybe a couple of cases where I over-analyzed it a bit, but I think on balance, that’s the direction to err in.
A methodology I found really useful with decisions like that is: step one, write down all of the considerations, pros and cons, as you see them right now, and rank them in terms of importance. Step two, write down all of your key uncertainties, or articulate all the ways that you’re uncomfortable with your current stance on the issue. Step three, state a default action. This is the gun-to-my-head, I-have-to-decide-right-now answer. Step four, list things you could do to investigate this question and resolve your uncertainties and the ways you’re uncomfortable, and do them. Then, talk to a bunch of people whose judgment you trust and who know about your situation. Do a bunch of that. Maybe you do a little bit of iteration on the list of questions.
I think a failure mode is where you just keep thinking about it until it seems clear what the right decision is. In some cases, that’s interminable, and that’s a mistake I’ve made at some points in my life when I was thinking about what the next career step was. At some point, you have to just say, “All right. Well-”
Robert Wiblin: This is as much as I’m likely to know?
Nick Beckstead: I’ve investigated all the things I should investigate. This is my choice. I’m making it. I do feel pretty good about having spent time writing up these documents. I think the thing that is a waste is agonising over it a whole bunch even when I don’t seem to be adding anything to the decision.
Robert Wiblin: I’d like to now talk about three blog posts that you’ve written over the last few years. The first one is about in vitro meat, or clean meat, as it’s often called now. Two years ago, you wrote that you were fairly pessimistic about the rate at which clean meat might be developed. For that reason, I think Open Phil was more likely to make grants focused on plant-based alternatives to animal products rather than cultured meat. What were the concerns that you had at that time?
Nick Beckstead: Maybe I should just say a bit to contextualise what I, and Open Phil, did to investigate this question, what our current stance is, and how confident we are in that. One of my projects here at Open Phil has been to identify areas that are particularly promising as possible programme areas in science. I’ve been working with scientific advisors to help me evaluate a lot of technical material. My role has been on the values side, to think a bit about how good it would be if we accomplished this goal.
Also, on some philanthropy-type questions: how neglected is this cause really? Does it look like a good fit for philanthropy? And on certain questions that a lot of scientists wouldn’t think about as naturally or consider a normal part of their discipline, like “What is the timeline on which this type of technology might be developed? With what probability might it be developed?”, which is, in some ways, something the scientist knows the most relevant inputs to, but maybe not the type of question they’re used to writing and thinking about in the papers they publish.
Anyway, one of the things that we decided to look into was alternatives to animal products. I worked with somebody who was working with us as a consultant at the time, who’s a scientist. We had conversations with several of the main people who work in that field, and put together information about what the companies are, what philanthropic investments had been made in the area, what problems need to be solved, and what kind of work could feasibly be done to solve them.
We also thought a little bit about analogous cases, like biotech companies that had been trying to make a commodity, and the tissue engineering industry, which is not a perfect analogy for clean meat but is fundamentally similar in a number of ways. We tried to pull all of that together and make a judgment about how promising this is.
Maybe that’s a couple of hundred hours of work or something. Definitely, I don’t consider myself an expert on this. The attempt was to have some basic understanding of the area. I guess our stance coming out of it was that while it would be very high upside and it’s a pretty neglected area, we didn’t see a lot of evidence that it was particularly tractable. Some of the scientists that we consulted were pretty skeptical of whether it was going to be feasible. When we tried to put numbers to it in a lowest-feasible-cost analysis, we didn’t really see a way to get the cost down as low as it would need to go.
We also had some conversations with various people in this field after we’d come to those conclusions, and after, at a later point, having Chris Somerville, one of our science advisors, look into the cost effectiveness, or, sorry, the lowest-cost analysis, and see if this was going to be feasible. We also got the opinions of some other scientists. The people who didn’t really have a horse in the race tended to be very, very skeptical of it. We didn’t really hear from people what I would consider to be convincing arguments or counter-considerations on what the lowest possible cost was likely to be.
My position is not like, “Hey, that definitely won’t work.” I think some people might argue, “Well, if you don’t know it won’t work and the upside is so high, then Open Phil should invest in it anyway, because that’s a good way to do science funding.” I guess I don’t totally see it that way. If you do some preliminary investigation on the tractability of some idea, and all the signs that you’re seeing point towards pessimism, you could argue, “Hey, we should look into it more.” Maybe, at that point, it would start looking more like it was going to work out.
But it doesn’t seem like a funder should be funding something when all the signs we saw when we looked into tractability looked pretty unpromising. That’s the basic stance. You could say, “Well, you should go for moonshots.” I guess my reply is, “Well, if that’s the philosophy you want to take, what are the best moonshots you should be doing?”, rather than a stance of, “Well, this is a moonshot that is good, so you should go for it.”
We made an investment in Impossible Foods. I think that is a more promising bet in a number of ways than what we might have done in clean meat.
Robert Wiblin: Impossible Foods does plant-based alternatives.
Nick Beckstead: Does plant-based alternatives, yeah. That seems like a model where it’s much more likely that the costs are going to come down in a way that it could be a profitable mass-market enterprise.
Yeah, I guess, that’s the basic stance from my end. I know a lot of people in the EA community have been skeptical of this decision. I don’t know that I’m right. I don’t have super high confidence. There’s a question of, what do I spend my time on? I think in an ideal world, if I had a clone or something, maybe the clone would go and spend a bunch more time getting to the bottom of this debate. In the current world, my focus is more on building the EA community, maintaining our science operation that we have going, and thinking more about what we can do about global catastrophic risk.
Robert Wiblin: Has your view changed at all in the last few years, if you were still following it? Or have your interests moved on a bit?
Nick Beckstead: I mean, maybe. Has my view changed at all? I think some people that I respect have not been very convinced by Open Phil’s decision. That gives me some pause. Maybe I’ve updated in light of that to be more optimistic than I was. I’m pretty unsure whether we’ve made the right call on that yet from where it stands.
Robert Wiblin: Alright, next one. A topic that you wrote about in your thesis, and then have expanded on in a PowerPoint on your site, and I think in some blog posts, is the difference between narrow and broad interventions for trying to improve the world. Do you want to explain what that dichotomy is?
Nick Beckstead: Sure. It goes back to what we were talking about a little bit earlier. Suppose you agree that you want to shape the distant future for the better. Then there’s an axis you could think of as broad versus narrow strategies for pursuing that goal, where a narrow strategy might be betting on a specific problem or a specific kind of outcome. For example, preparing for a potential risk from AI: a lot of strategies for dealing with that would be very narrow and concrete strategies. The thing that I proposed earlier, of having a specific plan for how society should respond to that, is very much on the narrow end of the spectrum.
Then, on the broad end of the spectrum, there could be things that might be beneficial in many types of outcomes, or might be particularly responsive to potential unknown unknowns. Maybe a great example of that kind of thing would be trying to have people or institutions make better judgments. If you imagined a world where Tetlock had his way, and all the pundits and governments were making accountable, precise forecasts for everything, that has the potential to affect the quality of decision making on various levels, and could be valuable for a variety of outcomes. It would be a paradigmatic case of a broad type of intervention.
Robert Wiblin: The difference being that some approaches are very useful in one scenario and others are useful in many scenarios.
Nick Beckstead: Yeah, pretty much. Other examples of broad interventions: maybe you believe that if society has a faster overall rate of economic growth, then there’s a variety of good things you could hope would happen as a result of that. If our scientific institutions function better, or if information is more freely available, that’s all going to be in the broad category. Preparing for specific global catastrophic risks is all very much in the narrow category.
Robert Wiblin: Over time, you’ve shifted from working on broad interventions towards narrow interventions. What’s the reason for that?
Nick Beckstead: Well, let’s see. It’s not totally clear that I’ve switched so much in terms of where my attention is. Writing a dissertation about the importance of shaping the far future is, I guess, it’s a little bit of both, but maybe more on the targeted end of the spectrum. Working at FHI was more the targeted end of the spectrum. Promoting EA, I don’t know, it’s somewhere in between, insofar as EA might do some of each.
It’s true that I’ve had a shift in my thinking over time, becoming more and more in favor of the targeted end of the spectrum. Why is that exactly? I think over time, I’ve held some of these views for longer, and been in more arguments about them, and feel like pieces of it have held up maybe better than I might have expected at some points in the past. I think I have more of an understanding of some of the central issues related to AI. I’ve seen more of a debate play out on that.
I think there was a time when I was waiting for a secret argument that people more knowledgeable than me had that they weren’t saying, or that I wasn’t hearing publicly. I largely haven’t heard the secret argument that AI is not really as important as it seems in my inside-view model of the situation. I think that’s really been the main piece of it.
In terms of arguments that were already on my mind, a lot of the broad type stuff is just more popular in society, and less neglected for that reason. There’s a lot of people who, in some way or another, are interested in seeing faster economic growth. Everybody who has a relevant company that they want to see succeed is doing that. It’s just a much more popular lens in public policy making; although, I would like it to be a more popular one. Yeah, that’s a bit of the summary of it.
Robert Wiblin: The broad intervention seemed less neglected. The narrow interventions perhaps seemed more tractable than you thought 10 years ago or five years ago?
Nick Beckstead: Yeah. I think there’s some framework that says you should be making this decision on the basis of these tractability considerations and these neglectedness considerations. The more you have high confidence in a particular inside-view understanding of the situation, the more sense it makes to bet on the narrow side.
Somebody whose view of the world is that it’s all unknown unknowns in the future, and that we can’t really plan for stuff that might happen more than 5 or 10 years from now on the basis of technologies that don’t currently exist, should go more on the broad side. For someone who views the world more in a way that’s like, “Well, I don’t really know for sure, but I think there are some possibilities that can be reasonably anticipated and prepared for in a way that is effective,” I think there’s more to be said for the narrow end of the spectrum.
I guess, as I have seen debates play out, and learned more about things, I have come down harder on the side of we can reasonably anticipate and prepare for some of these outcomes.
Robert Wiblin: I’m pretty firmly on the narrow end of things in terms of my preferences for what problems to work on. One of the reasons is that tools you create that help to improve economic growth, or just general human productivity, not only help people who are doing good things, but also help people who are, either deliberately or unintentionally, doing things that are harmful. I think the fraction of human activity that is unintentionally harmful is potentially quite large, possibly even the majority. What do you make of that? There’s a lot that could be said here, but I just want to flag that issue. Just speeding everything up isn’t so good if, on average, things are not great.
Nick Beckstead: A number of things to say about that. I think there’s a reasonable debate to be had here. Some people might have a knee-jerk reaction of, “It’s definitely good to speed up the progress of society,” and some people might have a knee-jerk reaction of, “It’s definitely bad.” I think there are reasonable cases that you could make on both sides of the issue.
I have a few thoughts to offer on this. The obligatory first thought would be: putting aside factory farming, which is a big thing to put aside, I think there’s a very strong case to make, and Steven Pinker and others have made it, that it’s common sense in a way that I’d much rather be alive today than 200 years ago. The thing that has changed is largely progress in science and technology. We live in a different world, I think, more than anything else because of those things at this point.
So far, so good for that argument. From my perspective, as a long-term future type person, the main question is: if it goes faster or slower, how does that really affect where we end up in the very long run? That’s not a question that can be answered by historical experience. It’s inherently a speculative question. The way that I would want to analyze it is that there are some inside-viewish ways you could analyze it, and some rougher and more heuristic ways that you could analyze it.
I think that when society has a faster rate of economic growth, it tends to be more peaceful and more inclined to allow for social progress, especially when that involves some people making sacrifices in terms of their status or future prospects in order to make things more fair for others. There’s a nice book by Benjamin Friedman, I believe called The Moral Consequences of Economic Growth, that makes the case for parts of this view. I’m fairly sympathetic to it, although I don’t feel like I’m in a position to evaluate it at all super deeply, because it’s very broad-ranging and difficult.
Those are some of the thoughts on that. Overall, I think it probably is good for where society ends up if we have a faster rate of growth. Those are the heuristic considerations. There are also some specific considerations. I have a bit of an optimistic view, which is a very debatable view, but it’s a view I hold, about what happens if society succeeds in developing very powerful AI systems. I think it would solve a lot of the other potential risks to society’s future. In some ways, if that happens sooner, there’s an argument that it’s going to protect us from a number of other risks that we’re going to face.
I think there are arguments on both sides of this. It’s less clear, especially if you think you might have AI in the next couple of decades, which is not my main guess, but it’s a possibility that I think is important to consider. I mean, we’re going to be less prepared if these things happen sooner. I guess I’m inclined to say that the considerations on the other side of that ledger are more important, but people could disagree about which side wins out.
Robert Wiblin: Yeah. Just to be clear, I definitely agree with you and Pinker, and I guess all of the optimist folks, that the state of the world in a sense has gotten better over the last few hundred years, in that the welfare of people is better. For nonhuman animals it’s a bit less clear; it may well be worse, but again, setting that aside. I think what’s gotten worse, it seems to me, is that how the future, or how the next year, is going to be has become more and more variable than in the past. We were in a bad state, but a relatively stable state.
Today, the risk of a disaster that could throw us completely off track is quite high in my view, possibly at a peak. It’s as high as it’s been since maybe the Cuban missile crisis, things like that. There was just no way really to drive humans to extinction in the 1800s; whereas now there are so many obvious ways that civilisation could really be totally ruined, even if not everyone is killed. In that sense, a very important sense, the world has gotten worse. It’s a bit less clear whether we’d be in a better or worse situation there if the process of development had happened more quickly or more slowly. I guess you were laying out the considerations on either side there.
Nick Beckstead: Yeah. I think that’s the right framing. One way you could think about it is if you think of it like there’s some states of progress that society has to go through in order to reach some desirable end state, there will be some of these risks that you just inherently go through whenever you reach a certain level of progress. There’ll be some risks that you’re accruing year by year the longer you’re in one of these states.
For example, there’s some clock that maybe starts ticking once you have nuclear weapons, or you have enough nuclear weapons to have a devastating nuclear war. Maybe every year, you’re in that state, you’re suffering some risk of a war happening, and something terrible happening to society, or maybe at some point in the future, we’ll get into a state where we have very, very powerful bioweapons that a lot of people could deploy if they wanted to. Maybe every year, we’re in that state without a solution to it. We’re accruing some risks.
I think that there is a state where, if we get to it eventually, our annual risk on the clock is low. There’s some prima facie argument that if you’re going through this thing more quickly, then, in one way, you’ll be better off: you’ll be better off with respect to these risks that you’re accruing every year. I think the countervailing consideration is the step risks that are triggered when you reach certain points on this trajectory. If you’re going through it at a faster rate, there are questions about how that affects those risks.
Maybe there are certain things, like how prepared society is for a risk, which vary somewhat independently of the state of progress that we’re in. Maybe if we’re going more slowly through the stages of progress, then we have more time to adjust and get accustomed to the states that we’re in, or we can see something coming further in advance, and prepare for it, and have more time to prepare for it. I think that’s one kind of consideration.
Then, there are other considerations, like the ones I already mentioned, on the other side. Maybe, if we’re in a state of greater prosperity, people have less of a zero-sum mentality, are more willing to compromise, feel good about what’s happening, and are less likely to be in wars and stuff like that. That’s how I see the two sides of that coin. My guess is that we’re better off with the state of prosperity, which corresponds to moving through the stages of progress more quickly.
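To make that framing concrete, here is a minimal sketch, not from the episode and using entirely made-up numbers: each stage of progress carries an annual “state” risk while society sits in it, plus a one-off “step” risk when it transitions onward. Moving faster cuts the years of exposure to state risks, but may raise step risks if society is less prepared.

```python
# Toy model of "state risks" vs "step risks" along a trajectory of progress.
# All numbers are invented purely for illustration.

def survival_probability(stage_years, annual_state_risk, step_risk):
    """P(no catastrophe), given years spent in each stage, the annual risk while
    sitting in that stage, and the one-off risk of the transition out of it."""
    p = 1.0
    for years, annual, step in zip(stage_years, annual_state_risk, step_risk):
        p *= (1 - annual) ** years   # risk accrued each year you remain in the stage
        p *= (1 - step)              # risk triggered by reaching the next stage
    return p

# Two hypothetical stages, e.g. a "nuclear era" then a "powerful-biotech era".
annual_state_risk = [0.002, 0.005]

# Slow scenario: long exposure, but more time to prepare, so lower step risks.
p_slow = survival_probability([80, 60], annual_state_risk, step_risk=[0.02, 0.05])

# Fast scenario: less exposure, but less preparation, so higher step risks.
p_fast = survival_probability([40, 30], annual_state_risk, step_risk=[0.04, 0.10])

print(f"P(reach safe state | slow): {p_slow:.3f}")
print(f"P(reach safe state | fast): {p_fast:.3f}")
```

Under these invented numbers the faster path comes out ahead, matching Nick's tentative conclusion, but more pessimistic assumptions about the step risks flip the verdict, which is exactly where the disagreement lies.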
Robert Wiblin: We started out talking about broad interventions, then ended up talking a bit specifically about economic growth rates. It’s interesting that one broad intervention that I think we would both see as fairly reliably good is moral improvement, because while scientific, technological, or economic improvements can be misused or have negative side effects, it’s a bit harder to see how people having good moral values and concern for others could result in negative outcomes. It might turn out not to be that valuable, but it seems like it’s either neutral or good.
Nick Beckstead: Yeah, I agree with that. I think this can also be said of some of the other things, like greater wisdom, or the ability to make predictions, or having more functional bureaucracies, or things like that.
Robert Wiblin: Improvements in forecasting are similar. I guess it’s possible to see how that could be used in an adversarial way, but it seems like most countries would be happy if other countries also had good foresight into the effects of their actions.
Nick Beckstead: Right. Perhaps it’s worth discussing a little bit Tyler Cowen’s writing on this topic in his recent online book Stubborn Attachments, which has overlapping themes with many things we’ve discussed here today, and makes a lot of claims that I have a lot of interest in and sympathy with. I think we would agree that what’s been so great for humans over time has been tied up in a big way with economic growth. If you’re counting the score so far, you’d be really excited about things that enhance the rate of economic growth.
He has very similar views about the role of economic discounting, and presents a similar view in terms of saying, “Well, on many possible ways of aggregating good across people, when you calculate the long-term consequences of actions that benefit the distant future, they will dwarf the short-term consequences.” He frames his bottom-line view about what’s important as maximising the sustainable rate of economic growth. I guess I would have a couple of differences of opinion with that as framed.
One, I tend to view the structure of possible progress as looking more like an S-curve than the exponential that has been experienced by humanity so far.
Robert Wiblin: Meaning it goes up and then it levels off?
Nick Beckstead: Yes. I think on that view, the consequence of having a faster rate of economic growth is not that we’re going to be in a much better state, 20 million years from now, if we’ve had a faster rate of economic growth. Instead, it’s more like, we’re going to get to that really nice state we could be in sooner.
My view, and I think Bostrom explains this pretty well in some of his papers on astronomical waste, and I argue for it in my dissertation, is that it’s much more important where we end up than how quickly we get there. How quickly we get there mostly matters via the considerations that we were discussing earlier, like how much it affects the probability that we actually, eventually, end up in a great place or not. Anyway, his views on this are an interesting foil for a lot of this conversation. I guess that would be one of my disagreements.
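As a rough illustration of that contrast, again not from the episode and with made-up parameters: on an exponential trajectory, a higher growth rate compounds into a vastly larger far-future value, whereas on an S-curve (logistic) trajectory both rates end up at essentially the same ceiling, and the rate mostly determines how soon you get near it.

```python
# Toy comparison of exponential vs. S-curve (logistic) trajectories of progress.
# Parameters are invented purely to illustrate the structural point.
import math

def exponential(t, start=1.0, rate=0.03):
    return start * math.exp(rate * t)

def logistic(t, ceiling=100.0, start=1.0, rate=0.03):
    # Grows roughly exponentially early on, then levels off at the ceiling.
    return ceiling / (1 + (ceiling / start - 1) * math.exp(-rate * t))

far_future = 1000  # a stand-in for "where we end up" after a very long time
for rate in (0.03, 0.06):  # baseline growth rate vs. doubled growth rate
    print(f"rate={rate}: exponential value at t={far_future}: {exponential(far_future, rate=rate):.3g}")
    print(f"rate={rate}: logistic value at t={far_future}:    {logistic(far_future, rate=rate):.3g}")
```

The exponential values differ by many orders of magnitude between the two rates, while the logistic values are indistinguishable at the ceiling; on the S-curve view, the growth rate changes the arrival time, not the destination.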
Then, the other one: in some ways, he has a similar view to me, if “sustainable” were interpreted very broadly. For me, the use of the word “sustainable” isn’t exactly focused on environmentalist concerns, although that’s part of the picture. It’s more about: are we going to manage to not destroy ourselves, and manage all the important technological transitions ahead of us well?
Robert Wiblin: Yeah. I’ll put up a link to that book by Tyler Cowen, and a one-hour podcast where he explains his views in brief. I think over the last 10 years, I’ve literally read something by Tyler Cowen every day, or at least once a day on average, so I’m quite familiar with Tyler’s views by this point. It’s such an unusual book. It’s such an uncanny experience reading it, because I find that he and I agree on all kinds of weird things where almost no one agrees with me-
Nick Beckstead: Totally agree.
Robert Wiblin: … and I guess you as well.
Nick Beckstead: Totally agree.
Robert Wiblin: Then, there’s this twist where he arrives at a completely different conclusion that seems quite obviously wrong-headed to me, and I’m not sure what to make of it. I think maybe it would be interesting to have him on the show where both of us could interrogate him, and try to understand why he’s gone off on a different track at the last second.
Nick Beckstead: Yeah, it would be interesting to do that. I imagine that his central disagreement might be this broad versus targeted thing. I think he might think we’re not going to get any traction on speculation about which specific risks to prepare for, and would have a view more like, “We’ll try to have functional institutions in our society. That’s the best hope we can have for affecting where things end up.” I don’t know what he would say about the argument about the S-curve versus the exponential, and what that implies.
Robert Wiblin: It seems an odd thing to neglect because that seems the obvious response, and it’s a response that was made decades ago. Yeah, I guess, I’ll send out an email with maybe some of these ideas, and see if we can get him on the show.
I was going to mention and discuss a third blog post that you wrote, where you tried to analyze quantitatively whether we want faster technological progress or slower, but we’ve already covered that. There’s a tool on that post, which I’ll link to, that allows you to stick in your own estimates for a couple of different parameters that are relevant to it, and then get your own verdict from the calculator as to whether you think faster or slower economic growth is better for the world.
Now, before we move on to the section on concrete career advice, I just want to see if there were any other scientific research areas like malaria eradication or meat alternative research that you wanted to talk about in more detail?
Nick Beckstead: I think biosecurity. There’s a great role for people in the Effective Altruist Community to get involved with that. And although I’m less optimistic about clean meat or cultured meat, I am really optimistic about plant-based meat alternatives. I don’t think I know everything about cultured meat, so if you disagree with my analysis, it might be a plausible area to bet on.
Robert Wiblin: What about malaria eradication? I think, Open Philanthropy made a pretty significant grant to a group that’s doing research into whether it would be possible to change mosquitoes in such a way that they would no longer carry malaria. Is that something you’re excited about? How do you think it might compare to the cost effectiveness of distributing bed nets?
Nick Beckstead: It’s really hard to estimate what the cost effectiveness of that is compared to distributing bed nets. We did a calculation on our grant page on Target Malaria. It suggests that the cost per life saved is going to be more favorable for working on gene drives specifically than distributing bed nets. I know less about exactly where people could fit into that operation.
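For a sense of how a comparison like that is typically structured, here is a purely illustrative sketch; every number in it is invented for the example and none of them come from Open Phil's actual grant page. The basic move is to divide expected cost by expected lives saved, discounting by the probability the technology works at all.

```python
# Hypothetical cost-per-life-saved comparison; all figures below are invented.
def cost_per_life_saved(total_cost, lives_saved_if_success, p_success):
    """Expected cost per life saved, discounting by the chance the approach works."""
    return total_cost / (lives_saved_if_success * p_success)

# Bed nets: well-understood, near-certain to work, modest cost per life.
bed_nets = cost_per_life_saved(total_cost=3_000, lives_saved_if_success=1, p_success=1.0)

# Gene drives: large upfront R&D cost, uncertain success, huge payoff if it works.
gene_drive = cost_per_life_saved(total_cost=100_000_000,
                                 lives_saved_if_success=500_000,
                                 p_success=0.2)

print(f"Bed nets:    ~${bed_nets:,.0f} per life saved")
print(f"Gene drives: ~${gene_drive:,.0f} per life saved")
```

With these invented inputs the gene drive route comes out cheaper per life saved, which is the shape of the conclusion Nick describes, but the answer is obviously sensitive to the success probability and scale you assume.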
I think, for somebody who feels like they have more of a responsibility to the people who are alive today, and who are really unfortunate in suffering from things like malaria, getting involved with that problem would be a really good bet. It would also be a compelling idea for somebody who’s skeptical of this whole conversation, who finds it a bit out there.
Robert Wiblin: I think they might have stopped listening by this point.
Nick Beckstead: Yeah. I think there’s going to be a lot of work to be done, both on the policy side and on the scientific side, to bring the technology to fruition, test it, confirm that it is going to be safe, and figure out what the regulatory pathway is going to be. I do think there’s a question of how long that’s going to take. If somebody is just starting their undergraduate degree right now or something, it might be too late for them to contribute to that.
Robert Wiblin: The problem might be gone by the time you studied long enough to contribute.
Nick Beckstead: Yeah, yeah.
Robert Wiblin: For the last half hour of the episode, we usually like to get as concrete as we can for listeners thinking about specific things that they could potentially do now in order to have a larger impact with their career: jobs they could apply for, PhDs they could study, what they should major in as an undergrad, where they should volunteer, and how they can make connections.
In your case, Nick, we could talk about how they could potentially do work that’s similar to yours, doing global priorities research and making grants on the basis of that. And also, based on your experience trying to make grants, where would you love to see more people who could take grants and use the money from Open Phil to do really good things? Maybe let’s take them in order. If someone wanted to work at Open Phil or a similar organisation, what should they be doing when they’re an undergrad, or what should they do a postgrad in?
Nick Beckstead: If somebody wanted to work at Open Phil, I think there’s not a very obvious degree for them to be studying. It’s more that we’re looking for people who are very interested in what Open Phil is doing, have good judgment and calibration, and are generally sharp. I would encourage people who are interested in that to … A lot of the people who now work at Open Phil are generalist types rather than specialist Program Officers. They’ve come through working at GiveWell, or working with GiveWell or Open Phil on an internship.
I think I would encourage those people to just apply for one of those internships maybe their junior year of college, and see how it goes, and see if they are a fit for the culture. In terms of what they study, I think it’s not super important. I think they should just study a serious discipline that they’re particularly interested in.
Robert Wiblin: Is it worth doing postgraduate study before you apply at Open Philanthropy, or would you take the right person straight out of undergrad?
Nick Beckstead: We would take the right person straight out.
Robert Wiblin: Okay, interesting. If someone is not quite yet ready to apply for an internship, is there any way that they can meet the folks at Open Philanthropy, at a conference or socially?
Nick Beckstead: I mean, often, folks at Open Phil will go to things like EA Global, at least some of them, so there’s some opportunity to meet them there. If you aren’t ready for one of these internships, or maybe it didn’t work out, but you still wanted to go and work at Open Phil, I guess your options would be to stick around the EA community, try to make some valuable contributions, and show that you have good judgment. I think over time, it’s possible Open Phil might look back and be like, “Wow. That person actually has done some really useful things. Maybe we should reconsider,” or maybe, “Our needs have changed. We’ve become a larger organisation. We should hire that person at this later stage.”
The other option would be to go and get some skills that might make you a valuable asset as a specialist in one of the areas that we’re particularly interested in. Maybe if you got a deep background in biosecurity, maybe there would be a role for you in the future on our team working on biosecurity.
Robert Wiblin: We’ll talk about those in just a minute. First, I wanted to ask what are the … I mean, Open Philanthropy is only about 10 or 15 people. It’s a small organisation to plan your career around.
Nick Beckstead: It’s in the 20s now.
Robert Wiblin: It’s in the 20s now. It’s larger.
Nick Beckstead: Still, your point stands.
Robert Wiblin: It’s still quite a small organisation to plan your career around. What are some other similar foundations or organisations where someone who’s a good fit for you would also be a good fit for them?
Nick Beckstead: Yeah, that’s a good question. There’s a saying about foundations: if you’ve seen one foundation, you’ve seen one foundation. I think that’s basically true, in the sense that it’s hard to generalise across them. I think the natural ecosystem would be the other organisations in the EA community. If you are interested in this work and wanted to gain experience with it, they would be the natural places to consider.
Robert Wiblin: Let’s talk now about the second option you mentioned, which is: what can people do in Open Phil’s priority problem areas? What kinds of young people are you most excited to find out about? What are they studying? What are they planning to do with their careers? What path are they going to take to get there? Feel free to describe as many archetypes as you like of people you’d be really excited to discover.
Nick Beckstead: Somebody who’s really interested in deep learning, very quantitatively oriented, cares about AI safety, and is just generally crushing it in their study of that, I think that’s an archetype that’s really useful. I’d encourage that person to apply for the Google Brain Residency Program as a way of learning more about deep learning and getting into the field. I think it could go more quickly than going through a PhD. It’s a quick way into the industry.
I think otherwise applying for a PhD programme in Computer Science focused on ML would be a great natural path. Working with one of the labs that Open Phil funds to do work on AI safety might be a natural thing to do also. They could potentially start working on AI safety issues right away, and specialise in that in their career. That could be really valuable. Yeah, that’s my answer for technical AI safety in that particular archetype.
The other category we mentioned was AI strategy work. I don’t think there’s a super natural field-specific archetype there. The person who does that doesn’t need to be quantitatively oriented, but they need to be very sharp, they need to have good judgment, and they need to be interested in thinking about how institutions and politics work.
I think the thing to do would be to take a stab at some of the questions that have been highlighted on the 80,000 Hours post describing this area, and to seek out conversations with people in the area, especially Luke Muehlhauser who is focusing on trying to find the right people to work in the space.
In biosecurity, I’m not sure exactly what the right programmes to apply to are. I think there are two paths: one which is more on the policy side, and one which is more about learning the science. Getting a PhD in some area of biology, perhaps focused on immunology or vaccine R&D, would be a natural place to go, or doing a fellowship at one of the places that do work on biosecurity, perhaps the Center for Health Security that Open Phil funds.
Another category would be jobs in the Effective Altruism Community. I guess we already mentioned that. I don’t think there’s an especially natural background for that, other than majoring in a serious discipline and studying it seriously, doing well, thinking about the issues that the Effective Altruist Community cares about, getting to know it, and debating it in person. I think that would be my advice for that category.
I would love to see more people getting jobs in the US government that could be relevant to AI and to other cause areas. I don’t know what the most relevant parts of government are to work for. The archetype of success, I think, is the career of Jason Matheny. If you’d like to try to reverse engineer that, you’d be on the right track, but I think somebody needs to think through in a more detailed way how to do that.
Robert Wiblin: Jason said that he’s happy to come on the show. As soon as we can find a time, we’ll make that happen, and people can see about reverse engineering his life. What about people who are interested in animal welfare?
Nick Beckstead: I think animal welfare is super important. If somebody wanted to make a difference in that area, my top tips might be: if you’re more of a STEMy type person, then I would advise you to get a background in biology, and try to find a place where you can work in the animal product alternative space. If you are not exactly a STEMy type person, then you might be interested in advocacy. Then, I would advise you to learn more about that space, and consider working for some of the grantees that Open Phil funds, or ask Lewis Bollard what to do.
Robert Wiblin: I have a three-hour-long episode with Lewis Bollard. There are plenty of suggestions in there if you’re very interested in that. In fact, we have episodes on basically all of the topics that we’ve just discussed.
Nick Beckstead: Great.
Robert Wiblin: You talked about this for a few minutes; in most of these cases, we have hour-long discussions where all of the options are fleshed out a bunch more.
Nick Beckstead: Great.
Robert Wiblin: Are there any other options that you wanted to highlight?
Nick Beckstead: I think those could be valuable too. I think, they’re less close to my experience and knowledge. In some ways, I might be more excited about a lot of the other ones I listed, but I’m not very firm on that.
Robert Wiblin: Your job seems pretty attractive, but every job has its negatives. What’s the worst thing about the path that you’ve taken in your career?
Nick Beckstead: My job used to be more focused on individual research projects that we’d spend a lot of time on and bring from start to finish. There’s something satisfying and intellectually very interesting about that. Also, if you’re looking at the history of home runs that we’ve had as a species or something, I think more of them come from somebody who’s out in the trenches coming up with a new idea or creating a new organisation, and less from the behind-the-scenes funder type person who’s making a lot of good things happen.
Although there are big wins in the history of philanthropy, and funders do get a slice of a lot of those other wins. To me, there’s something to be said for that. Maybe that would be the biggest question or uncertainty that I have about whether I’m doing the best thing.
Robert Wiblin: Yeah, Open Phil did a report on the history of philanthropy and found that there were some successes, but maybe not as much as you might hope.
Nick Beckstead: Yeah. I mean, I think there are some successes, and they’re important successes. But other people have put together compendiums of big achievements of humanity in commerce, or politics, or science, or things like that, and when I read through those, the role of philanthropists seems to be fairly limited.
Robert Wiblin: Yeah. Interesting phenomenon. Well, this has been a super fun discussion. We’ve covered lots of things that are of personal interest to us. I think someone who’s still with us this far into the podcast should probably think about whether they could see themselves working at Open Phil; we have a lot of overlapping interests. Hopefully, we can get Tyler Cowen on the show some time, and see if we can figure out what’s going on with the disagreement that we three have.
Nick Beckstead: That sounds fun.
Robert Wiblin: Excellent. Okay. Well, yeah, I look forward to doing another episode with you in the future.
Nick Beckstead: Yeah, it was fun. Pleasure to be here.
Robert Wiblin: Yeah. My guest today has been Nick Beckstead. Thanks for coming on the show.
Nick Beckstead: Thank you.
Robert Wiblin: I hope you enjoyed that episode. If you did enjoy it, consider sharing it with your friends on social media so they can find out about the show.
If you want to work on any of the approaches that Nick described as high priorities, including tackling existential risks, growing the effective altruist movement, or meat-substitute research, then you should apply for free one-on-one coaching from 80,000 Hours.
There’s a link to the application process in the show notes and the associated blog post, where you’ll also get a full transcript and links to learn more.
Thanks so much, talk to you next week.
About the show
The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.
The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].