Stefan Schubert on why it’s a bad idea to break the rules, even if it’s for a good cause
By Robert Wiblin and Keiran Harris · Published March 20th, 2018
…social norms have to be evaluated on the basis of their outcomes, like everything else. And that might prompt people to think that they should break norms and rules fairly frequently. But we wanted to push against that…
Stefan Schubert
How honest should we be? How helpful? How friendly? If our society claims to value honesty, for instance, but in reality accepts an awful lot of lying – should we go along with those lax standards? Or, should we attempt to set a new norm for ourselves?
Dr Stefan Schubert, a researcher at the Social Behaviour and Ethics Lab at Oxford University, has been modelling this in the context of the effective altruism community. He thinks people trying to improve the world should hold themselves to very high standards of integrity, because their minor sins can impose major costs on the thousands of others who share their goals.
In addition, when a norm is uniquely important to our situation, we should be willing to question society and come up with something different and hopefully better.
But in other cases, we can be better off sticking with whatever our culture expects: it saves time, helps us avoid mistakes, and ensures others can predict our behaviour.
In this interview Stefan offers a range of views on the projects and culture that make up ‘effective altruism’ – including where it’s going right and where it’s going wrong.
Stefan did his PhD in formal epistemology, before moving on to a postdoc in political rationality at the London School of Economics, while working on advocacy projects to improve truthfulness among politicians. At the time the interview was recorded Stefan was a researcher at the Centre for Effective Altruism in Oxford.
We also discuss:
- Should we trust our own judgement more than others’?
- How hard is it to improve political discourse?
- What should we make of well-respected academics writing articles that seem to be completely misinformed?
- How is effective altruism (EA) changing? What might it be doing wrong?
- How has Stefan’s view of EA changed?
- Should EA get more involved in politics, or steer clear of it? Would it be a bad idea for a talented graduate to get involved in party politics?
- How much should we cooperate with those with whom we have disagreements?
- What good reasons are there to be inconsiderate?
- Should effective altruism potentially focus on a narrower range of problems?
The 80,000 Hours podcast is produced by Keiran Harris.
Highlights
So I think it is really important to develop expertise in order to get influence. The effective altruist concept itself is not enough to get people to listen to you. You also need to demonstrate expertise in specific areas. I think we’re really good at that, in existential risk in particular. And I think that’s really crucial in order to get people to listen to us.
I think there’s a strong focus on quality now. EA is a very intellectual movement, or community. And I think in intellectual matters, the very best are disproportionately impactful. Someone like Nick Bostrom is many orders of magnitude more important than an average professor.
People who tend to be reasonable generally – there’s something that happens when politics enters the picture. And I’m somewhat worried about this. I was at some point a bit worried that effective altruism would develop into a fairly traditional progressive social movement, with associated poor epistemics. I think that risk mostly seems to have passed now. But I do think it’s important to emphasize.
And I think in general, people are sort of naturally tribal, or tribalistic. They sort of want to side with their own tribe, and they want to have other tribes to point the finger at.
Cause-neutrality is just this idea that you should be willing to consider new causes. Because, unless you’re lucky, you might have chosen intuitively a cause which is not the most effective. And it might actually be many times less effective than the most impactful cause if it’s the case that the distribution of impact across causes is very uneven.
So, I think one misconception is that people think that cause-neutrality entails that we should work on this broad range of different causes. So, cause-neutrality sets EA apart from other movements. Other movements like the feminist movement or the environmentalist movement arguably aren’t cause-neutral. But another thing which sets EA apart is that it’s working on a number of different causes. So in the paper we call this cause divergence. But it doesn’t actually follow from cause-neutrality that we should be cause-divergent. Because it could very well be that we should invest all our resources in one cause — the very best cause.
Articles, books, and other media discussed in the show
- Considering Considerateness: Why communities of do-gooders should be exceptionally considerate
- In defence of epistemic modesty
- Hard-to-reverse decisions destroy option value
- Why you should focus more on talent gaps, not funding gaps
- LessWrong
- Stefan Schubert and ClearerThinking’s Fact-Checking 2.0
- Political bias test by Stefan and Spencer Greenberg
- Youth Movements, by Robin Hanson
- The Centre for Effective Altruism
- The Elephant in the Brain: Hidden Motives in Everyday Life
- Charity Science Health
- Schumpeter’s concept of creative destruction
- Warren Buffett: ‘Credit worthiness is like virginity’
- Understanding cause-neutrality
- Existential Risk Prevention as Global Priority
Transcript
Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, the show about the world’s most pressing problems and how you can use your career to solve them. I’m Rob Wiblin, Director of Research at 80,000 Hours.
Today’s episode focuses on the projects and culture of professional effective altruism – what’s good about them and how they could be better.
If you haven’t heard of effective altruism, you may well want to listen to episode 17 with Will MacAskill, or episode 21 with Holden Karnofsky to help set the scene, as we don’t stop to explain what it is.
This interview was actually recorded at EA Global London four months ago. In the meantime Stefan has moved on from being a researcher at the Centre for Effective Altruism to a researcher at the Social Behaviour and Ethics Lab, a moral psychology research lab at Oxford University.
And just a reminder that if you enjoy this discussion, you can join similar conversations by going to an EA Global event yourself. The next one is in San Francisco at the start of June. You can learn more and apply at eaglobal.org.
Without further ado, I bring you Stefan Schubert.
Robert Wiblin: Today I’m speaking with Stefan Schubert. Stefan is a researcher at the Centre for Effective Altruism, which is the larger organization that 80,000 Hours is a part of. Prior to joining CEA, he was a postdoc in philosophy at the London School of Economics, during which time, he also did a range of outreach to promote rationality in politics, a noble if difficult goal in the modern age. Thanks for coming on the podcast, Stefan.
Stefan Schubert: Thank you, Rob. I’m very happy to be here.
Robert Wiblin: Let’s talk about epistemic modesty. What’s the issue there?
Stefan Schubert: I guess one theme is just the general realization that humans are naturally overconfident, and that we need to compensate for that. It’s something you sort of say to yourself over and over again. But to really act in that humble and modest way – I certainly don’t think that I do that myself. In the moment, you might think, “Well, I actually have good evidence for this.” But then you look back at your own decisions and realize, “Hey, I actually acted in a very overconfident way.”
Robert Wiblin: I guess the modesty aspect is also respecting the opinions of other people, and realizing even if you feel very confident-
Stefan Schubert: Yeah, yeah.
Robert Wiblin: That you’re right, that doesn’t necessarily mean that you are.
Stefan Schubert: This is a very topical subject right now. There has been this discussion online, between Greg Lewis and others, about how modest one should actually be.
I broadly adopt Greg’s approach there, that one should defer to others. However, it’s one thing to say that – I don’t think I actually live up to it. I was having this discussion with Greg about what it would actually look like if I really acted on this. I think I would behave quite differently.
Robert Wiblin: It feels a bit like the epistemic equivalent to becoming a monk. It’s eschewing all of your own views.
Stefan Schubert: Right, yeah.
Robert Wiblin: And just like, adopting a completely different lifestyle.
Stefan Schubert: I mean, I guess there was this nice distinction that came up in that piece, between your private signal – or ‘impressions’, as someone proposed calling them – which is what you feel you have evidence for. And then there are your all-things-considered beliefs, which you hold after you have deferred to others. Perhaps we should call those our beliefs, and the first thing our impressions.
And that we should systematically use these two terms in discussions. “My impression is X, but my belief is Y.” And in that way, if you do that, then you don’t have to become a monk, as it were. Because you can actually tell people your impressions. But then when you actually act, well, then you do have to be a monk, as it were.
Robert Wiblin: I’ll put up links to the various blog posts that have been written about this recently, representing different views of how modest one ought to be, and how much one ought to defer to others, and how much one should trust one’s own judgment. So people can have a look at that, and decide for themselves.
Stefan Schubert: Yeah, you should. I was really happy with this discussion, because it was a really high-level intellectual discussion of the kind we want to see more of in EA, I think.
Robert Wiblin: So you mentioned that people have some misconceptions about what people at the Centre for Effective Altruism believe? What are some of those?
Stefan Schubert: Yeah, I think this long-term focus is something where people haven’t really grasped the extent to which we think it’s a priority. Another thing is donations versus direct work. Initially, EA was heavily associated with donations. Now the funding situation has become much better, and Ben Todd, the CEO of 80,000 Hours, wrote this great blog post two years ago already on how we should focus on talent gaps instead of funding gaps. He also predicted that the situation would become even more lopsided going forward. Now we can see that he was right. But it still seems that this hasn’t really gotten across. So we should de-emphasize donations. We should also de-emphasize earning to give.
Robert Wiblin: That’s something that 80,000 Hours has been writing about a bit. Because we have basically the same issue.
And it’s an interesting dynamic, that it can take years for your initial message to get across. And then if you have a change of mind, or the situation changes, it takes years to communicate that things are different now.
Stefan Schubert: Yeah, I was thinking about why it takes so much time. You know, if the British Labour Party changes its mind, which it did with Tony Blair and New Labour, it doesn’t take that long. But I guess we don’t have quite the same flow of information. People don’t write about EA in the daily media. So it takes time.
Robert Wiblin: I suppose the thing is, we’re working on this all the time. So we know how people think, and how their views change. But if you’re only paying a bit of attention, then if you’re not reading every day about what our views are, then of course you don’t keep track of it like that.
Stefan Schubert: Yeah.
Robert Wiblin: So before you were working at CEA, what exactly were you doing? Doing this work on rationality in politics, and you’re at the London School of Economics? How was that?
Stefan Schubert: I did my PhD in Sweden in formal epistemology, in Lund, Sweden. Then I came to London School of Economics to continue these studies. And then I found out about EA, actually via LessWrong in 2014. And I quite rapidly got into it more and more. And I went to EA Summit, I think it was called then, in the summer of 2014. And then, my first thought was, “Hey this is a great concept.” But I hadn’t really grasped the extent to which people who were involved had not only a general, broad concept of doing good effectively, but also lots of specific ideas about how to do good.
So instead, I went like, “Well, given that there is this concept, and it seems good. What can I do, with my experiences and my special competences?” And then I thought that, well, on the one hand I have this training in rationality and epistemology. And on the other hand I have this interest in politics. I also studied political science. And I was always interested in “What would it be like if politicians were actually truthful in election debates, and said relevant things?”
Robert Wiblin: You’re a dreamer, Stefan.
Stefan Schubert: Yeah, people have said that.
So then I started this blog in Swedish on something that I call argument checking. You know, there’s fact checking. But then I went, “Well, there are so many other ways you can deceive people besides outright lying.” So that was fairly fun, in a way. I had this South African friend at LSE whom I told about this – that I was pointing out fallacies which people made. And she was like, “That suits you perfectly. You’re so judge-y.” And unfortunately there’s something to that.
So that was one project I had. And then there was also, I was impressed by these groups working on evidence-based policy in the US and the UK. And I went, “We don’t have anything like this in Sweden.” So I started a similar network for evidence-based policy.
And then I also created this political bias test at Clearer Thinking. I think you had Spencer Greenberg from Clearer Thinking on your show before. And we got that published in Vox. Spencer is excellent at getting his stuff out there.
So, yeah. All of this was lots of fun. And I got some traction. But when there were these positions at CEA in the fall of 2015, I decided to apply for them. And that was when my postdoc was running up.
So then I started working with Global Priorities Project, on policy because I had that policy background.
Robert Wiblin: What kinds of things did you try to do? I remember you had fact checking, this live fact checking on-
Stefan Schubert: Actually, we might have called it fact checking at some point. But the name I wanted to use was argument checking. So in addition to facts, we also checked arguments.
Robert Wiblin: Did you get many people watching your live argument checking?
Stefan Schubert: Yeah, in Sweden I got some traction. I had probably hoped for more people to read about it. But on the plus side, the very top people showed at least some interest in it – less interest than I had hoped, but at least you reach the most influential people.
Robert Wiblin: I guess my doubt about this strategy would be, obviously you can fact check politicians, you can argument check them. But how much do people care? How much do voters really care? And even if they were to read this site, how much would it change their mind about anything?
Stefan Schubert: That’s fair. I think one approach one might take, following up on this experience, would be to double down on the very top people who write opinion pieces for newspapers – they were at least interested – and try to reach them. I think something people assume is: okay, there are the tabloids, and everyone agrees that what they’re saying is generally not that good. But then you go to the highbrow papers, and everything there would actually make sense.
So that is what I did. I went for the Swedish equivalent of somewhere between the Guardian and the Telegraph – a decently well-respected paper. And even there, you can point out these glaring fallacies if you dig deeper.
Robert Wiblin: You mean, the journalists are just messing up.
Stefan Schubert: Yeah, or here it was often outside writers, like politicians or civil servants. I think ideally you should get people who are a bit more influential and more well-respected to realize how careful you actually have to be in order to really get to the truth.
Just to take one subject that effective altruists are very interested in: all the writings about AI, where you get people like professors writing articles which are really very poor on this extremely important subject. It’s just outrageous if you think about it.
Robert Wiblin: Yeah, when I read those articles, I imagine we’re referring to similar things, I’m just astonished. And I don’t know how to react. Because I read it, and I could just see egregious errors, egregious misunderstandings. But then, we’ve got this modesty issue, that we’re bringing up before. These are well-respected people. At least in their fields in kind of adjacent areas. And then, I’m thinking, “Am I the crazy one?” Do they read what I write, and they have the same reaction?
Stefan Schubert: I don’t feel that. So I probably reveal my immodesty.
Of course, you should be modest if people show some signs of reasonableness. Obviously, if someone is arguing for a position where your prior that it’s true is very low, but they’re a reasonable person and they’re arguing for it well, then you should update. But if they’re arguing in a way which is very emotive – if they’re not really addressing the positions that we’re holding – then I don’t think modesty is the right approach.
Robert Wiblin: I guess it does go to show how difficult being modest is when the rubber really hits the road, and you’re just sure about something that someone you respect disagrees with.
But I agree. There is a real red flag when people don’t seem to be actually engaging with the substance of the issues, which happens surprisingly often. They’ll write something which just suggests, “I just don’t like the tone”, or “I don’t like this topic”, or “This whole thing makes me kind of mad”, but they can’t explain why exactly.
Stefan Schubert: Yeah.
Robert Wiblin: So you’ve been involved in effective altruism for a couple of years now. How has your view changed as you’ve gotten to know it better? And I guess, how do you think effective altruism is actually changing itself?
Stefan Schubert: Right, thanks.
So let me focus on the second part first. I think Robin Hanson called EA a youth movement in 2015, I think …
Robert Wiblin: I’ll put a link up to it, to that web post where he characterizes us that way.
Stefan Schubert: That’s a very Hansonian way of characterizing something, you might think. And he said we were all about signalling and so on which I’m sure is true.
He has a great book, by the way, coming out on this subject. I much recommend reading it.
So, anyways, he calls us a youth movement. And I think there was a lot to that. I still think that there’s something to that. But I think that effective altruism is gradually growing up. And I think there’s like this big push now, towards expertise and specialization. Obviously you, at 80,000 Hours are coaching lots of people to go into specialist fields, like AI safety, and AI strategy and biosecurity.
We share offices with Future of Humanity Institute, which is a great institution. And I think during the time I’ve been at CEA, for nearly two years now, I’ve seen FHI changing quite a bit, as they get more subject matter experts. The proportion of these big picture philosophy thinkers is decreasing.
So I think it is really important to develop expertise in order to get influence as well. The effective altruist concept itself is not enough to get people to listen to you. You also need to demonstrate expertise in specific areas. I think we’re really good at that, in existential risk in particular. And I think that’s really crucial in order to get people to listen to us.
And relatedly, I think there’s a strong focus on quality now. EA is a very intellectual movement, or community. And I think in intellectual matters, the very best are disproportionately impactful. Someone like Nick Bostrom is many orders of magnitude more important than an average professor.
And that leads us to focus on them, and that’s absolutely right. It follows from a general focus on impact.
Robert Wiblin: So that’s one way that EA seems to be getting better. What are some ways that you think we might be going wrong, or could even be getting worse?
Stefan Schubert: I’m not sure there’s an aspect where we’re getting worse. I think many things are actually going in the right direction, but you can discuss whether they’re changing fast enough.
So, one thing, it’s often been pointed out that there are clear analogies between effective altruism and profit-seeking companies. So, in market economies, companies try to maximize effectiveness, whereas charities typically don’t do that. But EA tries to apply this sort of effectiveness mindset to charities. Therefore, I think we can borrow many insights which economists have made from the market economy and sort of use them to understand our own activities.
And one of them is Schumpeter’s idea of creative destruction. How in capitalist societies, someone might have invested a lot in acquiring a certain skill, and then you have a new technology, and that skill is not really useful anymore. And when capitalism developed, many people were complaining about that. There are all these craftsmen who have these skills, and now they’re not useful anymore. And they were sort of romanticizing this.
And similarly, I think in EA, you should expect a lot of creative destruction. Especially now, we’re still at the beginning. We’re finding new ways of doing good, and new dimensions along which you can do good better. So, we should then expect that we need to throw out old ways of doing things. I worry a bit that we’re not doing that fast enough.
I mean, of course, one difference that you have, still, is that in a market economy, you get very clear feedback. If no one wants to buy your product, you go bankrupt. Whereas in EA, someone might not have much of an impact, but it’s not as visible. So you might go on doing things in the old way.
Robert Wiblin: They can amble on, with donors unaware that their resources aren’t being used very well.
Stefan Schubert: Yeah.
Robert Wiblin: So, you worry that we’re not shutting down enough projects, basically.
Stefan Schubert: Yeah.
Robert Wiblin: And then, we could of course take those resources and use them some other way, use that money or those people.
Stefan Schubert: Yeah. I think we at CEA, we’ve become a bit better at this. We have shut down some projects recently. But I’m sure we can improve. Just as we all can improve.
And sometimes it almost feels like some people get disappointed: “Oh, I liked that thing. Why aren’t we doing that thing anymore?” But that’s just what you should expect if you’re really having an impact. If we’re never changing anything fundamentally, then we’re doing something wrong.
There should be a lot of creative destruction.
Robert Wiblin: Yeah, a lot of people won’t remember this, but 80,000 Hours had this whole membership process at the beginning, and also a pledge where people would say I’ll do a whole lot of good with their career. And we ended up cutting that, because it was just taking a huge amount of time, and we didn’t think it was accomplishing very much.
I suppose, we haven’t gone the full radical way, and just shut down 80,000 Hours because we don’t think it’s useful, but we have changed our focus a bit over the years.
Stefan Schubert: Yeah, and other projects have been shut down.
Robert Wiblin: Yeah. Could you give a few examples of things you’ve maybe changed your mind about since you first got involved in effective altruism in 2014?
Stefan Schubert: Yeah, I think one thing is this underestimation of the importance of the very best, which I guess I share with many others. The very best – not just in terms of competence, but also in terms of willingness to change and to really go for something that’s really important – matter disproportionately. I think that is really a key insight.
I also think that to some extent, I was being over-optimistic about some causes, other than existential risk, like institutional change, and broad policy change. I probably thought we would have accomplished more in other cause areas by now. And we haven’t really. And potentially, that’s because we invested so much in existential risk.
Robert Wiblin: So you mean you used to be more optimistic about institutional reform.
Stefan Schubert: Yeah.
Robert Wiblin: Do you think that’s maybe because the problem was harder than we thought? Or just that we haven’t focused on it?
Stefan Schubert: Yeah, that’s the question that I’m asking myself. But probably it’s to a significant extent that it’s harder than one thinks.
I also think that I committed this typical mind fallacy, where I was like, “Oh, to me EA is so obvious. Everyone should apply this.” And that’s just not the case.
Robert Wiblin: You’re not the first to be a bit too optimistic about how easy it is to change things.
Stefan Schubert: And then, another thing is that I thought that there would be more orgs by now. I think this is something that many people thought. They saw that initially lots of organizations were formed in EA, and they just thought that that development would continue. And it hasn’t really. Some organizations you might think have grown fairly dominant. Like Open Philanthropy, and 80,000 Hours, and others.
And I think, actually, that one reason why more organizations haven’t been started is that it’s very hard now to start a project that could compete with these really successful players.
Robert Wiblin: We’re the Google of career advice now.
Stefan Schubert: Again, you can compare with the market economy. There is this drive towards monopoly in economies. You saw that in the US in the nineteenth century. You had Rockefeller in oil, and you had other monopolies.
And then, what the government did was actually that they went in and regulated it. Perhaps we should do the same. We should break up 80,000 Hours.
Robert Wiblin: Bring in the anti-trust to break up the Centre for Effective Altruism.
Stefan Schubert: Exactly. And then you should have another 80,000 Hours with something like a counterpart Rob, doing this bizarro-podcast.
Robert Wiblin: It’s an interesting idea. I’m not sure whether there are too many organizations setting up, or too few. There are definitely benefits from having more organizations, because more people can try different things, and different perspectives get a greater audience. On the other hand, you also get benefits from having proper scale. When 80,000 Hours was just one or two people, we had to spend so much time just keeping things running that it was very hard to grow.
For a dominant organization, we only have seven people. So it’s still a very small organization in a way.
Stefan Schubert: People must be surprised when you say that, isn’t that the case?
Robert Wiblin: I hope the audience is impressed, but I don’t know. I’ll leave other people to judge how well we’re doing.
But definitely with seven people, everything becomes more efficient. Because one person can write an article, and then you got someone else who really knows how to get the word out there, and market it properly and make sure that people see it. So you get all of these benefits from specialization within the organization that you couldn’t get if we split into two.
And also, potentially, if you have a lot of new organizations starting, some good ideas get tried that otherwise wouldn’t have gotten a hearing. On the other hand, people can do silly things. You could start an organization that represents effective altruism badly, or just has a bad strategy and wastes resources. So it’s a little bit unclear where the ideal balance is: lots of new organizations starting, with that kind of churn, versus professionalization and ensuring that the best-managed organizations can grow to a decent scale.
Stefan Schubert: I think that’s absolutely right. I wasn’t actually saying that much about how many organizations we should have. I was just saying that, as a prediction, I was wrong.
Robert Wiblin: Ah, yeah.
Stefan Schubert: I think it’s very important, regardless of what you think about the normative question, to have accurate beliefs about the predictive question, because then you can tailor your strategy to whatever answer to that question is correct.
Robert Wiblin: Yeah, it’s very interesting to think what is the reason that more people aren’t starting new organizations. I suppose one thing I’ve been a little bit disappointed at is that not more people have started organizations in the developing world. Trying to do entrepreneurship with charities. Taking really effective interventions that we know work, and building an organization that scales them. I think there’s a lot of opportunities there, and so many people who want to have a big impact on global poverty.
There is Charity Science Health which is great. They’re doing vaccination reminders in India. I think they’re now considering doing some work to reduce anemia – iron supplementation, or some kind of micronutrient supplementation.
But I think they’re the only ones. I suppose it is a very difficult path, to move countries to the developing world and try to start an organization there on the ground.
But it would be interesting to ask around and find out what is preventing people from starting new organizations, and are their reasons good or bad? I mean, a good reason would be they think it’s already covered. Other people are already doing a great job. I guess a bad reason would be, I don’t know, that they feel like they wouldn’t be given the necessary assistance by other people, or people would be uncooperative.
Stefan Schubert: That’s interesting. I guess one common reason for why people don’t start projects is that they don’t really know how to go about it – it’s unclear what the path to impact would be, and so on. But that can’t really be the argument here, because, as you’re saying, we actually have concrete interventions and are just waiting for someone to implement them.
Robert Wiblin: So, your masters is in political science. And you did an undergrad in philosophy and political science, is that right?
Stefan Schubert: Yeah, I have a double undergrad in political science and philosophy.
Robert Wiblin: So, do you think that effective altruism should get more involved in politics, or should we stay clear of it?
Stefan Schubert: Yeah, I definitely think that we should do policy work on the important questions like existential risk – working in the background, giving advice to politicians. But if we’re really talking about more traditional party politics, I think we should be quite wary of getting involved. Partly because of the pulling-the-ropes-sideways issue – many of the key political issues are not neglected at all.
And I also think this politics-as-the-mind-killer issue is really true. And I think I see that even in effective altruism sometimes – people are–
Robert Wiblin: Perhaps in me, Stefan?
Stefan Schubert: Hmmmmm…
Robert Wiblin: You wouldn’t want to comment? I’m reasonable at all times Stefan – that’s the answer you were looking for.
Stefan Schubert: I have noticed.
People who tend to be reasonable generally – there’s something that happens when politics enters the picture. And I’m somewhat worried about this. I was at some point a bit worried that effective altruism would develop into a fairly traditional progressive social movement, with associated poor epistemics. I think that risk mostly seems to have passed now. But I do think it’s important to emphasize.
And I think in general, people are sort of naturally tribal, or tribalistic. They want to side with their own tribe, and they want to have other tribes to point the finger at.
Robert Wiblin: And they’re always looking for divisions, divisions even within their own group, that they can get annoyed about.
Stefan Schubert: Exactly. I don’t think that we should have this attitude. Instead, I think we should have broad visions which unite people, rather than those which pit people against each other.
And I think potentially, this fits neatly with this long term focus. This vision of a great future for humanity could be such a vision.
Robert Wiblin: It could be appealing to people across the political spectrum?
Stefan Schubert: Yeah. I think this long term focus has this great feature that might work as a uniting force. If you have a short term focus, it might seem that–
Robert Wiblin: There’s more trade-off between different groups benefiting. But if you’re looking over a hundred year time scale, mostly we all get better or worse off together.
Stefan Schubert: Exactly. We might go extinct together, or we might build this great future together.
Robert Wiblin: Just to push back on that a little bit. It’s true that as a community or as a movement, we don’t want effective altruism to just become basically an arm of some specific political party, or some specific political agenda. But what about, if 80,000 Hours was advising a young person who was very interested in politics, and was interested in going into party politics, and perhaps trying to get elected and trying to do good that way, would you think that that’s a bad idea? Especially, if they didn’t try to force the party politics onto the movement as a whole?
Stefan Schubert: Yeah, that’s a good question. I distinguish between two cases. On the one hand, there was giving expert advice on specific policy issues of great importance. And on the other hand, there was broad political campaigning. And I was saying the first thing is good, the second thing is not so good. What you’re now saying is something in between, I guess. I wouldn’t necessarily be opposed to that, I think? I would need to think more about it. It depends a bit on how it is framed, I guess. If it leads to 80,000 Hours getting a partisan image, then that could potentially be bad? I wouldn’t expect that to happen?
Robert Wiblin: I guess you might worry that if too large a fraction of us had jobs in party politics, that might change the culture, and the norms around having accurate beliefs and always saying what you think, even if it doesn’t benefit your party at that time.
Stefan Schubert: Yeah. And I could also see that five years down the line, who knows what EA will be like. Perhaps it will be the right call then. So one thing I think is very important is to set the culture of EA right. We want to have a culture where people have excellent epistemics, where people really care deeply about doing good in the world, and are very thoughtful and reflective. And I have worried that focusing on politics would have very bad cultural effects.
But suppose that we are able to form a community of really excellent people, both intellectually excellent, and morally excellent, and we have this excellent culture. Then perhaps these worries would go away. And they could enter politics and not be mind-killed.
Robert Wiblin: It’s difficult. It’s difficult. I mean, I feel it myself. I imagine that we all do. That once we start engaging in political issues, it very quickly becomes very tribal, and it’s hard to stay reasonable, as you might on the purely scientific issues or something like that. It’s a difficult balance to strike. Although I do think we want to have at least some people involved in party politics. At least we can experiment and find out, do those ideas get any traction within party politics. Like important ideas that are floating around the effective altruism community.
Stefan Schubert: I think that has been tried to some extent.
Robert Wiblin: There’s been some people who have gone into politics in the UK and the US. It’s a little bit too early days to tell how their careers will go, and whether they’ll be able to get any influence to work on issues like existential risk or aid policy, that kind of thing.
Alright. Let’s move on to these three pieces of research that you’ve published over the last year. The first one is called “Considering Considerateness: Why communities of do-gooders should be especially nice.” What was the case that you were making there?
Stefan Schubert: I guess one reason to write this was many effective altruists are consequentialists. And consequentialists obviously don’t think that there is any intrinsic reason to follow social norms. So social norms have to be evaluated on the basis of their outcomes, like everything else. And that might prompt people to think that they should break norms and rules fairly frequently. But we wanted to push against that, and point out that this has all these invisible negative effects which might be much less salient.
So, suppose that you want to increase donations to some valuable target. Then it’s very salient to you that lying about how effective this is – distorting the truth – is going to have this immediate positive impact. Whereas the negative effects, like eroding trust in the EA community in general …
Robert Wiblin: They’re hidden from you. They’re not so obvious. And very hard to measure as well.
Stefan Schubert: Yeah.
Robert Wiblin: Basically, you’re arguing that those effects are quite large, and so people should generally try to be extremely honest and very nice to one another.
Stefan Schubert: Yeah. That is the claim. I guess this point about how norm-following is more important than what one might intuitively think, that’s been made by many consequentialists over the ages. But I guess our contribution was that we said that if you’re part of a community of do-gooders, then these effects are stronger. Because then these negative reputational effects of dishonesty, they will not only affect you yourself, but also the whole community, and it will also affect the internal culture of the movement.
Robert Wiblin: So your point is, if you’re a movement of ten people and one of you gilds the lily in order to get more donations, then you’ve somewhat reduced someone’s trust in ten people. But if you’re 100 people, then the cost is ten times as large, while the amount of extra donations you get is just the same. So, it could become a lot less advantageous overall to treat someone badly, or to mislead them.
Stefan Schubert: Yeah. Broadly, that is the case.
Robert Wiblin: What are the kinds of considerateness that you’re thinking of that are important here? We’ve mentioned dishonesty. Are there other ones as well?
Stefan Schubert: Yeah, I think dishonesty is a key one.
Robert Wiblin: Especially as the key selling point of effective altruism is “just the facts”, kind of thing.
Stefan Schubert: I mean, then there’s also just following common sense norms, more generally. Of loyalty, and friendliness, and being respectful, and so on. But I think that the epistemic norms might be especially important.
Robert Wiblin: What about just helping people out? So, people ask you for a favor. Should you do them a favor in order to build up the reputation of the community as being friendly?
Stefan Schubert: Yeah, that’s a very good question. That can somewhat cut against this focus on some people being disproportionately impactful. On the basis of considerateness, you might think that Nick Bostrom should spend a lot of time writing letters of recommendation for everyone. But then you think about his extreme impact, and you realize perhaps that’s not a good use of his time, so he should not do that.
Robert Wiblin: The more effective you are, the more license you have to be rude and dismissive to people to save your time. You can see how that could go wrong.
Stefan Schubert: Definitely. I’m not saying that he should be rude. I don’t think that he is rude. I think he’s a nice person. I do think-
Robert Wiblin: He shouldn’t go out of his way to help people necessarily.
Stefan Schubert: Yeah.
Robert Wiblin: Interesting.
Are there any particular kinds of inconsiderateness that you observe in people? This issue has been discussed quite a lot. But I find it kind of interesting, because I just don’t notice people being that inconsiderate or that dishonest. Maybe this is happening in some parts of effective altruism that I’m just not observing. Is there anything in particular that you want to single out as something you’d like to see less of?
Stefan Schubert: Yeah, I don’t want to single out specific people or specific incidents. But I do think that this question about portraying impact accurately and getting uncertainty across is very important. I do think that there’s a huge risk when you see that you could attract more donations–
Robert Wiblin: By exaggerating the effectiveness of your organization or organizations that you particularly like.
Stefan Schubert: Yeah.
Robert Wiblin: You’re saying also over-simplifying … for example 80,000 Hours, we write things and we try to communicate something that is very complex, and we try to make it a bit easier, and reduce some of the complexity and the subtlety. Is that another thing that you might worry about?
Stefan Schubert: Sometimes, but I do think that that is definitely needed. And I do think, I myself, probably sometimes do the opposite thing, where I sort of have too many caveats and so on, because I want to do the sort of epistemically diligent thing.
Robert Wiblin: A related question is, what should you do if you think that there’s a particular cause that’s very effective to work on, but a lot of the other people you know are working on solving other problems that you think are significantly less effective? How much should you cooperate by, for example, telling one another about potential donors who might be interested in supporting the other person’s project, or people who might be interested in working there, versus trying to hoard those contacts and keep them for yourself? Do you have any views on that?
Stefan Schubert: Yeah, I think this is somewhat related also to this issue of EA focusing on broad visions. I do think that we should in general have the cooperative culture, where we help each other. We should incentivize that. We should celebrate when people do cooperate. At the same time, of course, if you think that one cause is many times more important than another one, then it’s natural to sort of privilege that cause. Yeah, it’s a judgment call. But in general, I do want to emphasize cooperativeness.
Robert Wiblin: Yeah, that one comes up with me quite a lot. Because obviously I have views on which causes are more effective than others. And people are often asking me for favors of various different kinds, or asking for me for advice. By and large, I will help them, regardless of what problem they’re working to solve. Maybe it’s just because you want to be nice, so I’m just rationalizing it. But I think it is good to set an example of being cooperative. And even if your help isn’t directly useful, because the problem they’re working on isn’t so effective, at least you’re creating good cultural norms which could be really valuable.
And also, then it means people are inclined to help you back. So, I think that being selfless can be very good for you in its own way, because you have a better reputation.
Stefan Schubert: Yeah, I think that’s absolutely right. I think that’s especially true of these quite small and tightly knit communities. It’s very important to get a good reputation, so that’s actually an important reason for being considerate.
Robert Wiblin: So what do you think is the best reason to be inconsiderate?
Stefan Schubert: Yeah, sometimes of course the payoff from breaking norms can be very large. That’s obviously true, and so …
Robert Wiblin: You sometimes just have to make a judgment call, that it’s worth doing something that ordinarily would be wrong just because the payoffs are big?
Stefan Schubert: Yeah, I mean, I guess … suppose that you have the norm that you should help people, and then you’re Nick Bostrom, so you have a very high impact. Then you could say, he’s being inconsiderate through not helping people, not writing letters of recommendation. But I think that’s not really the natural way of framing that. It’s more like, it’s actually the right call for him. He’s not being inconsiderate. He’s just judging the considerations accurately.
Robert Wiblin: Trying to set his priorities. You can’t help everyone.
Stefan Schubert: Yeah.
Robert Wiblin: What about this angle: in a community that really prizes its considerateness, it’s possible that people would become so interested in not offending one another or being nice — this is a slightly different conception of considerateness — but if people want to be always very cooperative, then they might not just tell one another when they think that the thing that they’re working on isn’t very effective. So for example, when someone comes to me and asks for advice on how to make a project that I don’t think is very good or effective, one thing I could say is, “Well, you just shouldn’t work on this thing at all. You should quit completely.” But that in a way feels a bit abrasive. It’s not very friendly. Am I creating good cultural norms? But at the same time, that kind of bitter pill of truth, or at least my opinion, could be really useful to people.
Stefan Schubert: Yeah, that’s a very good question. And that also gives me the opportunity to expand a bit on the very term “considerateness”. Because we had these endless discussions about what title to use. I know you also put a lot of effort into choosing the right titles, because it really colors people’s reading of the text.
So what we wanted to say was not actually that you should be nice. That was not the claim. But rather that you should follow norms. And perhaps we should have chosen a different title, because people interpreted it as a claim about niceness. I think in this case that you mention, it’s actually a trade-off between two different norms: norms of niceness and friendliness, and norms of honesty. Because what you’re doing there, if you’re not telling them, is basically being slightly dishonest. You’re not lying perhaps, but you don’t want to hurt their feelings. And there I think that honesty should probably, to quite a large extent, trump niceness. If someone isn’t effective, then you might sometimes have to tell them that.
Robert Wiblin: And in a way, I guess, that is considerate. Because one, you’re being honest, but also, you’re giving them information that could be very useful in a sense. Even if they don’t like it at the time.
Stefan Schubert: Exactly. Yeah. Someone made this comment that what we should care about is people’s long term interests. And sometimes in the short term that might create emotional pain, when you’re telling them that their project isn’t working.
Robert Wiblin: It’s a bit like if you’re in a relationship that’s not working out, then it could be very painful to break up with someone, but it might well be even in their long term interest.
So, on this issue of following norms, there’s this debate that’s come up a bit online before. Should we follow the correct norms, or should we follow whatever norms exist? And how much should we question them? So, one view might be: we should follow the norms of society no matter what they are, even if they’re flawed. Because basically people are going to judge us by that standard. Another might be: we should come up with our own norms. We should reason things through and try and improve on society’s norms. And then we should follow those ones.
So, for example, if society has a particular standard of honesty that’s reasonable, but not extremely high, then you might say, “Well, that’s the standard of honesty that we should follow.” On the other hand, you might say, “No, we should set an internal norm that’s far above that, and then judge people by this different and superior standard.” And I don’t mean superior as in higher. I just mean better according to our reasoning process. Do you have any view on that?
Stefan Schubert: Yeah, that is a good question. I probably think that on questions which are of special importance for effective altruists, we should probably develop new norms. So, like anything that has to do with epistemics, there we really should think through which norms we want to have, and then follow those norms. So we might require more evidence than one would normally require. But then there are sort of all sorts of other everyday norms which are more removed from the EA core mission, and there you might think, well, it’s just a lot of work, trying to develop new norms, and getting people to follow them. So, it’s just the cheapest solution to go with the mainstream norm.
Robert Wiblin: Which is what people outside the community would be judging us on anyway.
Stefan Schubert: Yeah, I think that is right.
Robert Wiblin: I think I agree with that answer. It seems like a tricky one.
Alright, let’s move on to the second piece that you put out which is called “Hard-to-reverse decisions destroy option value,” an absolutely striking, viral title there. What did you have to say in that blog post?
Stefan Schubert: That’s funny that you say that. We also changed that title over and over again. I agree, it’s not the most striking title.
I guess the thought there was that, if you’re not careful, it’s very easy to go down paths which are hard to reverse, or even irreversible. And this consideration might be more important than you think. We also had some specific examples in mind. Going political, for instance, might be a decision which is very hard to reverse. Whereas if we stay more removed from party politics, we will always have the choice to become more political. And similarly, if we’re small, then we have the option of growing large. The reverse doesn’t hold.
Robert Wiblin: Are there any other examples of things where it’s kind of a one way trip? Once you’ve gone somewhere it’s very hard to turn it back?
Stefan Schubert: Yeah, so, the third example that we had was exactly this, about a reputation for considerateness, a reputation for integrity and friendliness and so on. I think we have that to a reasonable extent now. But if you started caring less about that, that would be very hard to reverse.
Robert Wiblin: Reminds me of that quote from Warren Buffett: “Creditworthiness is a bit like virginity. It’s very easy to lose, and very hard to win back.” In a sense, once you become known for being a jerk, and lying, it’s very hard to convince people that you’re an honest and decent person.
Stefan Schubert: That’s a great quote.
Robert Wiblin: I’ll put up a link to that blog post, about irreversible decisions, and people can have a read for themselves if they want to learn more.
The third piece of research that you put out is called “Understanding Cause-Neutrality.” And cause-neutrality is this term that people use to suggest that someone is open to working on solving a wide range of different problems. And potentially, they’d be willing to change to working on whichever one they think will do the most good. What are the misconceptions that people have about cause-neutrality?
Stefan Schubert: As you said, cause-neutrality is just this idea that you should be willing to consider new causes. Because, unless you’re lucky, you might have chosen intuitively a cause which is not the most effective. And it might actually be many times less effective than the most impactful cause if it’s the case that the distribution of impact across causes is very uneven.
So, I think one misconception is that people think that cause-neutrality entails that we should work on this broad range of different causes. Cause-neutrality sets EA apart from other movements. Other movements like the feminist movement or the environmentalist movement arguably aren’t cause-neutral. But another thing which sets EA apart from other movements is that it’s working on a number of different causes. So in the paper I call this cause divergence. But it doesn’t actually follow from cause-neutrality that we should be cause-divergent. Because it could very well be that we should invest all our resources in one cause — the very best cause. So I think that is an important point. You sometimes see people misunderstanding that.
Robert Wiblin: Are there any other misconceptions about cause-neutrality?
Stefan Schubert: Yeah, I guess yet another concept which I introduce which is related to cause-divergence, is this concept of cause-agnosticism: that you’re uncertain about which cause is the most important. Sometimes people have used the term “cause-neutrality” to refer to cause-agnosticism.
Robert Wiblin: But you want to say that someone could be cause-neutral, but very sure about what’s the best cause to work on, and think that all of effective altruism should just be focused on that one cause. That’s how you want to use that term.
Stefan Schubert: Yeah.
Robert Wiblin: Yeah, interesting. Do you think that kind of outcome is likely, if someone is truly cause-neutral? Because if I met someone who is really sure that they knew which problem is most effective to work on, I might start to wonder, “Do they really know, or are they just being very over-confident? Maybe they have pre-commitments to particular problems that they think are going to be most effective.” Just because the problem seems so difficult to answer in the first place.
Stefan Schubert: Yeah, that’s a fair point, and I guess there’s a continuum here. One should always be willing to change one’s mind, but you don’t have to go so far as to be agnostic. You might think that one cause is likely to be the most important, but you’re still open to changing your mind.
Robert Wiblin: And do you think, potentially, that effective altruism should be focused on a more narrow range of problems?
Stefan Schubert: I find that not implausible. People have talked about Cause X, and some people have thought that there should be very many causes. But perhaps, just like the number of organizations hasn’t grown that fast, the number of causes hasn’t grown that fast either. And perhaps it’s just the case that some causes, like existential risk, are much more important, and we should focus on them.
Robert Wiblin: And we already know about them?
Stefan Schubert: Yeah.
Robert Wiblin: You mention this concept of Cause X. What’s that?
Stefan Schubert: Cause X would be this unknown cause which we don’t know about yet.
Robert Wiblin: So it’s like trying to give a name to some unknown thing that would be really effective, and encourage people to try and figure out what it was. A bit of a mystery. But I suppose since that term came into use, I haven’t really heard that many new causes – there have been some suggestions, but none of them has really caught fire.
Stefan Schubert: Yeah.
Robert Wiblin: So, I guess if we’re not finding new ones, then maybe we actually already know the ones that are likely to be the best, would be the argument.
Stefan Schubert: Yeah. Those who prioritize existential risk might say, “When did Bostrom come up with this maxipok rule, which essentially means that we should focus on existential risk? That was quite a few years ago.”
Robert Wiblin: I think almost twenty years, maybe.
Stefan Schubert: Yeah.
Robert Wiblin: I’m not sure that I really buy that. I just think there’s so much deep uncertainty about the nature of the world, and how the future might go. I suppose if you think about existential risk as just this very broad class of problems – trying to make the long run future go better – then maybe it’s likely that the best problem is in that class. But then you’ve created a very broad set of things, and many different problems could fall into that. So, in a way, you’ve cheated to get the answer.
Stefan Schubert: I think that’s an important point, because this is another concept about which there are some misunderstandings: the notion of existential risk. I think that many EAs sort of equate existential risk reduction with AI safety work, bio-security work, and perhaps AI strategy work. But it could encompass broader social and institutional changes, or improvements, that make civilization more resilient.
That’s not the most popular approach to reducing existential risk now, but it’s one possible approach.
Robert Wiblin: I’m pretty on board with the argument that shaping the long term future is the best way to do the most good. But I’m not sure about that either. There are arguments on the other side, both philosophical arguments, and then practical arguments. And while a lot of people have been convinced of this, not everyone is. If you try to be epistemically humble, as we were talking about earlier, then you should retain a reasonable level of doubt about whether that’s actually true.
Stefan Schubert: That’s absolutely true.
Robert Wiblin: And so, there’s people who differ on this. Even when I feel reasonably confident about something, I almost just always want to adjust back and say, “I almost never want to be more than 80% confident about any complex question.” Because there’s so many things that I might not have considered. So many things that it might not even be possible for a human to really think about. I don’t know exactly what is the nature of the universe. The world could be very different than what I understand, in which case I could be wrong about lots of practical questions. So, for that reason, I always just want to say, in a sense, I have all of these answers to practical questions that I’m going to operate on day to day, but in a deep sense, I’m just very uncertain about what’s going on, and what I ought to do.
Stefan Schubert: I agree with that.
Robert Wiblin: It’s not always easy to live that. At least when you’re in conversation. The temptation is to be overconfident.
Stefan Schubert: Yeah, I mean, I guess that’s the thing. But it’s also the case that you have to act, right? Effective altruism is very focused on action. And when you act very decisively, it might seem as if you’re more convinced than you actually are. What you’re effectively saying is, “I’m very unsure of what’s actually the right thing to do. But I need to try to have an impact in the world. I will try to do that in a decisive way, even though I assign some probability that my actions aren’t having the impact that I thought they would have.”
Robert Wiblin: Yeah, I can’t just lie in bed forever, worrying that we don’t have the solution to the problem of induction. At some point, you got to get amongst it, and just hope that induction does actually work.
Well, my guest today has been Stefan Schubert. Thanks so much for coming on the 80,000 Hours podcast, Stefan.
Stefan Schubert: Thank you very much, it was a pleasure.
Robert Wiblin: We’re trying to grow this show’s audience by getting the word out there to people who would find it useful. If you know someone like that, maybe you could send them a text with a link to the show so they check it out.
The 80,000 Hours podcast is produced by Keiran Harris.
Thanks for joining, talk to you next week.