Why we have to lie to ourselves about why we do what we do, according to economist Robin Hanson
By Robert Wiblin and Keiran Harris · Published March 28th, 2018
In fact, your conscious mind is more plausibly a press secretary. You’re not the president or the king or the CEO. You aren’t in charge. You aren’t actually making the decision, the conscious part of your mind at least. You are there to make up a good explanation for what’s going on so that you can avoid the accusation that you’re violating norms.
Robin Hanson
On February 2, 1685, England’s King Charles II was struck by a sudden illness. Fortunately, his physicians were the best of the best. To reassure the public, they were kept abreast of the King’s treatment regimen. King Charles was made to swallow a toxic metal; had blistering agents applied to his scalp; had pigeon droppings attached to his feet; was prodded with a red-hot poker; given forty drops of ooze from “the skull of a man that was never buried”; and, finally, had crushed stones from the intestines of an East Indian goat forced down his throat. Sadly, despite these heroic efforts, he passed away the following week.
Why did the doctors go this far?
Prof Robin Hanson – Associate Professor of Economics at George Mason University – suspects that, on top of any medical beliefs, the doctors had a hidden motive: it needed to be clear, to the King and the public, that the physicians cared enormously about saving His Royal Majesty. Only extreme measures could make it undeniable that they had done everything they could.
If you believe Hanson, the same desire to prove we care about our family and friends explains much of what’s perverse about our medical system today.
And not only what’s perverse about medicine – Robin thinks we’re mostly kidding ourselves when we say our charities exist to help others, our schools exist to educate students, and our political expression is about choosing wise policies.
So important are hidden motives for navigating our social world that we have to deny them to ourselves, lest we accidentally reveal them to others.
Robin is a polymath economist, and a font of surprising and novel ideas in a range of fields including psychology, politics and futurology. In this extensive episode we discuss his latest book with Kevin Simler, The Elephant in the Brain: Hidden Motives in Everyday Life. We also dive into:
- What was it like being part of a competitor group to the ‘World Wide Web’, but being beaten to the post?
- If people aren’t going to school to learn, what’s education for?
- What split brain patients show about our capacity for self-justification
- Why we choose the friends we do
- What’s puzzling about our attitude to medicine?
- How would it look if people were focused on doing as much good as possible?
- Are we better off donating now, when we’re older, or even after our deaths?
- How much of the behavior of ‘effective altruists’ can we assume is genuinely motivated by wanting to do as much good as possible?
- What does Robin mean when he refers to effective altruism as a youth movement? Is that a good or bad thing?
- Should people make peace with their hidden motives, or remain ignorant of them?
- How might we change policy if we fully understood these hidden motivations?
- Is this view of human nature depressing?
- Could we let betting markets run much of the government?
- Why don’t big ideas for institutional reform get adopted?
- Does history show we’re capable of predicting when new technologies will arise, or what their social impact will be?
- What are the problems with thinking about the future in an abstract way?
- Why has Robin shifted from mainly writing papers, to writing blog posts, to writing books?
- Why are people working in policy reluctant to accept conclusions from psychology?
- How did being publicly denounced by senators help Robin’s career?
- Is contrarianism good or bad?
- The relationship between the quality of an argument and its popularity
- What would Robin like to see effective altruism do differently?
- What has Robin changed his mind about over the last 5 years?
The 80,000 Hours podcast is produced by Keiran Harris.
Highlights
So it’s not just that we are ignorant about education and about medicine, we are surprisingly ignorant. I have to say it’s really surprising that incoming college students who pick a major hardly know anything about what happens to people with that major. How often do they get jobs, where the jobs are, how many hours a week do those work. Amazingly enough, people choose majors and career plans without knowing even the basics of what will be the consequences of that, which is suspicious because they know an awful lot about, say, their dorm and where they are living and which meal plan they’re having. I mean it’s not like they don’t get information about anything.
Pretty much every choice you make on some parameter, there will be your personal optimum and it won’t be the social optimum, and you should just shave it in the direction of the social optimum. And so the main thing you need to do is just be able to know which direction is the social optimum for all the parameters of choice that you make. Now, it helps to be an economist to be able to figure that out, but if people are interested I think we could teach them a couple-day course where they would be able to apply this all over the place.
One standard example is being nice and being generous, having gratitude, having a positive attitude. All these things seem to be useful for the world on average. Just be a little bit nicer to everybody you interact with. You already have some reason to be nice – reputation and not wanting to feel like a jerk – just be a little bit nicer, right? Smile a little bit more, take a little more of a moment to look them in the eye and then be friendly. And this isn’t an original thing for me. I mean there are many people who over the centuries have said, “A way to help the world is just to be a little bit nicer in each of your interactions.” And that’s basically what I’m saying, just be a little bit nicer in every little thing you do.
It’s not enough just to convince you that this is an effective charity, you need to convince the people you’re trying to impress that it’s an effective charity so that you will want to impress them this way. A problem with that of course is often that you will create the impression that you feel you’re holier than thou and other people may then criticize you for that. So when you try to tell everybody that this is the most effective thing, are you creating the impression that the people who think they’re doing this think they’re better than everybody else? And then that puts a bad taste in people’s mouth and they might want to actually step away from that.
Articles, books, and other media discussed in the show
- Rob Wiblin’s favourite 75 episodes of EconTalk
- The Elephant in the Brain: Hidden Motives in Everyday Life by Robin Hanson and Kevin Simler
- The Age of Em: Work, Love, and Life when Robots Rule the Earth by Robin Hanson
- The Case Against Education: Why the Education System Is a Waste of Time and Money by Bryan Caplan
- Construal level theory (near/far mode)
- Fourteen wild ideas from Robin Hanson
- Futarchy: Vote on values but bet on beliefs
- Project Xanadu
- Robin’s blog post on EA as a youth movement
- Robin’s Google Talk on The Age of Em
- Robin’s Cato Unbound essay — ‘Cut Medicine in Half’
Transcript
Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, the show about the world’s most pressing problems and how you can use your career to solve them. I’m Rob Wiblin, Director of Research at 80,000 Hours.
Today’s interview is with someone whose Hansonian views are distinctive enough to have been named after him.
It’s likely to be entertaining to most people who are subscribed to the show. You may have heard other interviews about Robin Hanson’s recent book, but don’t be put off by that – we cover plenty of original ground here.
Before that I wanted to suggest another podcast you might like to subscribe to. That show is EconTalk, and it’s the first podcast I started listening to, nine years ago, and it still brightens every Monday morning for me.
The format is similar to this one – hour long interviews with experts on particular topics, which range from economics, to war, to how to run a business. The uniting theme is thinking carefully about the social world.
The show has had a lot of Nobel Prize winners on, but my favourite episodes are usually with people you’ve never heard of. Some you probably have heard of though are Christopher Hitchens, Milton Friedman and Thomas Piketty.
The host, Russ Roberts, has distinctive political views, but is unfailingly polite and often invites on guests with alternative views – it’s really a model of how to have good conversations.
There are about 600 hours’ worth of episodes in the archives, so if you haven’t listened to it yet, it can provide 25 days of straight entertainment, assuming you can go without sleep.
That number might feel overwhelming, so I’ve made a list of my 75 favourite episodes which I’ll link to in the blog post on the show.
And without further ado, I bring you Robin Hanson.
Robert Wiblin: Today, I’m speaking with Robin Hanson. Robin is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. Robin received a bachelor of science in physics from UC Irvine in 1981, but then switched fields and went on to finish his Ph.D. in social science from Caltech in 1997.
Statistician Nate Silver described Robin the following way. “He is clearly not a man afraid to challenge the conventional wisdom. Instead, Hanson writes a blog called Overcoming Bias, in which he presses readers to consider which cultural taboos, ideological beliefs or misaligned incentives might constrain them from making optimal decisions.”
Robin’s unusual views span a pretty wide range of fields including psychology, politics and futurology, and his latest book is The Elephant in the Brain: Hidden Motives in Everyday Life.
Thanks for coming on the podcast, Robin.
Robin Hanson: Great to be here, and the book is co-authored with Kevin Simler.
Robert Wiblin: Absolutely. Can’t forget that. We planned to focus a fair bit on lessons from Elephant in the Brain, but, first, let’s find out a bit more about how you got where you are today. You’re pretty unusual for finishing your Ph.D. at the age of 38 in a pretty different field than the one you did your undergraduate work in and then still going on to have a successful academic career. How did you do that, and what were you doing in between your undergrad and your Ph.D.?
Robin Hanson: I got lucky, I think I have to admit. I got started in undergraduate, in engineering, and then I switched to physics, and then I started graduate school, philosophy of science, but then I switched back to physics, and so then I got a master’s in physics and philosophy of science in 1984 from University of Chicago, and then I got stars in my eyes reading about cool things happening in artificial intelligence and the Web out in Silicon Valley, and so I went out to Silicon Valley and I got a job doing AI and, on the side, played with the Web with the Xanadu group, and I did that for nine years at Lockheed and NASA, and then I finally started my Ph.D. at the age of 34 with two kids aged zero and two at Caltech.
Robert Wiblin: Cool, and what’s Xanadu? That was like an alternative to the World Wide Web or a different way of organizing it?
Robin Hanson: Yes. It was inspired by Ted Nelson sort of, who’s the visionary leader, idea guy, and they had a vision for what the Web could be and they were working to make it, and their main failing was that they tried to add too many features and insisted on all these features, and so then Tim Berners-Lee finally just delivered a very simple version of the Web and took off.
Robert Wiblin: That’s what we have today.
Robin Hanson: Right, but I did learn some things about futurism. Some people later have said the World Wide Web was just one of those things that no one could have foreseen, and, of course, there was a group of people who did foresee it, and I also know that they didn’t get much out of it. That is, they didn’t get much personal benefit from foreseeing the future, and so it suggests both that it’s possible to foresee the future and not that rewarded, which may explain why it’s not done as much as you might think.
Robert Wiblin: What made you switch from physics to economics and social science?
Robin Hanson: I noticed that in physics and in engineering and in software engineering, people were eager for innovations that improve things, and it was hard to find things that could improve things very much, and I started to read about social science, and it seemed to me there were these really large innovations that were just there for the picking, and either I was a genius or it was really easy picking, but whatever it was, I wanted to switch over to try to gain those advantages, and what I eventually realized after I had switched was that the reason it’s so easy to find, apparently, large improvements is because they’re almost never adopted. They would just sit there decade after decade not being used, and that is a puzzle that The Elephant in the Brain is in part intended to address.
Robert Wiblin: Do you think the reason that these things aren’t adopted is because they’re actually not as good ideas as they seem to be at first glance or because they’re not in the interest of the people who are most powerful under the current system?
Robin Hanson: Neither. Some people would say, “Well, it’s just impossible to find improvements in social science or to prove that there are improvements, uh, because social science just isn’t rigorous enough.” I think that’s just wrong. We can do theory and lab experiments, even field experiments, to show that things are improvements.
I think the actual problem is that social scientists and policy analysts start from the assumption that the thing people say they are trying to get is the thing they are actually trying to get, so when people try to study education and how to improve education, they take the usual story that education is about learning the material so that you can be a more useful worker or citizen and they study how you could learn the material faster, better and, when they come up with those answers, they offer them to the world, and the world’s not very interested, and my best explanation is that the world kind of knows that it isn’t really at school to learn the material. That’s what we say, but it’s not really why we’re there.
Similarly, through a lot of other institutional areas, in politics, in medicine, we say we go to the doctor to get well, and people offer better institutions for getting well and we aren’t interested plausibly because we kind of know that that’s not why we actually go to the doctor.
Robert Wiblin: Okay. That brings us to the book, Elephant in the Brain. The subtitle is Hidden Motives in Everyday Life. What hidden motives are you talking about?
Robin Hanson: As we just said the example that, in education, your motive isn’t to learn the material, or when you go to the doctor, your motive isn’t to get well primarily, and the hidden motives are the actual motive. Now, how could I know what the hidden motives are, you might ask? The plan here, that’s where the book is … In each area, we identify the usual story, then we collect a set of puzzles that don’t make sense from the point of view of the usual story, strange empirical patterns, and then we offer an alternative motive that makes a lot more sense of those empirical patterns, and then we suggest that that is a stronger motive than the one we usually say.
Now, just to be clear, almost every area of human life is complicated, and there’s a lot of people with a lot of different details and so, of course, almost every possible motive shows up in almost every area of human life, so we can’t be talking about the only motive, and so the usual motive does actually apply sometimes. Actually, you could think of the analogy to the excuse that the dog ate my homework. It only works because sometimes dogs eat homework. We don’t say the dragon ate my homework. That wouldn’t fly, so the usual story is part of the story. It’s just a smaller part than we like to admit, and what we’re going to call the hidden motive, the real motive is a bigger part of the story, but it’s still not the only part.
Robert Wiblin: You don’t mean people are kind of maliciously lying about the motivations? Do you think that they actually believe the standard story in most cases, but they’re just mistaken about why they’re doing what they’re doing?
Robin Hanson: Actually, there’s just a huge range of variation in who’s aware of what. We individually vary from moment to moment, depending on whether we’re on the public stage or talking privately to people. We vary in terms of which topics we are invested a lot in, so, for most of us, there’s some area of life that’s the most precious and sacred to us and we’re going to be the most resistant to believing that our motives there aren’t the high-minded motives we like to think, but in somebody else’s area, you might be more willing, so if you’re an atheist, you’re more willing to believe that those religious people have a hidden motive that isn’t what they say. Whereas, if you’re religious, that’ll be harder to swallow.
Robert Wiblin: Whereas maybe if you’re a teacher, it’s a little bit hard to believe that people aren’t learning things.
Robin Hanson: Exactly. It also varies again by how public we are, so there is a sense in which when we’re on stage, when we write a letter of application to a school or a politician making a speech, those are the contexts where we most have to pander to the thing everybody wants to hear and expects to hear from us, and if we’re talking in a bar or privately to a lover, then we might be more honest about our other motives.
Robert Wiblin: Okay, so the broad claim is that many things in the social world aren’t quite what they seem to be and what people say they are, but let’s be specific there. If people aren’t going to school to learn, what is education all about?
Robin Hanson: Our book, the first third goes over the general theory of why it might be plausible that we would have hidden motives, and then the last two thirds goes over 10 different areas of life, and one of those areas is education. Each area has a chapter, and we can’t go into enormous detail, of course, because it’s just one chapter in the book.
Our chapter on education is taken from my colleague Bryan Caplan’s whole book-length treatment of this, The Case Against Education, and, there, the usual story is that we’re learning the material, and some puzzles with that start from the fact that actually most people don’t remember most of what they “learn,” and most of what they do remember isn’t actually very useful, yet people who get a college degree and become a bartender make more than people who have only a high school degree and become a bartender. We do get paid more on average for more years of school, but the last year of high school and the last year of college get paid three times as much as other years even though you don’t learn more in that last year.
I lived near Stanford when I was going to Lockheed and NASA, and I often would sit in on Stanford classes, and I didn’t need to apply or register. I could just go get the best, one of the best learning in the world for free merely by walking in and sitting down.
You might think they would be very careful to not allow that sort of thing, but nobody cares and the professors tend to be flattered. In fact, one of them gave me a letter of recommendation on the basis of my performance in the class that I didn’t register for or even … I wasn’t even officially in the school, and so that’s a puzzle from the point of view of people mainly going to school to learn because you would think I’m getting all this benefit for free.
There are even more puzzles than the ones I’ve listed, but that should be enough to make clear that there are some puzzles with the story that we go to school to learn.
Robert Wiblin: What do you think better explains this kind of behavior?
Robin Hanson: Bryan’s story, which I certainly think is a big part of it, is that we are there to show off. We are there to show how smart and conscientious and conformist we are. In our book, we add to that the idea that you might also, of course, you can meet mates there. You can be babysitting. You can adopt the modern workplace practices. The government can use it for propaganda. These are also some of the functions. The main one is to show off, I would agree.
We do learn some things in schools. It’s not like it’s zero. Again, it’s like the dog ate the homework. It works because it does sometimes happen as an excuse, but it’s just not the main thing.
Robert Wiblin: In the case of education, if you’re right, why do we have to pretend that we’re going to education to learn rather than to separate smarter people from less smart people? Why couldn’t we just be upfront about that?
Robin Hanson: That is an excellent question. That’s in a sense the big theoretical puzzle here. All of the things that we say in the book people are doing are all reasonable things to do. They aren’t crazy things to do, and so you might think, “Why not just know that that’s what you’re doing?” and that’s where we get to the idea of norms and norm enforcement and evading norm enforcement, which we spend a lot of the first third of the book discussing.
Compared to other animals, humans had norms. That is, other animals just have usual behaviors, but humans have rules about what the usual behaviors are supposed to be. We have the rule that, if you see someone violating the rule, you’re supposed to do something about it. You’re supposed to tell people and then try to work to make them stop, and that means that we are constantly watching out for what we are doing and what other people are doing to see if they’re violating rules.
Humans have much larger social groups than other primates. In most other primates, a group that’s too large just fragments; they just can’t manage it politically. But humans were able to manage much larger groups, and that allowed us to do a lot of things other primates couldn’t, and the standard story is that we have the largest brains of all because we had the most complicated social world, and so the main environment for our ancestors was not the rain or the prey or predators. It was each other, and so we have these big brains to think about each other all the time and, as such, humans had these norms as a big part of how we managed and kept the peace, and we were constantly thinking, “Am I perhaps violating a norm at the moment, or are you violating a norm?”
In fact, our brain devoted a big fraction of its processing to constantly keeping track of what we’re doing and trying to manage a good story about what we are doing, so a lot of our norms are in terms of motives. That is, it’s okay if I hit you accidentally, but it’s not okay if I hit you on purpose, and so that means we want to keep track of our motives and keep track of what sort of plausible motives we could ascribe to other people, and that’s why we care a lot about our motives.
In fact, your conscious mind is more plausibly a press secretary. You’re not the president or the king or the CEO. You aren’t in charge. You aren’t actually making the decision, the conscious part of your mind at least. You are there to make up a good explanation for what’s going on so that you can avoid the accusation that you’re violating norms.
Robert Wiblin: Right. Okay. Your subconscious you’re claiming is figuring out what is the best thing for you to do that serves your interests, and then kind of your conscious mind believes that it’s doing things for different reasons so that it can compellingly tell other people that that’s why … that that was your actual motivation.
Robin Hanson: Right. There’s a set of experiments now 50 years old on the split-brain patients, and that suggests that we are really quite prone to making up explanations. Basically, these patients had the two halves of their brain split apart, and one half of the brain is attached to one eye, one ear, one arm, one leg, and you could set it up so you can talk to one brain and then ask it to do something like stand up and then you can talk to the other brain and say, “Why did you do that?” in such a way that the second brain doesn’t really know what the first brain was doing, and if you say, “Why did you do that?” the honest answer should be, “I don’t know.” You’re talking to the other brain, but that’s not what it does.
Actually, it very consistently just makes up an explanation as necessary to try to attribute its behavior, so it might say, “I wanted to get a Coke,” even though it really doesn’t know. That’s the kind of brain you have. It’s just always ready to make up an explanation for what you’re doing even when it doesn’t know.
Robert Wiblin: Do you think this desire to confabulate explanations is because it’s socially advantageous to do so?
Robin Hanson: Yes, so we have huge brains mainly to deal with our complicated social world, and we have the biggest brains of all, and the main element of our social world that could get us into trouble was norm violations, breaking rules.
Apparently, human minds have this part of their brain that’s called the default network. Unless we’re doing something else, it’s just always ruminating about what we’ve been doing and why and trying to make an explanation. This is the thing that you’re trying to shut off when you’re meditating. It’s so hard to shut off because your mind is just always doing this, tracking what you’re doing and why and making sure you have a good story.
Robert Wiblin: Which suggests that it’s extremely important for your survival.
Robin Hanson: Right, because it’s very expensive.
Robert Wiblin: Okay, but we could have our minds designed such that we’re aware of what our actual motivations are, but then we tell other people that we’re doing them from more high-minded reasons. Why can’t we be designed that way?
Robin Hanson: The human brain is not very modular. If it were very modular, then it would be more possible to lie with a straight face, but, in fact, when one part of our brain has one agenda going on and one set of feelings, it tends to just infect the whole brain and produce … affects all of our behavior and affects our tone of voice, the slant of our head, whether we’re shaking our knees, and because of that, it’s actually pretty hard to give one big part of our brain one set of beliefs and attitudes and other parts really different ones. Actors really have to spend a long time learning to act, and it’s hard.
Robert Wiblin: For that reason, it’s a lot easier to pretend if you actually believe the thing, so our brain has been designed such that we actually sincerely believe whatever we want to present to other people.
Robin Hanson: Salespeople know this. I mean, the most reliable way to be a good salesperson is to actually believe in your crappy product.
Robert Wiblin: Yep, and so evolution has designed us to believe whatever is easier to believe?
Robin Hanson: Right, mainly because we just don’t have a very modular brain, so things just leak all over the place.
Robert Wiblin: Okay, so let’s come back to education. If I’m involved in the education system and I come up and say, “You know, I don’t think that the education system is explained by trying to learn information that’s useful. I think it’s just a matter of separating smart people from less smart people,” like why do we need to have a different story for that? Why would I lose out if I claim that?
Robin Hanson: Each of us, again, is trying to present a high-minded image and an image that’s not violating norms. Humans have a strong norm against bragging, and we actually do a lot with an eye to showing off, with an eye to creating a favorable impression, and all of that violates the bragging norm. We can still do it. We just have to pretend we’re doing something else, which we do, and so we can’t just say, “I’m going to school to show off. I’m trying to show you all how smart and conscientious I am,” because then you’re bragging.
Robert Wiblin: I guess, for teachers or for university lecturers, would it be hard for them to justify all of the funding that they’re getting if they say that all that we’re doing is testing people on how smart they are?
Robin Hanson: Plausibly, but I don’t think that’s the main thing. In almost every area, these people who supply a product will try to spin it in the highest-minded way they can, but they will mostly cave to whatever the customer’s perception is. If the customers don’t believe car repair is the most holy, noble profession, then the car repair people will just keep quiet about that even if they believe it privately. They’re not really going to push that on the customers. It’s when the customers are willing to believe that and even prefer that that the suppliers will also go along.
Robert Wiblin: If I studied philosophy or classics or something like that and then people ask me, “Oh, you know, why did you do that?” and I said, “Oh, I mean, it certainly wasn’t to learn anything useful. It was all completely, you know, useless, all of the information that I learned. I was just doing it to show how smart I was,” people would judge me negatively because I’d be bragging and also just suggesting I guess that I’m not actually interested in the topic at all. I’m all doing it to get one up on other people.
Robin Hanson: It’s very mercenary. It’s very manipulative.
Robert Wiblin: Conniving.
Robin Hanson: Exactly.
Robert Wiblin: Okay, and so kind of everyone is leaning in the direction of claiming, “Oh, no, it’s because this information is really useful,” and then that, over time, creates this kind of false consciousness about the purpose of education.
Robin Hanson: In almost all of our areas of life, we’re trying to look for as high-minded a motive as we can that’s plausible at least to some degree and also avoids violating norms.
Robert Wiblin: Education is one of them, but our listeners might not be convinced by that, so let’s just get through a couple of different examples. What’s the story that you have about religion?
Robin Hanson: Our story is just cribbed from the standards of social science of religion literature. The usual story about religion, if you will, is that people have these beliefs about the supernatural and god and other things like that, and these beliefs suggest that they have to follow certain behavior that the god might have commanded, and that explains the behavior. They’re doing these things because they believe god told them to. That’s the straightforward explanation that a religious person might give you.
Now, in fact, most religion in history didn’t actually focus very much on beliefs. In the ancient world, it was mostly that you needed to do the regular rituals that you’re supposed to do and what you believed wasn’t very important, but even then you might ask, “Well, why do you do these rituals?” and you might just say, “Because everybody else does. Those are just what you’re supposed to do,” which isn’t much of an explanation, and there’s a lot of puzzles, of course, from the point of view of this theory.
The most obvious puzzle is that religious people actually win in a lot of ways very consistently. Religious people make more money, get married more, have less crime. They live longer. They use fewer drugs. They have more friends, so making a mistake or adopting strange beliefs seems an odd way to achieve these very practical ends, but religious people consistently win in these ways, and so that’s a puzzle that needs to be explained.
Robert Wiblin: Right, and so why do you think people are actually getting involved in religion?
Robin Hanson: The standard story in the social science of religion is that religion helps bond communities together. When you are asked to follow rules about diet and dress and to believe strange things, and you do that, you show your community that you are willing to pay a price to be part of them, and they can trust you more and then rely on you more. In fact, religions that demand more of their members do in fact trust each other more and are able to insure each other more, say, against losing their job or health or things like that.
Robert Wiblin: Are there any specific characteristics that are common to many religions that are better explained by that motivation?
Robin Hanson: Most religions have relatively arbitrary restrictions and rules that don’t seem to have much function. In addition, they spend time in rituals together that they could better spend that time doing something else, and so there are a lot of common features in most religions, yes.
Robert Wiblin: I think one of the most common kinds of hidden motivations, which most of us would accept we have to some extent but don’t like talking about, is just that we like spending time with and making friends with people who are successful and smart and attractive and, potentially, have a lot of money. Most of us don’t say, “Oh, I want to befriend those people because it’s going to be of practical value to me, because they might be able to invest in my company, or they might be able to help in my project because they’re particularly talented.”
We’re more inclined to say, “Oh, it’s because they’re charming,” or they’re particularly funny or they’re just enjoyable to hang out with, but it’s interesting how our motivations to spend time with people do seem to really strongly coincide with how useful they are to have as allies socially.
Robin Hanson: Yeah, that’s not one of the chapters in our book. We do go over 10 areas, but I think you could probably do another 10 or 20 chapters like the ones we did. I’m hoping we can inspire other people to do that sort of thing. I am hoping we are opening up a new area, a new way of looking at the world, that people will join in on, but, yes, this is one of them.
When we’re asked, “Why do you like these friends?” we don’t tend to talk in terms of, “Well, they help me get a job. They help me get sex. They might help me when I need to move.” We don’t want to talk about those things. We don’t want to admit those sorts of motives, that we would use them, so we say, “We like them,” and, of course, we do like them. It’s just that we don’t think very much about why.
A similar thing happens with fun. When you ask, “Why do you do something?” the answer is, “Because it’s fun.” If you step back, you realize that’s just not much of an explanation at all. That’s a feeling you have at the moment. No doubt, from your personal point of view, that’s enough of a reason to do it, but it doesn’t explain why you have that feeling, why this is fun.
Similarly, it doesn’t explain, “I like this person.” Fine, but why do you like them? What causal process produces your liking them? How is it that millions of years of evolution honed your senses to want to like this sort of person?
Robert Wiblin: Another chapter was about medicine. What are the unusual things going on with people’s use of healthcare?
Robin Hanson: Medicine is probably the chapter that will surprise most people because, at least in our society, people are pretty sensitive about medicine. They see it as a pretty sacred thing. The usual story about medicine is that we go to the doctor to get well, and we push other people, of course, to go to the doctor so they can get well, and there are a bunch of puzzles with this explanation. The biggest one, which stares you right in the face, is that people who get more medicine aren’t on average healthier. That is, we have variations across regions, like nations and states and counties, and the places where people on average get more medicine, either spending more money or having more visits, do not on average have healthier people.
We also have randomized experiments where we’ve given some people cheaper medicine and other people more expensive medicine, and the people who faced the lower price chose to get more medicine, but they were not on average healthier. And that’s a big puzzle, because in the US we spend 18% of GDP on medicine. There are a lot of other things that we know have big, strong correlations with health that we are much less interested in personally or policy-wise. That includes exercise, air quality, sleep, social status, nutrition. There’s just a whole wide range of things that have big effects.
I’ve taught health economics for many years, and if you ask people whether they want to change policy to promote these other things, they just don’t think there’s much of a priority there, but if you talk about medicine, they’re really all over that and they think medicine is really important.
Robert Wiblin: It outrages people when someone doesn’t get access to healthcare, but people aren’t similarly infuriated by the fact that some people don’t exercise as much or don’t have those good opportunities to exercise.
Robin Hanson: Right, or have subsidized exercise, et cetera.
Robert Wiblin: Even though exercise potentially has a much larger effect than extra healthcare.
Robin Hanson: Absolutely, a much larger effect. We have other puzzles: people are surprisingly uninterested in information about the quality of medicine, and people have a keeping-up-with-the-Joneses effect, where they tend to spend more on medicine when people around them spend more. One explanation starts with the analogy of a parent kissing a child’s boo-boo. A child scrapes their knee and cries, and the parent comes over and says they’re there and kisses the boo-boo, and then the child calms down and feels comforted. We know there’s no medical effect there, but it still works to comfort the child.
Another analogy is with Valentine’s chocolates. For many people, on Valentine’s, there’s a tradition of showing they care about someone by buying them chocolates. When they do that, they don’t ask themselves, “How hungry is the person I’m giving the chocolates to? How many chocolates do they need?” What they ask is, “How many chocolates do I need to buy in order to distinguish myself from someone who doesn’t care as much as I do?”
When they think about the quality of the chocolate, the signals of quality, they know to look for common shared signals of quality. If they have a private signal about the quality, that’s not going to affect their choice very much, either as the receiver or the giver, because they know the other person doesn’t see that signal in order to judge the generosity. You have to guess what somebody could plausibly have known about the quality. And these are analogous to medicine. In medicine, we give as much medicine as it takes to show that we care, even when a lot of the extra medicine isn’t very useful, but if we gave less, it would seem like we cared less. And we aren’t very interested in the quality of medicine, at least when it’s a private signal of that quality. We’re much more interested in common shared signals of quality.
Robert Wiblin: With the Valentine’s Day chocolates, there are a couple of odd things. If I managed to get a really good deal on them, so I got some really nice chocolates for very little money, I probably wouldn’t want to say so, even though in other cases that would be good, because it’s not so much that you’re trying to get a deal because you want them to enjoy the chocolates. It’s about the expense to you.
Robin Hanson: Exactly. If you’ve got a friend who is about to undergo surgery and you say, “Hey, I’ve got you this really great deal on surgery in Mexico. It’s a third of the price. The savings will easily cover the plane fare,” they may not appreciate your generosity there.
Robert Wiblin: Yeah. Another thing is we tend to buy gifts that are potentially much more extravagant than what the person would ever buy normally, again, because the point isn’t that they really enjoy the taste of chocolate, though they might like it. The reason you get the more expensive and extravagant chocolate is to show that you’re willing to spend the money.
Robin Hanson: Right, and so we can use these as cues to figure out which areas of life are actually about gifts as ways to show that we care about each other, rather than more functionally about the direct benefits.
Robert Wiblin: Your claim is that people buy too much medicine, and potentially that they give too much medicine, because they want to show people who are sick that they really care about them a lot, even if the medicine doesn’t help?
Robin Hanson: Right. There are two sides to this. You might want to make sure a friend or associate gets the medicine, and you might want to publicly show that you are helping them to do that: driving them to the doctor, visiting them, paying for it, et cetera, in order to show them that you care. And on the other side, they want you to show that you care, and want many people to see that they are cared for. So, for example, on Valentine’s, if you don’t actually have someone to give you chocolates, you might buy some yourself and leave them on the desk at the office, because you don’t want to seem like the sort of person that nobody cares about.
Robert Wiblin: Right. I guess you really don’t want to be seen as the kind of person who buys chocolates for themselves to make it look like other people care about them, but if you can get away with it …
Robin Hanson: Exactly.
Robert Wiblin: Just to be clear for the audience, this view that, at least in America, getting extra healthcare isn’t that useful isn’t some peculiarly Robin Hanson view. This is just the standard view that health economists have, right?
Robin Hanson: The standard view is certainly that we can see very little effect on health, at the margin, of more spending on medicine, and it’s certainly the standard view that a lot of other things have much bigger effects.
I did a Cato Unbound forum about 10 years ago where my starting essay was “Cut Medicine in Half,” and a number of prominent health economists responded there. None of them disagreed with my basic factual claims about the correlation of health and medicine and other things, but, still, many of them were reluctant to give up on medicine. They said, “Well, yes, on average it doesn’t help, but some of it must be useful, and we shouldn’t cut anything until we figure out what the useful parts are,” and I make an analogy between that and a monkey trap.
In many parts of the world, there are monkeys that run around, and you might want to eat one. To do that, you need to trap one, and a common way to trap a monkey is to take a gourd, that is, a big empty container, and put a nut inside it. The monkey will reach into the gourd, put his fist around the nut, and try to pull his hand out, but the mouth of the gourd is too small to get his fist out, and he will not let go of that nut.
Robert Wiblin: Is that actually true? That’s just not a metaphor? That’s literally true?
Robin Hanson: Yes, he will in fact get caught and eaten because he will not be willing to let go of that nut, and this is a way to trap and eat a monkey. Now, I don’t know if this is how Curious George was caught, but it could have been. So this is a sad thing, basically, if you won’t let go of that nut, but I think it’s also true.
My colleague Bryan Caplan has his book out at the moment, The Case Against Education, and he’s getting a similar response. People tend to agree with him: “Yes, we don’t learn very much. There’s not much actual learning going on in school, or we don’t remember very much of it.” And he says, “Well, let’s cut the education allotment,” and they say, “No, no, no. Let’s wait until we can figure out what parts are useful and focus more on those, but we shouldn’t cut anything,” which, again, I think is the monkey trap.
Robert Wiblin: Okay, so I want to push back a little bit on this. One thing is, although I agree with the evidence that a bit of extra healthcare or a bit less doesn’t really seem to make that much difference to health, when I go to the doctor, I don’t personally feel like I’m going there to be cared for by someone. I just find it hard to believe that that’s just a delusion on my part. When I go to the doctor, it’s usually because I want to get cured of some specific thing and, if it doesn’t work, then I really feel like I’ve basically wasted my time. Am I just deluding myself, or is it, perhaps, just that the health issues I have aren’t so serious that I’m bedridden and feel the need to make sure that people are still caring for me and wouldn’t let me starve?
Robin Hanson: If you take the analogy of starving, the time when you notice that you really, really like food is when you don’t have enough. Okay, if you’ve got plenty of food, but it’s not very tasty, then you’ll eat periodically and it just won’t be something you think very much about. The same for sex, for example. I mean, for people who get enough sex, it’s not really an obsession with them. They can focus on other things. It’s the people who don’t get enough who get really obsessed with it.
Similarly, if you get enough medicine, whenever you want to, it’s there, you might not feel that there’s much of a strain or issue, but you should just ask yourself what would happen if you couldn’t get it? Say, you were hiking off in a distant place and you had some symptoms and, for several weeks, you couldn’t get anybody to look at them, and they seemed to be getting worse. Now, ask yourself how stressed would you be then?
Robert Wiblin: Yeah. Maybe if I were just someone who didn’t have any friends, and I didn’t feel socially secure that people would take care of me if I was sick, then going to the doctor might be very comforting, because it would show that someone did care about me.
Robin Hanson: Right, that is, if you were stressed, there would be somebody who would watch out for you and reassure you.
Robert Wiblin: Yeah. Interesting. So, an alternative explanation for many of these cases would just be that people aren’t that smart. They don’t read these papers about how useful marginal education or medicine is, they’re not studying the randomized controlled trials, and so they’re just making a mistake. They think that medicine is more useful than it is. Maybe doctors are very good at marketing their services and they overstate how valuable it is because it’s good for their profession. Can’t people just be making errors here, and why do we have to assume that it’s because of these hidden motivations?
Robin Hanson: Well, a simple theory is that somebody goes to the doctor to get well, or to show off. A slightly more complicated theory is that they are trying to do that, but they are ignorant about a lot of things and therefore they make a lot of mistakes. So these modified error-prone theories have a lot in common with the error-free theories, but they’re just going to have a lot more variety and variance around them. So if you aren’t very sure about the effectiveness of any particular treatment, say, but you still want to get well, then there will just be a lot more noise in your choices. You will sometimes choose things that are not effective and other times not choose things that are effective. On average, though, you wouldn’t choose too much medicine. You might choose the wrong kinds, but it would be unlikely that you would consistently choose too much medicine if you were just making mistakes about which medicine is effective.
Robert Wiblin: I agree that’s true about specific treatments, but couldn’t we all just be kind of fooled by … Okay, so people get sick and then they go to the doctor and then they tend to get better. And a lot of that is just because of regression to the mean because you go to the doctor when you’re most ill, and then in general when people are much more ill than usual they tend to get better over time. Perhaps we’re all just getting kind of conned by this statistical illusion that medicine seems more useful than it is because people tend to get better after seeking treatment. Is it possible to have illusions like that that cause everyone to be mistaken about the value of these different services?
Robin Hanson: Well, that theory would apply to many other things besides medicine. That theory would predict that we would way overspend on pretty much all advisors about anything we were anxious about. Any time you were anxious about your romance, you would go to a romantic advisor, who would reassure you and do something, and then later things would get a little better, and so you’d be doing that a lot. So this theory in a sense predicts too much: it predicts that we do too much of a whole wide range of things, not just medicine.
Robert Wiblin: And you don’t think that that’s true?
Robin Hanson: No, I don’t think it is. I think we’re actually pretty reasonably skeptical about most things. We don’t actually hire romantic advisors even if we are often stressed about a romance.
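Robert’s regression-to-the-mean worry above can be made concrete with a small simulation (the numbers and the visit threshold here are invented for illustration, not taken from the conversation). If people only visit the doctor when they feel unusually bad, their health will tend to improve afterward even when the treatment does literally nothing:

```python
import random

random.seed(0)

def average_improvement(n_people=100_000, visit_threshold=-1.0):
    """Health each period is an independent standard normal draw (no
    persistent illness, no treatment effect). People 'see the doctor'
    only when health dips below the threshold, i.e. on an unusually
    bad day."""
    gains = []
    for _ in range(n_people):
        before = random.gauss(0.0, 1.0)       # health on the day they decide
        if before < visit_threshold:          # only the unusually ill visit
            after = random.gauss(0.0, 1.0)    # next period: fresh draw, zero treatment effect
            gains.append(after - before)
    return sum(gains) / len(gains)

# Visitors improve on average even though the 'treatment' did nothing:
print(f"mean improvement after a useless visit: {average_improvement():.2f}")
```

With these illustrative parameters the average improvement comes out around 1.5 health units, purely from having selected a bad day. The same selection effect would make any ineffective treatment look helpful to an individual patient.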
Robert Wiblin: So basically your approach here for showing these hidden motives is to look at a whole bunch of different odd things that people do and then say that their actual behavior is better explained by these hidden motivations that you might expect them to have anyway, given how humans evolved? And I suppose your defense would be that people might challenge any one of these: they might not buy the story on religion or education, they might think there’s a different explanation. But you think that as a whole it paints a compelling picture?
Robin Hanson: Right, because each of us is going to be very sensitive in one area that we hold especially sacred. I can’t really expect to convince you of all 10 of these areas, for example, but if you believe eight of them, then you should believe the case that there are a lot of hidden motives going on. That’s the main case to make. If there are a lot of hidden motives going on here, that says that we are just misunderstanding a lot of human behavior right from the get-go, and a lot of our policy analysis is going wrong, seriously wrong.
Robert Wiblin: Do you try to take a different approach to justifying it, saying that this is what we should expect even before we look at any of these specific cases? Because humans evolved to be good at reproducing themselves, and sometimes that’s going to involve pretending to have different motivations than the ones they actually have, and of course it’s not going to be obvious, because no one’s going to want to fess up to this. Do you think that people should expect this to be true even before they look at the specific cases that you’re bringing out?
Robin Hanson: They should expect it to be plausible, but that’s not quite the same as being true. So the first third of our book, again, tries to make it plausible that we would often have hidden motives. But even people who tend to believe everything we say in the first third of the book don’t tend to have the beliefs that we describe here about our hidden motives in education and medicine, etc. They can still be quite surprised by each of these specific examples.
So the general idea that we would opportunistically fool ourselves when it’s to our advantage is plausible, but you still might not believe there are that many opportunities to fool ourselves. You might think the things people self-deceive about are just the obvious ones, say whether their spouse is having an affair, or whether they are an especially good driver, or some other things like that, but you might not think there was much point in self-deceiving about the point of going to school, the point of going to the doctor, the point of voting. So we really have to go through the details in order to persuade you of that.
Robert Wiblin: Another potential knock on the evidence you’re providing is just that it seems a bit unfalsifiable, because you’ve got all these different cases where people’s behavior seems a little bit strange and then you’re coming up with a story that you think better explains it. But in many cases couldn’t you come up with a signaling explanation for the exact reverse? So let’s say that people didn’t spend very much money on medicine and they were reluctant to go to the doctor.
Couldn’t you then say, “Oh, this is because they don’t want to seem weak to other people, they don’t want to seem like they’re sick, and so this explains why we’re spending so little on medicine”? So isn’t there the potential, whenever you’re trying to reinterpret why people say what they say and do what they do in a way that’s not literal, to get a lot of false positives, where you can explain anything through this sort of signaling or hidden-motivation approach?
Robin Hanson: The social world is large and complex and you have to be focused on explaining overall patterns. You kind of have to give up on explaining any particular person and what they’re doing at this moment and why. But we have a lot of people with a lot of data and it isn’t all random, there are a lot of patterns in what people do. And yes, if you take a simple theory without much structure it can almost explain everything.
So, for example, the error theory can explain just about everything. Whatever we were trying to do, we could be accidentally doing something else instead because we are mistaken. So in a sense the error theory is too broad. The conformity theory is also a bit too broad. Many people have a simple conformity theory: that we just do whatever we’re supposed to do, whatever everybody else is doing, and that’s why we do things. But of course, that could explain almost any pattern in what we do.
Robert Wiblin: You then have to explain why things started to be the way that they are.
Robin Hanson: Right. And so it’s the details that matter, yes, that’s the crucial point here. At a very high level you could come up with these explanations, but they wouldn’t fit the specific details. So it’s the details where all the meat is, where all the evidence really lies. Yes, at some abstract level you could be going to school to get too little learning, perhaps to avoid learning, you could say, or to avoid sending a signal or something, but again, when we have these specific detailed patterns, those are what we can use to make these inferences. If we say, “Employers pay you three times as much for graduating as they pay for other years of school,” well, that’s a detail that fits better with some theories than others.
Robert Wiblin: Okay, so one other chapter that you wrote was about charity, which is perhaps most relevant to this particular podcast. What do you think are the puzzles about how people engage in charity and altruism and what do you think better explains people’s behavior?
Robin Hanson: So as you know, being associated with the field of effective altruism, the concept of effective altruism can be a reference point for critiquing actual charity and supposed altruism. People do not seem to pay much attention to the effectiveness of their charity. That’s one big clue. You would think if you were trying to help, you would be interested in data about how much something helped, and whether it helped, and who it helped, and there’s really remarkably little interest in that effectiveness. People do not look it up. And if you go to these charities and ask for their effectiveness data, they just don’t have it. And when organizations have tried to specialize in collecting and sharing this information, the charities have not really been willing to help much in providing the information about whether things are effective. That’s definitely one clue.
Another clue is the phenomenon that people tend to give to more than one charity. In a large world where you’re just trying to be helpful, it doesn’t actually make much sense to give to more than just one charity, certainly not dozens. It’s hard to figure out what’s effective, so you’d spend your limited time finding the most effective thing you can find and then give all your money to it. And, unless you’re ridiculously rich, you won’t actually change the amount of money going to that charity very much, so concentrating your giving would be more effective.
If you’re trying to be helpful, you would make a choice between directly helping yourself and earning money to give to somebody else, to pay them to help. But of course people spend a lot of time directly helping, even when they’re relatively well paid and could pay other people, who earn much lower wages, to do a lot more.
Robert Wiblin: This is the example of the high-flying lawyer dishing out soup in a soup kitchen.
Robin Hanson: Exactly. So these are some of the puzzles that suggest it’s not just directly about helping; there are a couple more we’ll probably get to in a few minutes. But the alternative theory that we suggest is that you are trying to show that you feel empathy. That is, you want to show there is an emotional capacity in you such that, if you see someone around you in need, you will feel like you want to do something about that. And existing charities do tend to successfully show that. They show somebody who needs help in a direct way that invokes your emotions, and you do help to some degree. You do the thing that people would say would help, and that shows people around you that you’re not an uncaring person, and it might show them, for example, that if they were in need of help later and they were near you, you would see them and you would feel for them too. You want to show people that you will be a useful ally: if either of you is in trouble, the other will come to their aid.
Robert Wiblin: So why is it more important to show people that you’re the kind of person who, if they see someone in pain, is going to try to help them right then and there, than to show that you’re the kind of person who’s smart enough to think about which charities are useful, and does their research, and actually tries to help people? Because if you don’t care about whether charities are effective or not, my thought would just be that you’re not really going to pay attention to whether you’re actually helping your friends or not.
Robin Hanson: Right, but at least if I want your help and I’m your friend, it will be my job to put myself in front of your face and to tell you about my problem. And maybe I figure I could successfully get myself in front of your face, make you pay attention to my problem, help you understand what I think is effective, and then you would just do what I say, and that’s maybe what I’m mostly hoping for. And if you were this person who thinks carefully about how to help the person in the world who most needs help, well, I’m plausibly not going to be that person, so I’m not going to win out in that contest, and so it’s not actually going to be that useful to know you as the sort of person who will help the person in the world who needs the most help.
Robert Wiblin: In fact for most people it would be very bad to find out that their friend was only focused on helping the worst off people because that means that they’ll almost never want to help their friends directly, or at least not for that reason. They might do it for a different reason but …
Robin Hanson: Right, exactly.
Robert Wiblin: Okay, interesting. So I guess that’s a way in which effective altruists, or people who are focused on giving to people very far away whom they’re never going to meet, are signaling something that’s potentially a bit disturbing to their friends: suggesting that they have other priorities that might be more important than them.
Robin Hanson: Right. Now, it might be fine as part of your portfolio of caring. You might care about your friends and about your neighborhood and about people starving in Africa; that might just show you’re a person who cares all over the place. It’s just if you get really focused and obsessed with only the most effective altruism that other people raise an eyebrow and wonder if they can trust you.
Robert Wiblin: So there are a couple of puzzles here. People are very focused on whether charities are fraudulent or not; this has been a real fashion lately. Do charities spend too much time on fundraising? That’s something that a lot of people worry about. And also, people do give a decent amount of money to people in other countries. Why would they do this if they’re just trying to show that they have hearts of gold, tender hearts that can’t help but help someone who’s suffering in front of them?
Robin Hanson: Well, as you know, on the world stage there are big things that happen that kill lots of people that we’re not very concerned about, but when a few people get hurt by an intentional act, we are vastly more focused on that. So 9/11 was just 3,000 people, but it weighs a lot more heavily in people’s minds because it was done on purpose. So certainly we’re far more wary of, and watching out for, harm that’s done on purpose. And in the case of charity, where we want to seem caring, we also don’t want to seem like dupes. So if the charity is spending the money, and they’re not stealing it for themselves, then even if they’re not spending it very effectively, at least we’re not dupes: they aren’t taking advantage of us, they’re just not being very careful.
Robert Wiblin: I guess also if you’re being cynical you might think people want to say that they are caring people and that they would give a lot of money to charity but they don’t want to give money to these charities because none of them are going to help or something like that. And so this allows them to have their cake and eat it too, they get to keep the money and claim to be caring anyway.
Robin Hanson: Right, that’s certainly an accusation about people on the street who ask for money, that you don’t want to give it to them because you’re afraid that a large fraction of them are actually trying to dupe you, that they go home and they take off their dirty clothes and sit in their nice luxurious apartment and laugh at how they fooled you.
Robert Wiblin: Okay, so what are some of the things that you think people would do if they were more focused on doing as much good as possible?
Robin Hanson: Well, we just mentioned a couple of them. We can add a couple more. One is that, not only would you try to have other people help instead of helping yourself if you can make more money than they do, you might also try to help at a peak point in your life. That is, when you’re young, say 20, you might care about people and want to help, but you really don’t have much money, or much time even, and you don’t have many social resources, knowledge, or experience. You just aren’t as good at judging what would be effective, and aren’t as able to put together an organization and social group that could be effective at doing these things. So you might just think you should wait until you’re maximally productive. On jobs, we have a standard trajectory of productivity over the lifecycle: people reach peak productivity around the age of 40 or 50, and those are the ages where, if they start a business, it’s going to be the most effective and least likely to crash and die. People just tend to be the most effective at that peak age.
So you might think that’s when you’d reach peak effectiveness in charity as well. Early in life you would collect your resources and save, and you would wait until you’d learned more and could better judge who was faking it, who was trying to dupe you, and what the real problems were. Then, when you finally reached some sort of peak in resources and connections and knowledge and insight, that’s when you’d expect to have the biggest impact, and that’s when you’d peak your charity activity. But that’s not what we see people do. In fact, it seems people are most eager to donate time and money when they are very young, and later on, when they know a lot more, they do less.
Robert Wiblin: So why would it be more important to show that you’re a caring person when you’re young than when you’re old?
Robin Hanson: Well, we form relationships when we’re young. So if you’re trying to convince someone to form a relationship with you on the basis of your empathy you have to do that before relationships are formed. As you may know, we form most of our relationships young in life, even work relationships but also lovers and friends, and later on in life we don’t have as many opportunities to form new relationships. And that’s actually something older people like myself might have wished we had been more clearly told when we were younger. Work harder to collect friends when you’re younger, just collect more than you need and keep them around because it’ll be much harder to collect them later.
Robert Wiblin: Okay, so once you’re 60 or 70, even if you do manage to show off what a kind-hearted person you are, you just don’t have that many opportunities to benefit from that, or not as many as when you were at college?
Robin Hanson: Right.
Robert Wiblin: Okay, and what are some of the other things that you think we should do?
Robin Hanson: Well, another thing we could consider doing is what I call marginal charity. That is, slightly adjusting our behavior in order to make the world a better place. One example: if you’re building a building that’s N stories tall, you might calculate your profit-maximizing height, and that might be, say, 12 stories. If your profit-maximizing height is 12 stories then there’s going to be a smooth peak near there, and if you build 11 or 13 stories it’s actually not going to change your profit very much. Small changes around a maximum like that have very small effects on your profit. But if you looked at the world around you and asked, “Well, how many stories does the world need?”, plausibly the optimal amount for the world isn’t exactly the optimal amount for you privately, it’s something different. Plausibly it’s higher, say 14 stories or something.
So if you adjust your decision in the direction of the social optimum, i.e. add another story, it won’t cost you very much, and on the margin it will help the world a lot. And the ratio of the help you give the world to how much it costs you actually goes to infinity as you consider smaller and smaller changes.
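Hanson’s ratio-goes-to-infinity claim is a version of the envelope theorem: near a smooth private optimum, deviating costs only a second-order amount, while the social gain is first-order. A minimal numeric sketch, using a hypothetical quadratic profit curve and a hypothetical linear social benefit chosen purely to illustrate the shape of the argument:

```python
# Hypothetical builder: private profit peaks at 12 stories (a smooth,
# quadratic maximum), while society gains a constant amount per story.
def private_profit(stories, peak=12.0, curvature=0.5):
    return 100.0 - curvature * (stories - peak) ** 2

def social_benefit(stories, per_story=3.0):
    return per_story * stories

# The private cost of building slightly past the optimum shrinks like
# delta^2, but the social gain shrinks only like delta, so the
# gain-to-cost ratio blows up as delta gets small.
for delta in (1.0, 0.1, 0.01):
    cost = private_profit(12.0) - private_profit(12.0 + delta)
    gain = social_benefit(12.0 + delta) - social_benefit(12.0)
    print(f"extra {delta} stories: cost {cost:.5f}, gain {gain:.2f}, ratio {gain / cost:.0f}")
```

With these toy numbers, one extra story gives a gain-to-cost ratio of 6, while a hundredth of a story gives 600, which is the sense in which the ratio goes to infinity for ever-smaller adjustments.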
Now, of course, very small changes are maybe not worth the bother of thinking about, or if there’s any sort of transaction cost of making a change, maybe it’s not worth the bother. But in general you should just look at your life and ask, “How can I adjust my behavior just a little bit to make the world a better place?” Like, “When do I leave for work? Is it before or after the peak of traffic? If it’s before the peak, well, if I got up five minutes earlier and left five minutes earlier it would hardly cost me anything, but it might help the world a lot.”
Robert Wiblin: Yeah. So this idea of marginal charity is very cool. It has a nice theoretical aspect that, having studied economics, I find really neat. And I think maybe the reason people don’t do this is that building an extra story on your hotel doesn’t really show what a caring person you are. People will never really know you did it; there’s no way of showing that you wouldn’t have built the building that tall anyway. But I’m not actually sure this is very useful in everyday life if you’re trying to do a lot of good, because how many opportunities do you really get to do this kind of marginal charity? And when the changes are so small, even though it’s very cost-effective and the benefit-to-cost ratio is high, it doesn’t seem like the total benefit provided is very large relative to the amount of thinking you might have to put into finding these cases and then acting on them. Do you think that’s right?
Robin Hanson: Well, I think if you think about it you’ll find you make thousands of choices every day. And this argument in principle applies to all of them. There aren’t choices that this doesn’t apply to. That is, pretty much every choice you make on some parameter, there will be your personal optimum and it won’t be the social optimum, and you should just shave it in the direction of the social optimum. And so the main thing you need to do is just be able to know which direction is the social optimum for all the parameters of choice that you make. Now, it helps to be an economist to be able to figure that out, but if people are interested I think we could teach them a couple-day course where they would be able to apply this all over the place.
One standard example is being nice and being generous, having gratitude, having a positive attitude. All these things seem to be useful for the world on average. Just be a little bit nicer to everybody you interact with. You already have some reason to be nice – reputation and not wanting to feel like a jerk – just be a little bit nicer, right? Smile a little bit more, take a little more of a moment to look them in the eye and then be friendly. And this isn’t an original thing for me. I mean there are many people who over the centuries have said, “A way to help the world is just to be a little bit nicer in each of your interactions.” And that’s basically what I’m saying, just be a little bit nicer in every little thing you do.
Robert Wiblin: And it will cost you very little and it will help other people quite a bit.
Robin Hanson: Exactly.
Robert Wiblin: Yeah, but in that case how do you know that people aren’t already doing that?
Robin Hanson: I don’t. And if they are, great.
Robert Wiblin: Which I guess is part of the issue, because you don’t know whether people are acting differently than they would otherwise, it can’t really show that much about them.
Robin Hanson: Right. But just ask yourself, have you bothered to think about this? And if you haven’t, then you should catch up with everybody else. If everybody else is doing it, how come you aren’t?
Robert Wiblin: Yeah. I do want to push back a little bit more, because it’s true that I make thousands of tiny decisions every day, but in most of those cases, if I were to get up a little bit earlier or, I don’t know, eat slightly different food, it’s just not clear that shifting it a little bit would even be worth the effort of thinking about the case. I suppose you’re thinking, “Well, you only have to think about it once and then you can just change your behavior forever.”
Robin Hanson: Yeah, just make some general policies. I guess it’s also that if you think of your life as devoted to altruism then you think this is going to be a small percentage of your overall effort in altruism. But for most people, they only devote a tiny percentage of their income to charity, and so for most people the amount that you might do through this would rival and be comparable to what they give directly and it would be much cheaper to them.
Robert Wiblin: I think it does apply to people who are running businesses though, or at least making big, big decisions about what kind of investments or products is a company going to make and what are they going to charge, because in those cases they actually can calculate it out and think about what the social effect would be roughly and then potentially have pretty large effects at very low cost to the company.
Robin Hanson: And I think just going through the exercise of thinking about all the major choices you’re making, and which direction has social benefits, would be socially useful beyond the personal effect, because it might inform us better about how we should push politics and social policy. If we all agreed that it would be better if we weren’t on the road when everybody else was there, then we might be willing to support, say, traffic congestion prices, things like that. And so one of the reasons we don’t have better policy in many areas is that most people just aren’t really aware of which directions would make the world better.
Robert Wiblin: And you also suggested in that chapter that you think people should potentially leave money to be given out a long time in the future. Do you want to explain why you think that’s a good idea?
Robin Hanson: So I first talked about waiting until you’re older, which I think is the easier argument to make, because when you’re older you’ll be fully knowledgeable and capable of making your choices. But there is a temptation to wait even longer, until after you’re dead. Now, after you’re dead, of course, you can’t manage the money as directly. You’ll have to pay what we call the agency cost of telling somebody else what to do with the money, and you’ll have to trust them to do what you told them, and even to understand what you said. So there is a risk and a cost there. But money accumulates over the long run quite consistently at high rates of return. I believe a standard estimate is that over the last century, in the major developed countries, stocks and real estate both grew at basically 5% per year on average. 5% per year accumulates quite rapidly. That’s a doubling every 13 years or so. And that means a small effort invested now accumulates over a long time into an enormous amount.
So if your money doubles every 13 years, then in 130 years it’s gone through 10 doublings, which is a factor of 1,024. So you can honestly wait 130 years and have 1,000 times as much in resources to hand out. It seems to me that could cover a fair bit of inefficiency from the fact that they don’t know exactly what you wanted. And of course, if a lot of you were doing this, then you could share the cost of managing the process of making them do what you want.
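The arithmetic behind those round numbers is just compound growth: at rate r the doubling time is ln 2 / ln(1 + r). At the assumed 5% that works out to about 14.2 years rather than 13, so ten exact doublings take about 142 years, and 130 years of 5% growth multiplies the money roughly 570-fold; the qualitative point survives either way. A quick check:

```python
import math

rate = 0.05  # Hanson's assumed long-run real return on stocks and real estate

# Doubling time at 5%: ln(2) / ln(1.05) is about 14.2 years
# (the rule of 72 gives the same ballpark: 72 / 5 = 14.4).
doubling_years = math.log(2) / math.log(1 + rate)

multiple_130 = (1 + rate) ** 130   # growth over the 130 years quoted
ten_doublings = 10 * doubling_years  # years needed for a full 1,024x

print(f"doubling time:          {doubling_years:.1f} years")
print(f"130-year multiple:      {multiple_130:.0f}x")
print(f"ten doublings (1,024x): {ten_doublings:.0f} years")
```

Either way, patient money compounds into hundreds or thousands of times the original stake within a century or two.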
Famously, Benjamin Franklin actually did something like this. When he died he was relatively wealthy, and he gave the money, I believe, to the city of Philadelphia to invest for him. And the rule was that after a century they could start spending it, but they had to spread it out over the entire next century. And the way they were to spend the money, I believe, was to promote apprenticeships. He thought that was good for the poor people of the city: they could have an apprenticeship and learn a useful skill, they would earn more, and the city would benefit. And so that’s how he spent his money, and in fact the money did grow enormously over the century, and then over the following century as well. He didn’t make a full 5%, because the city of Philadelphia took some of the money by not giving him as high a rate of return, but still he plausibly had a larger effect later than he would have had in his own time.
Robert Wiblin: So there’s been a pretty big active debate in the effective altruism community for a number of years about whether it actually is a good idea to give later or to give sooner, and there’s a whole bunch of considerations one way and the other, which probably deserves its whole own show. But I guess your claim here is that almost no one really thinks about giving later and the reason is that it wouldn’t help them to show what nice people they are if they said, “Oh sure, I’ll give it in 50 years time.”
Robin Hanson: Right, although if they created some institution to commit to it, they might seem to care more, because one of the issues is the doubt that you will actually give later if you still have the choice not to, and people want to show that they have actually made the choice to give. But there’s also the empathy: you can’t really see all these people in need a century or two later, so you can’t be reacting via the direct empathy of seeing them in need. It must be something more cerebral and abstract, which doesn’t endear you as much to the people who hope that you’ll help them.
Robert Wiblin: So in the effective altruism community there’s been this question for quite a long time: to what extent are people not doing the things that are most effective because they actually don’t really care about helping other people, they care about something else, like showing that they’re empathetic to others? And how much is it just that they don’t realize that the things they’re doing are not terribly effective?
And although I think the arguments you’re making push in favor of the interpretation that maybe people just really aren’t that motivated to help others sincerely, that they’re engaging in charity for other reasons, I think you also have to allow that people might just be making mistakes in a lot of these cases. A lot of people, for example, try to become teachers or doctors in order to improve the world, and our research suggests that isn’t as effective as people think. But isn’t it understandable that people might think being a teacher or a doctor is really helpful, since it seems like they’re helping on the face of it, and shouldn’t that temper our cynicism, given that people could just be mistaken? And sometimes when you explain to them that something else is going to be more effective, they do actually change their behavior, not always but sometimes.
Robin Hanson: So in law, as in ordinary human norms, ignorance is often an excuse. It’s okay if something happens under your watch if you didn’t know about it, but it’s less okay if you made sure not to know. So a president who arranges plausible deniability, who knows he’s not hearing things because his people aren’t telling him, precisely so he won’t know and won’t be guilty, well, that’s not really so good.
So a key question here is: if we are ignorant, why are we ignorant, and are we comparably ignorant in other areas? And I would say that, for example, in medicine, it looks like we go out of our way not to know things that we would know in other analogous areas.
To be specific, there was a study done a while back of patients about to undergo surgery. The surgery had a few percent chance of killing them, so it was high-risk surgery; this was a big deal. These patients were asked if they would like information about which surgeons and which hospitals in their area had what rates of death when they performed the surgery, so that, for example, they could pick the one with the lower death rate. Only 8% of patients were willing to pay 50 bucks to find this out. And even when they were given the information for free, they didn’t act on it. This is one of many examples showing that people are just not very interested in information on the quality of medicine. Relative to all the other behaviors where people do ask for more information, this stands out as different.
So it’s not just that we are ignorant about education and about medicine, we are surprisingly ignorant. I have to say it’s really surprising that incoming college students who pick a major hardly know anything about what happens to people with that major: how often they get jobs, where the jobs are, how many hours a week they work. Amazingly enough, people choose majors and career plans without knowing even the basics of the consequences, which is suspicious, because they know an awful lot about, say, their dorm, where they’re living, and which meal plan they’re having. It’s not like they don’t get information about anything.
So in many of these areas there’s a really surprising lack of attention to information, whereas in other areas people pay a lot more attention to those sorts of issues.
Robert Wiblin: And I guess, yeah, if we’re trying to figure out the appropriate level of cynicism, we could look at when GiveWell, for example, says, “Oh, we’ve looked into these developing-world charities and found that they are extremely effective”: how often do people actually change their giving behavior on the basis of that? How often do they care? And I mean they’ve had a reasonable amount of success, in that effective altruism is growing, but-
Robin Hanson: But as a percentage of the entire world activity of charity it’s still pretty small.
Robert Wiblin: Right, it could be growing a lot faster if people cared more.
Robin Hanson: And there’s also the issue of how clear it is. That is, if you don’t fundamentally care, but it becomes visible that you are acting in deviation from this thing you’re saying, then that will look bad for you. So even if you don’t directly care, you will still move in this direction as it becomes visible, not just to you but to everybody. So that suggests it’s less important to tell each person that effective altruism is more effective, and more important to tell everybody that it’s effective.
So we have this section on advertising in our book where we talk about how many products are advertised to people who will never buy them. Rolex watches, as you might have noticed, are advertised in mass contexts where everybody can see the ad, even though hardly anybody actually buys a Rolex. And plausibly this is because the value of having a Rolex is that when other people see you have one, they believe you are a special person, and that’s what you’re trying to buy. And that doesn’t work unless everybody knows about the watch and what it signals.
Robert Wiblin: So what you’re saying is if we want to convince people to give to charities we think are more effective we not only have to tell them that they’re more effective but convince everyone else to judge them on the basis of whether it was effective or not?
Robin Hanson: Exactly. You might think about Super Bowl ads. It turns out that advertising costs more, per person who sees it, the more people will see the ad. That’s why people often want a Super Bowl ad: they’re buying the knowledge that lots of people will know that lots of other people saw it. They create more common knowledge about the ad. And that’s what you’re trying to do here with effective altruism: you’re trying to shame people into doing the more effective thing, because they will see other people watching them, and those people won’t believe they’re caring if they don’t do it more effectively.
Robert Wiblin: Okay, yeah, so that’s interesting. Are there any other ways that you think we could improve outreach on the basis of this slightly more cynical explanation of people’s charitable behavior?
Robin Hanson: Well that’s the key one, but it has more implications; there’s more detail to go into. The key one is that it’s not enough just to convince you that this is an effective charity; you need to convince the people you’re trying to impress that it’s an effective charity, so that you will want to impress them this way. A problem with that, of course, is that you will often create the impression that you’re holier than thou, and other people may then criticize you for that. So when you try to tell everybody that this is the most effective thing, are you creating the impression that the people doing it think they’re better than everybody else? That puts a bad taste in people’s mouths, and they might want to step away from it.
There have been lab experiments on what’s called the public goods game. In a public goods game, everybody sits around a table, they each put money in a pot, the amount of money in the pot is then, say, doubled or tripled, and then handed out evenly back to everybody in the room, even if people didn’t put in even amounts. So there’s a temptation to not put money in the pot and still take advantage of the doubling or tripling of the money that other people put in. And there are many ways that people try to organize public goods games so that people will be encouraged to put more money in the pot.
And one of the things they do is they let you punish other people after you see how much money they put in the pot. So you might think, “Well, if other people don’t put enough money in the pot, then we’ll punish them, and that’ll encourage them to put more in, and more money will go into the public good.” And that kind of works, except what happens is that people punish not only those who put in less money than others, but also those who put in more. They actually punish people who give more than the average amount, in addition to those who give less, because they’re basically interpreting it as, “You’re pretending you’re better than the rest of us. You’re putting yourself up there as being better, and we’re going to knock you down from that. That’s not okay.”
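The punish-the-over-giver pattern Hanson describes (sometimes called antisocial punishment in the experimental literature) can be sketched as a toy round of the game. The payoff numbers, the 2x multiplier, and the punish-any-deviant rule below are illustrative assumptions, not the actual lab protocol:

```python
# Toy public goods game: contributions go into a pot, the pot is doubled
# and split evenly, then players sanction anyone who deviates from the
# group norm -- in EITHER direction, as the lab experiments found.
def play_round(contributions, multiplier=2.0, penalty=3.0, tolerance=2.0):
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    mean = sum(contributions) / len(contributions)
    payoffs = []
    for c in contributions:
        payoff = share - c  # equal share of the pot, minus what you put in
        if abs(c - mean) > tolerance:  # free-riders AND over-givers get hit
            payoff -= penalty
        payoffs.append(payoff)
    return payoffs

# Two norm-followers, one free-rider, one conspicuous over-giver:
print(play_round([5, 5, 0, 10]))  # -> [5.0, 5.0, 7.0, -3.0]
```

Note that even after punishment the free-rider still out-earns the group, while the over-giver does worst of all, which is exactly why punishing high contributors undermines cooperation.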
Robert Wiblin: This is kind of like the person who works too hard in the office and puts everyone else to shame and you kind of resent them for it.
Robin Hanson: It’s exactly like that. And so that’s a potential problem in charity and altruism: if you try to make it seem like you think you’re better than everybody, that you’re claiming the social mantle of the morally superior person, they may resent that and try to knock you down, and thereby discourage people from doing what you’re telling them to do.
Robert Wiblin: Do you think we’re seeing that? I’m probably not a good person to judge because I’d be too drawn to this cynical explanation of our critics.
Robin Hanson: I certainly think it’s part of the emotional reaction. I’m not sure people would say that if they had something else they could say instead; they might not want to admit that that’s their reaction, they might want to point to something else. And people do point to a lot of random things when they criticize effective altruism, so I think we’re justified in asking, “What’s the real motivation there?”, because the things they point to kind of don’t make sense.
Robert Wiblin: This is good. Cynical explanations for other people’s behavior rather than my own.
Robin Hanson: We’re going to get to your own here too.
Robert Wiblin: Okay. Let’s talk about that. I imagine you would say that even people who claim to be engaged in effective altruism just because they want to help other people maybe aren’t quite as altruistic as… They’re not quite as pure as they want to let on.
Robin Hanson: So my basic approach as a social scientist is to focus on the average, typical person and try to explain the middle of the distribution of behavior, knowing that people do vary. Then if I want to look at my own behavior, I’m mostly going to assume that I’m like everybody else unless I have a strong reason to think otherwise. Now, there are variations, so I might not be the same as everyone. But especially in directions I would like to believe I’m different, I should be suspicious of whatever excuses I’m coming up with to make myself think I’m better, because maybe I just want to believe that.
So effective altruists should probably follow the same strategy of assuming they’re not that different from everybody else. Then we can start to think about the ways they might actually be different and why that might be true; that will be the next correction in our calculation. But the first cut is: if you were like everybody else, could we explain your behavior? Could that account for what we see, or do we need to invoke anything else?
Robert Wiblin: So do you think we can explain my behavior, or the behavior of other people involved in effective altruism, just by saying, “Oh, we’re just as motivated by self-interest as everyone else”?
Robin Hanson: We might go a long way. It might not explain everything, but it goes a long way. Now, I think another issue is the phenomenon of the sincere nerd. I call this “smart and sincere syndrome.” People vary in their social savvy, in their ability to read other people socially and to play clever social games. Some of us are nerds, which, at least in my usage, means we just don’t have those social skills. We can’t read the social situation as accurately. We can’t instantly know when to apply one social strategy and when to switch to another.
So in a situation where people are pretending to do one thing but really doing another, we nerds are scared that we won’t know how to pull that off. We won’t be able to do the other thing correctly, or tell whether we can get away with it. So we nerds are tempted to just do the simple, sincere thing: go with the thing you’re supposed to be doing, because we know we can do that and we won’t make a mistake. We know people go to school for other reasons, but we’re just going to go to school to learn the material, because, hey, we know how to do that. We can manage that, and we’ll look okay that way.
So even with charity, I think nerds tend to adopt the simple sincerity strategy. The safe social strategy for them is just to tell themselves and other people that they’re doing exactly what they say they’re doing.
Robert Wiblin: Okay. So the idea there would be that nerds who are involved in effective altruism are trying to do as much good as possible because then they don’t have to play these social games of figuring out what charity is going to make them look best? I’m not sure that seems like such a great explanation.
Robin Hanson: Well, it’s the high road. So often, again, there’s a high thing we’re all pretending to do, and then there are the lower things we might actually want to do. If you aren’t as good at pulling off the lower things, you might prefer to take the high road and then implicitly say, “The rest of you are not as good as me.” So often, people who are not as good at being depraved become religious, become the high-minded people who don’t sin, because they don’t know how to sin, really wouldn’t get away with it as well, and maybe wouldn’t enjoy it or don’t have as many opportunities. So hey, why not? And it’s a standard observation that later in life, when you don’t have as many opportunities to sin, you suddenly become religious and very pompous.
Robert Wiblin: So I guess the cynical explanation of… well, let’s take me, for example. I want to go out and say, “Oh, the reason I took this job is that I just want to help people as much as possible. I want to improve the world, with everyone’s welfare impartially considered.” But then you’d say, “Well, Rob, don’t you kind of enjoy the intellectual stimulation you get? Maybe you’re enjoying that because it allows you to show off that you’re smart, that you can write well, that you can produce this podcast so people find out about you, and it lets you claim you’re more consistent and more able to figure out what’s effective than other people?” There are all these benefits I get other than directly showing people how empathetic I am. That’s kind of the story you would tell?
Robin Hanson: Well, it’s a story you could tell. But the fundamental story here is just that human behavior is complicated. People vary enormously from person to person and context to context. And in almost all these areas, the usual story does describe some people some of the time. So no doubt some people, some of the time, are in fact focused on helping. It’s less than we’d like to think, but it’s certainly true. The question is just to infer in which particular cases, and how true it is. One of the variations among people is that some people actually care more. So the question is what fraction of the variance in some people being effective altruists is explained by that factor. And there are other factors available to explain it, and I’m not going to say I know.
Robert Wiblin: Okay. I was trying to draw you out, because you do know a fair few of us, so maybe you could tell if we’re hypocrites. But I guess you don’t want to name names.
Robin Hanson: Well, we may get to talking about the effective altruism community and some of its other features, and that could be more relevant data for illuminating this. Merely knowing that some people are effective altruists doesn’t by itself say that much. Again, it always comes down to the detailed patterns; those are the key cues that I use.
Robert Wiblin: So are there any patterns in effective altruism that you think indicate that we may be deviating from being as altruistic as we could be? And I mean, maybe we could learn there are other ways that we could have more impact if we realize that we’re actually doing things for selfish reasons?
Robin Hanson: Well, one thing I’ve said, and even gave a talk on at an effective altruism event years ago, is that effective altruism is a youth movement. It has many features that are classically the features a youth movement has, and that’s distinctive data for interpreting people’s behavior. Think of, say, ’60s counterculture, think of libertarian movements after that, think of blockchain communities today. These are all youth movements. They have the distinctive feature that they’re mainly composed of young people. These young people have something new that they think wasn’t there before. They’re focused on talking to each other more than to older people. They think younger people are more appropriate for roles in their organizations and groups. They complain that the older generation has not been keeping up, has been missing out on something, and that they’re going to replace it. Youth movements through the centuries have been somewhat of a random gamble, but often they’ve benefited the youth in large ways, and that’s plausibly what’s going on here.
Robert Wiblin: So yeah, I’ll stick up a link to this blog post where you explained effective altruism as a youth movement. Is there anything wrong with being a youth movement? I mean, I read it and I kind of agreed: yeah, we have a lot of young people; yeah, to some extent they criticize the older generation and the mistakes they perceive them as having made. But is there anything concerning about that?
Robin Hanson: Well, often youth movements, perhaps unfairly, exclude older people from their new movement. If the world doesn’t change, then when you’re young you have to slowly work for decades to rise up to the positions the old people hold now, if you rise through their existing hierarchies. If you can find a way to set aside the existing hierarchies and create a whole new thing, then the youth can have a big advantage.
Robert Wiblin: And you can start at the top.
Robin Hanson: Right, exactly. So that’s an advantage of a youth movement: somehow setting the existing hierarchies aside. You can see that perhaps in the cryptocurrency world. Instead of slowly rising through the existing finance industry, you might start a whole new finance industry. You can start on the ground floor and suddenly you’re as good as anybody else. Then as you do that, even if you were succeeding, and even if there were older people who wanted to come and join you, you might not let them, because, “Hey, this is us pushing you guys aside. Don’t try to grab on to us now.”
Robert Wiblin: Because then they’ll take the senior roles because they’re more experienced.
Robin Hanson: Yeah. “We came out of the ocean and we shot your boat down, and now you’re trying to swim to our boat. Get out of our boat.” So to the extent that that’s the kind of thing going on here, it’s suspicious when you go out of your way to exclude older people from participating. And that’s a common feature of youth movements. It’s not just direct exclusion, but also a suspicious obsession with internal conversation. You create a whole new terminology for everything, you presume that all prior discussions are pretty irrelevant to your new, different thing, and you basically reinvent a lot of things, creating new terms and new structures, plausibly for the purpose of keeping the people who would do things the old way from jumping in.
Robert Wiblin: Yeah. I’m not sure that I think effective altruism is doing that all that much. I guess it’s true we have criticism of the way things have been done in the past. But I think there’s also a lot of respect at least among many people for the fact that we’ve kind of grown out of existing movements that were kind of already happening and we’re just kind of the next step on a long chain of trying to quantify the effects that different things have. Trying to be more reasonable. Evidence-based policy, that kind of thing.
Robin Hanson: Let’s just make a distinction between what might be the individual selfish purposes here and the social effects. Of course, youth movements also have social benefits. I mean, the world does often get stuck in old ways. And often what it does take to make a big change is to have a whole group of people coordinate to try to change it all together. That’s a perfectly reasonable way the existence of a youth movement can help the world in general.
I mean, what I’ve also said is that if you’re honest about it, you would expect that early on in a youth movement, its direct actions on the world would be largely ineffective. Not very useful. The major benefit of a youth movement is that as its members grow up in life and reach peak productivity decades later, at that point they will have these strong social bonds, they will be in positions of influence, and then they will have their maximum influence. But that’s not what people want to say. So people like to pretend that the youth movement, just when it’s getting going, is having a big influence. But it doesn’t really have a big influence. So the ’60s protests or counterculture didn’t actually have much of an effect back then. But later on, when those same people rose into positions of influence and bonded with each other and pushed for policies, they had much more influence.
Robert Wiblin: I guess that’s a somewhat hopeful message. That even if we’re not accomplishing much now, in the future as we become-
Robin Hanson: Well, it is right. But it means if you’re going to be honest about it, you should accept that you’re not actually going to be doing that much useful in the short term, you’ll be creating these bonds and this strong identity that you can draw on later.
Robert Wiblin: Okay. Well, let’s go back to thinking about the value to you of this view of the hidden motivations behind human behavior. What else can people learn from this to potentially be more effective in their own life? Should they kind of make peace with the fact that they have hidden motivations, accept them, and then find ways to make them work in favor of their higher goals?
Robin Hanson: Okay. First we should admit that evolution designed you to be ignorant of these things. So if I’m telling you about them, I am countering evolution’s plan for you. And if evolution had your interests at heart and expected roughly the kind of situation you’re actually in, I’m doing you a disservice because I’m messing up the plan. Of course, if you realize that, you could just forget about this and go ahead with your life. Because people do actually manage to forget pretty much every podcast they hear, et cetera. It’s really not that hard. Okay.
But evolution might not have anticipated every situation that everybody might be in today. You might be unusually in need of a frank understanding of the world around you. You might be a manager or a salesman, for example. Those people need to have a more direct understanding. You might also be a nerd, i.e. someone whose intuition just doesn’t smoothly help them manage the social world around them. And for them, conscious analysis of the social world can be more useful than it is for other people. And you might be someone who has aspirations to, or a practice of, being a policy analyst. You might be someone who says, “We understand medicine or education well enough to think about how we should change it.” And if that’s the sort of person you’re claiming to be, then I think it’s more your obligation to understand what’s really going on, even if it has a little bit of an awkward personal consequence.
Think of the analogy of a mortician. Most of us would feel uncomfortable touching dead people. I think that’s safe to say. And if you came across a friend who seemed to be very comfortable touching dead people, you might be a little grossed out by that. But a mortician, whose job it is to touch dead people, well, that can be okay. It’s okay if they get used to something creepy that the rest of us aren’t used to. And as long as they keep it within their community, that’s kind of what you expect within the mortician community: they’re comfortable touching dead people.
Similarly for policy analysts. We should expect policy analysts to be more frank and more honest about the world in ways that we individually might not be. Because it’s their job to figure out what’s going on and tell us what to do.
Robert Wiblin: So the concern might be that if you didn’t understand what was actually motivating people and then you tried to change policy in one direction or another it’s not going to work out how you think. Because in fact people are behaving according to quite a different set of rules than the ones that you think they are.
Robin Hanson: Right. So I would say that most policy analysts try to find reforms that will give people more of what they pretend to want. And that goes badly when people know that it isn’t what they really want. So to be more actually successful, and get more people to actually embrace policy reforms, what we need to do is find changes in policies such that they allow people to continue to pretend to be trying to get the things they pretend to want, while actually getting more of the things they actually want, even if they can deny it.
Robert Wiblin: Do you have any good concrete examples of that where policy might go wrong if you didn’t take this into account?
Robin Hanson: Well, certainly with things like education or medicine. We subsidize them on the belief that the thing we say we’re trying to get is a good thing, and therefore we should get more of it. Once you realize that’s not really happening, the case for subsidizing goes way down. In fact, the case for taxing might even go up. And maybe we should be taxing school and medicine instead of what we’re doing now. So that’s one very simple variation. I mean, we could try to think about more complicated ones. But our book is not primarily trying to come up with solutions. We’re mainly trying to make the case that in fact we’re mistaken about a lot of different motives and that this has big implications.
Robert Wiblin: Moving away from policy to people’s individual lives. I think one of the ways this perspective can be helpful is in making peace with the motivations you have that maybe aren’t the most high-minded. If you think about them quite consciously, then you can try to find ways to accomplish both your selfish goal and perhaps your more high-minded goal at the same time.
So let’s say actually I really enjoy the attention of running a podcast or something like that. Then I can think, “Well, now what things could I do that would allow me to achieve some level of fame that would also be really effective for the world?” And if you’re just very explicit about it, then you can potentially find ways of kind of lining up the selfish motivation and the altruistic motivation. Do you think that’s a big potential benefit?
Robin Hanson: Sure. Another way I would think about it is: you can’t change the fact that you care what other people think. But you might be able to change which other people you’re focused on impressing. So the old “What would Jesus do?” is an example of focusing your attention on a particular sort of audience and asking, “What would they think about it?” Or you might ask what Einstein would think of what you’re doing. That is, instead of trying to impress the average person who might like your tweet, think about somebody higher in your mind that you would be focused on trying to impress. You will still be caring what they think and trying to impress them, but maybe that will push you in a better direction.
Robert Wiblin: I guess this is a benefit of potentially having a community of people who are focused on doing good effectively: it allows us to get the enjoyment of showing off to other people when we actually do do things that are useful, because they’re going to judge us by that standard.
Robin Hanson: Absolutely. So medicine would probably be vastly more effective if more people knew about the effectiveness of individual medical treatments. When grandma’s sick and the doctor says, “Let’s do this surgery,” you say, “Yes, of course, because I care about grandma.” That would seem a lot less caring if everybody knew the particular surgery being suggested was actually going to hurt her, just put her in more pain and not actually do much. It doesn’t look very caring if you’re pushing something that doesn’t actually help much and is just going to cost money. So because the audience doesn’t really know which medical treatments are effective, you can push for some medical treatment that doesn’t help and be credited for caring. The same for charity, of course, and altruism. The more everybody knew about the relative effectiveness of charities, the more people would be pushed to give to good charities, just because they don’t want to be ashamed and look uncaring. It wouldn’t actually look like you care very much if you gave to a bad charity.
Robert Wiblin: So I think a lot of people listening might feel like this perspective on human nature is a little bit grim. That imagining or thinking about the fact that everyone around us is a bit more self-serving than they let on is a bit depressing. But it doesn’t seem like you find it that way. Why doesn’t this bother you at all?
Robin Hanson: If you stand all the way back and you compare humans to all the other creatures we know about, humans are spectacular. Humans are not only smart and capable, we cooperate really well. We cooperate in enormous, complicated ways. And we rely on each other and we trust each other. So just from the point of comparing us to other animals and looking at what we can achieve together, we are remarkable creatures who also seem to be remarkably moral. That is, we are remarkably helpful to each other, and considerate of each other in ways that other animals aren’t. We look great as long as you’re comparing us to other animals. The way we look bad is when you compare us to the angels we pretend to be. But as soon as you realize that those angels were just not a very realistic possibility, that it was just never going to happen, then you can be much more okay with liking the creatures that we are.
Robert Wiblin: Yeah. I guess the whole way through I’ve been describing these explanations as cynical explanations. But if you think about it as just that people want to be loved by other people, or they want other people to hold them in high esteem, in a sense that’s not such a bad motivation, right? That’s a very prosocial motivation, that you care what others think.
Robin Hanson: Especially when it’s others that the rest of us respect.
Robert Wiblin: Yeah. So was there anything else you wanted to add about the book? Perhaps give a pitch for people to actually buy it and perhaps they’ll find that more persuasive if they go through the details.
Robin Hanson: I want to mention that this book is greatly improved by my coauthor Kevin Simler. He paid more attention than I usually would to making it very readable, with lots of examples and personal stories. That means you can just pick it up and be carried along through it. That wasn’t true so much of my first book, The Age of Em: Work, Love, and Life when Robots Rule the Earth, and the difference is to the credit of my coauthor. So it is a book that you can just read and easily learn a lot from. But it is also a book that makes a big claim that I think can change how we think about the world, if enough people take it seriously.
Robert Wiblin: All right. Well, let’s move on to talking about opportunities for systemic change in society, which has been an ongoing interest of yours. What are some of the cases you can see for how we could reorganize how society makes decisions that you think could have a really large impact?
Robin Hanson: So again, I started out my social science career focused on alternative institutions where we could redesign things. And first of course, I read mainly about other people’s proposals to reorganize our institutions and then I started to develop some of my own. And it seems we can find large improvements all across the social world. Large improvements that continue to be ignored and not adopted. So a basic question is, “Why aren’t we adopting all these improvements?”
And the perspective suggested by our book The Elephant in the Brain is that we aren’t being very honest about our main motives. So education researchers for many decades have come up with reforms that would help us to learn more material faster. And they consistently show that they work. And we have consistently not applied them to schools. Plausibly because we’re just not very focused on having schools produce more learning, because that is not the point. There are a lot of different ways that we could improve medicine too. One of them, for example, is to merge health and life insurance. If you combine a package of health and life insurance, then your health insurer now risks losing money if you die. They therefore want to focus more on your health, so you don’t need to be as heavy-handed with regulation insisting that they do the right thing. You can trust them more to have good incentives.
Robert Wiblin: So this would be the idea that your doctor gets rewarded if you stay alive or something like that?
Robin Hanson: Right. And punished if you die. And you could extend that to pain and disability. And it would be a straightforward way to make medicine more effective. But there’s just very little interest in it.
Robert Wiblin: So that was medicine. Did you have any other big ideas that came out of the book?
Robin Hanson: Well, we don’t even have a chapter on law. But I’ve been teaching law and economics for many years, and I think there are some big potential improvements in criminal law. Just quickly: I think it would make sense to require everybody to get crime insurance, so that it could pay off if they were found guilty of a crime. Then we could turn more penalties into fines for when you did commit a crime. And then we could have private bounty hunters rewarded for prosecuting crimes. And we could take out the public police, who are so problematic in being corruptible and largely immune to prosecution. But we don’t have much time to go into that.
But in politics, people have come up with many alternative political institutions that would seem to be better. And people have very little interest in political institutions. They have enormous interest in politics, i.e. left versus right and who’s winning. But if you try to talk to them about voting rules, or rules for how a bill is passed, and that sort of thing, they just get really bored. Which is a shame, because I think we can actually, at the meta level, choose better institutions for helping us to choose better institutions in medicine and school and law, et cetera. We could set up a political process that would most fundamentally choose better institutions when they’re actually effective.
Robert Wiblin: Yeah. What are some of those institutions that you think might be more effective?
Robin Hanson: Well, again, I was mentioning specific things in medicine or law or school. But in general, I think there’s a way to reorganize how we choose policy that would be more effective. And that goes under the name futarchy, as I’ve called it, or also decision markets, which are a variation on prediction markets. And I think that has enormous potential to reform governance.
Robert Wiblin: Okay. Yeah, futarchy is a pretty big idea. A little bit complicated, but let’s dive into it because it’s I think one of the most interesting ideas that you’ve had. What is futarchy and why would it do a better job?
Robin Hanson: So let’s start with prediction markets. A prediction market is just another name for a speculative market or a betting market. So of course, you could have a bet on, for example, whether a sporting team will win a contest. If you have a project with a deadline, you can have a bet on whether you’ll make the deadline. It turns out that these speculative markets do a remarkably good job of collecting information together into a market price that represents an estimate or a probability of something. And this is something we’ve consistently seen over a long time. If you have a topic that you would like an estimate on, one that we’ll eventually know an answer to that we don’t know now, setting up a betting market on it is a very effective way to create information about it. And you can just take the current market price as your best estimate if you’re not a specialist in the topic. So that’s the idea of prediction markets.
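[Editor’s note: a minimal sketch of how a betting price acts as a probability estimate. It assumes a binary contract paying $1 if the event happens and $0 otherwise; the numbers and function name are illustrative, not from the interview.]

```python
# Toy sketch: reading a prediction market price as a probability.
# Assumes a binary contract that pays $1.00 if the event happens, $0 otherwise.
# All numbers below are hypothetical.

def implied_probability(contract_price: float, payout: float = 1.0) -> float:
    """The market's implied probability is the price divided by the payout."""
    return contract_price / payout

# A "team wins" contract trading at 70 cents implies a 70% win probability.
print(implied_probability(0.70))  # 0.7
```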
And decision markets are a variation on prediction markets, where the market makes a prediction about the consequence of a decision. And that can allow you to have markets directly advise you about decisions. So my favorite example is a fire-the-CEO market. At the moment we have stock markets, and in a stock market you can trade cash for stocks. So if it costs $21 for one stock, then the price of the stock is $21. We could make trades in stocks conditional. So we could say, “Well, I’m trading $21 for one stock, but this trade will be called off if the condition isn’t met.” And then we could have a market for those kinds of trades, and that market will give us a conditional price. Ordinarily, when you’re estimating how much a company is worth, you’re trying to average over all the different scenarios that might happen to the company. You’re saying, “In each of those scenarios, how much is the company worth?” And then you do a weighted average to decide overall how much the company is worth.
In a conditional market, you’re still averaging over many scenarios, but only scenarios consistent with the condition, which could give you a different number. So if we have a market in the stock of a company, conditional on the CEO staying until the end of the quarter, then when you’re estimating the value of the stock there, how much money you’re willing to pay, you’re averaging over all the scenarios consistent with the CEO staying until the end of the quarter. Now if we have another market in which the stock trades are called off if the CEO doesn’t leave by the end of the quarter, then in that market you’ll be focused on all the scenarios consistent with the CEO leaving. These two markets should give different prices, because they’re averaging over different sets of scenarios. If the price of the company if the CEO leaves is higher than the price of the company if the CEO stays, you can interpret that as the stock speculators telling you, “Dump the CEO. This company is worth more without them.” And a board of directors could take that as advice and follow it. That would be an example of a decision market, because there’s a decision, to keep the CEO or not, and an outcome, the stock price of the company, and we’re getting a decision-conditional estimate, i.e. the estimate of the stock price conditional on keeping the CEO or not.
And using those two numbers we can use the market to give advice about the decision. And this is a mechanism we could apply much more broadly.
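[Editor’s note: the fire-the-CEO logic above can be sketched in a few lines. The prices and function name here are hypothetical, made up for illustration.]

```python
# Minimal sketch of decision-market advice, with hypothetical prices.
# Two conditional stock markets exist: one whose trades are called off unless
# the CEO stays through the quarter, and one whose trades are called off
# unless the CEO leaves. Each price averages company value over only the
# scenarios consistent with its own condition.

def ceo_advice(price_if_stays: float, price_if_leaves: float) -> str:
    """Compare the two conditional prices and read off the advice."""
    if price_if_leaves > price_if_stays:
        # Speculators think the company is worth more without this CEO.
        return "dump the CEO"
    return "keep the CEO"

# Hypothetical prices: $21/share if the CEO stays, $25/share if they leave.
print(ceo_advice(21.0, 25.0))  # dump the CEO
```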
Robert Wiblin: Okay. Wouldn’t you have the problem that the CEO is more likely to get fired if there’s been some negative shock to the company, even if the CEO wasn’t responsible? So typically you would expect that the value of the company if the CEO is fired is going to be lower. Not necessarily because the CEO is messing up, but because any bad luck will both cause the price to be lower and the CEO to be more likely to get fired.
Robin Hanson: That could be a problem early in the quarter. As we get to the end of the quarter, it won’t be as much of an issue. That is, if we’re about to make the decision to fire the CEO, then we don’t have to worry very much about information coming up between now and the time the decision is made. When there’s a lot of time in between, then yes, you have to worry about what scenarios could reveal information and how that might be correlated with the price. And that’s why I would recommend you mainly make this decision at one moment, based on the price just about then, and not too far ahead.
Robert Wiblin: So it seems like shareholders would really like having this information. It’s kind of a no-brainer for them to set this up, because it’ll give them a good idea of whether to fire the CEO or not. So why don’t we already use markets like these?
Robin Hanson: Well, that’s related to the question of why we don’t use prediction markets for more things in organizations. And this was one of the first puzzles that I came across that helped me to look for hidden motives. That is, organizations talk as if better information for the organization would be something they want, something that they’re eager for. It’s the story people like to tell about their activities. But it’s not actually as strong a motive as they say. That is, in most organizations there is a lot of politics going on, and these markets can get in the way of that politics and hinder somebody’s strategies.
So a more concrete example of that is a market on a project deadline. As you may know, most projects have periodic meetings where they ask, “Are we on track to make the deadline? What’s the chance we’ll make the deadline?” And usually they say, “Yeah, we’re going to make the deadline. It looks pretty good.” And then a lot of the time they don’t. Okay, you might think that a market in whether you make the deadline would give you earlier warning about whether you’re going to have problems. And that would give you earlier chances to either change the project or abandon it, which would save you money. And that’s true. And so you should want these markets if you were running the whole company and just caring about whether the project works.
But you should think, “What if I were running the project?” If you were running the project, you would think ahead and say, “This project might fail. How will I cover my butt in that case? What will my excuse be?” And everybody’s favorite excuse on a project that might fail is, “The thing that killed my project came out of [inaudible 01:33:03] at the last minute. No one could have seen it coming and it will never happen again. So there’s nothing to do.” That’s their favorite excuse, if you can make it stick. Now you can make that stick if you make sure that all the meetings up until the last minute keep saying, “Yes, of course, we’re going to make the deadline.” It’s more of a problem if there’s this outside [inaudible 01:33:21] that keeps saying, well in advance, “You’re not going to make the deadline. You’re not going to be able to make the deadline.” That kind of kills your story. It makes it a lot harder to make excuses about why you didn’t make the deadline.
So if you’d rather protect your excuse for when you don’t make the deadline than raise the probability that you will make the deadline, you might not want to have a prediction market about the deadline.
Robert Wiblin: Wouldn’t it be helpful to have kind of the early warning because then you can adjust the deadline and then you kind of don’t have to miss it?
Robin Hanson: Yes, that would be helpful, but it also can get in the way of your excuse. And management would rather protect their excuse than help avoid the problems.
Robert Wiblin: I suppose that it also shows early on maybe that you’re not doing such a good job. So they might just get rid of you at that point.
Robin Hanson: Right. Now I still think there’s enormous long-term potential here. I just think it has to overcome short-term obstacles. I like to make the analogy with cost accounting. Today, almost all organizations do cost accounting on almost all projects, just as a matter of course. It’s just the way we do things. But imagine a world where nobody did cost accounting and you proposed to do cost accounting on a project. That could be interpreted by the people around you as saying, “Somebody is stealing around here. We should find out who.”
Robert Wiblin: What is cost accounting, sorry?
Robin Hanson: It’s just when you keep track of the cost in projects. Where we keep track of where the money went and how it was spent and where it came …
Robert Wiblin: Right. Okay.
Robin Hanson: Today, almost anything we do, we track cost accounting. We track where the money went. And so when people steal, that shows up in the cost accounting. That’s one of the main reasons we do cost accounting: to watch out for people stealing. And so anywhere where nobody did cost accounting, you proposing to do it would be interpreted as …
Robert Wiblin: An accusation.
Robin Hanson: … an accusation that someone is stealing.
Robert Wiblin: And I suppose if people actually were stealing they really wouldn’t like it. Because it is going to interfere with their stealing.
Robin Hanson: Exactly. Now in a world where everybody does cost accounting and you say, “Let’s not do cost accounting on this project,” that would be interpreted as saying, “Could we just steal and not talk about it?” Which will also not go over very well. So the analogy with prediction markets is: in a world where nobody does prediction markets, it might be hard to introduce one. You would basically be saying, “There’s a lot of bullshit around here. Could we just cut through that and find out what people really think?” Which might not go over well with the people who are bullshitting, right? In a world where everybody had prediction markets, say on every project deadline, and you said, “Could we just skip the prediction market this time?” that could be interpreted as saying, “We’re not going to make the deadline, could we not talk about that?” Which also will not go over well. So you can see there are multiple equilibria. So I hope eventually that prediction markets could become the standard practice, just like cost accounting is, even though any one proposal in the absence of others would be seen as some sort of accusation.
Robert Wiblin: Okay. So across a bunch of these ideas, with medicine, with education, with having these prediction markets, there are big potential gains, as you said, because we’re not using the best known methods. But at the same time there are huge obstacles to putting them into place, because they potentially aren’t in the interest of the people who are currently part of the system. Do you spend a lot of time thinking about how you can make it in the interest of everyone who’s involved to implement these changes? Are there many success stories of that kind?
Robin Hanson: I have ideas, but I think the main thing we need is just lots of concrete trials. There are large literatures among academics of these ideas that have been piling up for years in the journals, and academics are willing to write theorems, they’re willing to do lab experiments, sometimes even field experiments to collect a certain amount of data, but they’re really not willing to get involved in the messy details of an organization, and just try out different variations until they see something that actually works. Because then you’re not dealing with the abstract concepts that academics like to deal with, you’re dealing with actual, detailed organizations. That’s mostly what we need to make this sort of thing happen.
There is a huge effective altruist opportunity here: by designing and fielding better institutions we can just get enormous gains in medicine and in law, in school and politics and all over the place. We can just be a lot more effective with no particular downside cost. But it requires this investment in people going out and working out the details. People have already made investments in working out the abstract academic concepts; what we need now is to work out the concrete social details, which is the sort of thing academics don’t get paid for.
Robert Wiblin: Right. Is it that they don’t get paid for or that it doesn’t help them to show off how smart they are? That’s one explanation I’ve heard.
Robin Hanson: Well, because they get paid to show off how smart they are.
Robert Wiblin: Okay.
Robin Hanson: So, basically, when you submit a paper to a journal, the overwhelming criterion the journal will use is: how impressive is this? And to be impressive you need to use standard tools in a difficult way. You need to show that you can master difficult data sets, statistical techniques, game theory models, et cetera, and that you can do it better than others. You therefore need to be using one of the standard methods in a way that’s comparable to other people. And we just don’t have a standard way to show off how smart you are by getting involved in the messy details of an organization and finding a way to make something work.
Robert Wiblin: So, you’re saying the thing that’s most neglected in this sort of social science is not so much figuring out the big picture, but actually getting your hands dirty and trying to implement it in a specific organization or a specific case, and then seeing what the barriers are and just trying to overcome them?
Robin Hanson: Exactly. Trial and error, hands on because that doesn’t have the grand theory prestige associated with it.
Robert Wiblin: Okay. So, let’s say that someone who’s in this business says, “You know, I’m going to try to make it the case that people waste less money on medicine that doesn’t work, because that’s just not useful, and they’re doing it for this other reason, that they want to show that they care, and we can find other cheaper ways for people to show that they care about one another.” But they just run into the fact that people kind of don’t want this change. There are going to be a lot of people who are going to be harmed, and a lot of people aren’t going to realize that the reason you want to reduce spending on medicine isn’t that you’re an evil person, it’s that you don’t think it actually improves their health. How do we even know that there’s really a practical opportunity to change things there?
Robin Hanson: So, as I said, the way policy analysts usually think about policy is they try to figure out ways to get people more of the things they say they want. And the real problem is how to let people continue to pretend to get what they pretend to want, while actually getting more of what they really want. So, that’s a more complicated design problem, and it’s not something that shows up in the journals as much. And so, you have to pay attention to that more complicated problem when you’re trying to figure out things that actually work in a real context. We could do more abstract academic work on that, but we also need more practical experimentation.
So, if you’re going to find a way to produce more effective medicine, you’ll need to find a way to let people continue to show that they care. You’ll need to continue to pay attention to how empathetic it looks to do something, and if you can find a way that someone could seem to care even more while also being more effective, then that’s a win. So, plausibly, a hospice is an example of that, an innovation that wasn’t always there. In a hospice you say, “Well, we’re not actually helping very much anymore, and you don’t have much prospect of helping. Why don’t we stop focusing on curing this person and just make them more comfortable as they die?” And that’s caring. That is, if you believe that in fact there’s not much more to do, and the things you might try to do will just put them in more pain and discomfort, you might think, “Well, let’s focus on comfort now.” And that can both save money and be more caring.
Robert Wiblin: Are there any people who you admire for having been particularly good at solving these kinds of problems?
Robin Hanson: I wish I knew. Maybe if I do more research I’ll find more examples, but so far I’m happy enough just to point out this problem, the way that we’ve been misunderstanding the policy issues we’ve been facing. And again, hopefully we’re just opening up a big new area, and lots of other people can come in and do lots more work. We certainly haven’t said the last word here at all. We’ve just said there’s this basic problem, we’ve given ten examples, and there could probably be 10 or 20 or 30 more areas of life that could be given the same treatment. And in each of these areas we can think about how to produce reforms that would still let people pretend what they’re pretending while actually giving them more of what they want.
Robert Wiblin: So, before you wrote The Elephant in the Brain with your co-author, you wrote this other book, The Age of Em. It’s quite a unique book, unlike almost anything else I’ve ever seen published; maybe it actually is completely unique in trying to use complicated social science to map out a very detailed possible future that the world could take. I think we don’t have time to go into all of the ideas that you had in there, but what is your approach to futurology? And do you wish more people would try to make these concrete predictions about how the future could go?
Robin Hanson: Yeah, I was trying to create an example to inspire other people to copy me, and we’ll see if that succeeds or not. The word futurology, just the phrasing of it, grates on me; futurism is a little better, although not that much better. We have a lot more people studying history than the future, and as you may know, we can’t do anything about the past, but we have at least a chance of doing something about the future. So, you might think we would study the future more than the past in terms of the value that we can get out of it, but in fact, we don’t.
If you ask people, well, why don’t we study the future more? The usual straightforward answer will be, well, for the past we’ve got data, we’ve got documents and artifacts, and we can study those documents and artifacts to draw inferences about the past. And we have no such data about the future, so we can’t study the future. And, of course, that’s true as far as it goes, but we don’t just study the past with data, we also mix in theory; the data by itself actually wouldn’t tell us very much. But with theory we can infer a lot more, and we can infer the future if we just apply theory, and I think we can go a lot farther than we have.
Now, when I was a physics undergraduate, my physics professors basically said that those people in that other building over there, called social science, were just making it up; they didn’t know anything. And I think a lot of people who do tech futurism have that same attitude. They have a tech education in physics or engineering or computer science, and they were basically told that there is no such thing as social science, it doesn’t exist. So, when they start to think about future technologies, as they often do, or when they get to thinking about the social implications of those technologies, they often decide that their own speculations, the thoughts off the top of their head, are about the best anybody could do, because there is no social science.
And so, you do see tech futurists who have specialized in forecasting technology, and they don’t go call up a social scientist and ask them to analyze their scenario. They just take what’s at the top of their head and write it down as if it were the best thing anybody could do, and that’s all they even think of doing, and I think that’s just really mistaken. There is social science. I’ve learned a lot of social science; I’m now a professor of economics. And so, I think it is possible to take a specific concrete technology scenario and work out a lot of social implications.
Now, that doesn’t mean we can figure out everything, but we can figure out many things, as you may know. We have enough physics to help us predict the weather. That doesn’t mean we can predict everything about the weather; it means we can predict many things about the weather, and for other things we have a good theory that says we shouldn’t be able to predict them. And social science is like that too: we can predict many things, and there are other things we don’t think we can predict. So, my basic story is, we have been neglecting analyzing the future. The right way to analyze the future, or at least a good way, is to break it down into technology scenarios: which technologies appear when and in what form. Then for each technology scenario, try to predict the social implications by just applying the standard social science tools. And we should just do this for a lot of different technology scenarios; we shouldn’t get too argumentative about which scenarios are most likely.
I mean, I would actually prefer to have a prediction market about that if we could do it, but we should just consider a lot of scenarios, and for each one ask what the social implications would be. And so, my book The Age of Em: Work, Love and Life When Robots Rule the Earth is intended to be an example of this, taking one common scenario of a technology that appeared often in science fiction and futurism, brain emulations, and just trying to work out as many implications of that as I can. My book doesn’t try very hard to argue for that scenario being a plausible one; I just think people have done enough of that elsewhere. I mainly focus on what would happen if it did.
And so, I see myself as being pretty conservative; I’m not speculating wildly, in the sense that I’m just applying our standard social science tools and many other tools to that scenario. Now, this is a book you couldn’t really have written unless you knew a lot of different fields, and I think that people who know a lot of different fields should be the kind of people who write this kind of book. That is, I’m describing an entire civilization, so I have to use physics, computer science, political science, economics, business, some psychology to think about mating; I have to think about friendship; I have to think about cities.
And so, you just need to know a lot about how many different areas of the world work in order to put together a whole picture of how the world would change. But I think that’s straightforwardly possible: you just need to read a lot of different areas and learn our standard theories. So I am mostly applying our most straightforward standard theories in each area. I’m not going into really complicated models, I’m just going for the simplest things we know about each area and asking, what does this theory imply? That is, what do the theories we have say about the particular scenario at hand?
Robert Wiblin: So, I think most people who think they know anything about futurism would think that they know that people in the past who tried to predict the future have a dismal record, and in fact did absolutely no better than chance. Does history show that we can predict the future with any reasonable degree of accuracy?
Robin Hanson: I have some personal history here. Back in the late 1980s and early 1990s, I was in Silicon Valley, associated with a group called Xanadu that was trying to create the World Wide Web, though they didn’t call it that. And they succeeded in part, in the sense that they had some design principles, and the person who actually created what became the World Wide Web, Tim Berners-Lee, listened to them and included some of their insights. They failed in part because they tried to implement too many features and couldn’t get it done in time. But I have the experience of seeing people who were trying to predict the future and create the future, and having a lot of people around them pooh-pooh them and think it was pretty wild speculation.
Then having it actually happen and quickly grow, and then hearing people say, “No one could have seen this coming,” when I knew of people who did see it coming; and also seeing that the people who saw it coming didn’t actually get much personal reward, which is plausibly why we don’t actually try that hard to predict the future. So, that shows it’s certainly possible to predict a big radical technology change before it happens, to foresee the rough outlines of it and get some of the key social issues right. And that people will initially, before it happens, pooh-pooh it and think it’s pretty speculative and wild, and after it happens say no one could have seen it coming, and that the people involved won’t get much personal reward.
Robert Wiblin: So, what kinds of things did the people in the Xanadu project accurately predict?
Robin Hanson: They accurately predicted the issue of quality control, the key importance of links, and the need for some sort of standard URL naming convention that would specify different files in a global way. There’s the issue of versioning, and to what extent you could go back to a previous version of a website as opposed to seeing only the current version. A lot of these key issues they got basically right.
Robert Wiblin: Did they foresee things like Amazon or social media or was that too far out?
Robin Hanson: That’s too far out. You know, mainly they were focused on documents and what you could have in documents, and they were somewhat misled in terms of … They were really focused on what they called back links, which would allow you, from a document, to find the documents that link to it, which would then help you find criticism of a document, and they thought that would be important. We don’t actually have back links directly with the web, but Google can help you get something pretty close. And the back links didn’t actually help so much to promote criticism in the way they’d hoped.
Robert Wiblin: So, one thing that really frustrates me is when people point to really bad predictions that someone made 50 or 100 years ago. And the reason it annoys me is that typically they’re pulling out the most extreme predictions, often made by someone who was just trying to get attention, someone who was basically doing it for the sake of entertainment, and then they use this to say, well, even someone who was serious couldn’t do a good job of predicting the future. Have you ever seen someone do a more neutral survey, where they look at a lot of different predictions that were made, not just by entertainers, but by serious people who cared whether they were right or wrong, to see what their strike rate was?
Robin Hanson: I cite such a study in the book. I believe it went over 1,000 different predictions of technologies: when those technologies were predicted to appear, and then when they did appear. And the predictions were certainly better than random in terms of timing. Many predictions were of technologies that had already appeared, but the people predicting didn’t know that; you know, 5% or 10% of the predictions were about things that were already true. But even ignoring those, yes, it is in fact possible to predict when future technologies will appear, because we have this whole data set of predictions and when they actually appeared.
Now, you know, the error rate is still pretty high, but it is possible. Now, I’m somewhat of a futurist, and I’m in the habit of looking at other futurist books, and I’ve noticed that a lot of people who make futurist predictions today aren’t really trying very hard. The ones who get the most press are often the ones who are not trying very hard at all; they’re just trying to sell books and be engaging, et cetera, which is fine for that audience. Even today we’ve got people making spectacularly wrong predictions that we can already tell right now are not going to be right. That’s pretty clear if you’re somebody who studies the future; but if you’re out there just trying to make inspiring talks and dramatic stories, et cetera, you don’t need to pay attention to that.
So, I think you should always distinguish between people who are just selling inspiration and a vision and people who are seriously trying to forecast, and we’ve always had that difference. That’s always been true for all areas of intellectual life. At any one time, if you just pick random people who make money telling people about their physics visions, those people could make a lot of mistakes about physics in a way that somebody who has carefully studied physics for decades won’t.
Robert Wiblin: So, that was predicting when technologies would arrive, what about predictions about the social implications? Is there any track record that we know of about that?
Robin Hanson: I don’t know of a more formal dataset, but I know of a lot of good examples. Early in the dot-com boom there was this book called Information Rules, put out by Varian and Shapiro. And their shtick was, you know, we’ve been studying industrial organization for many decades, and we can apply our standard theories to this new dot-com world. People in the dot-com world had been saying all the old rules are gone, there’s a whole new world and the old rules don’t apply. And that’s just wrong; we’ll just tell you what the old rules say about this new world. And they got an awful lot right.
It was a perfectly reasonable analysis of the new internet world, from the point of view of applying all the standard old analysis rules about industrial organization, and that’s a model that I think should be emulated. Just take all your standard old rules and see what they say.
Robert Wiblin: So, whether futurism actually works or not is pretty important for us, because as you know, we’re particularly concerned about the potential downsides of new technologies: how these things could accidentally be used to make the world worse, and trying to foresee that and then prevent it from happening. If it is possible to predict things many decades out, then I would suggest that maybe we should be actively trying to prepare for problems that could appear decades out. If the track record of futurism is really quite bad, then we might want to focus only on what bad things might happen in the next couple of years, which might be more possible to foresee. What do you think about that? Do you think people who are trying to change the future should be thinking more long-term or more short-term?
Robin Hanson: Well, first I’d say the track record of people who are serious, and can be distinguished as being serious and careful, is a lot better than the track record of people who just sell inspiring books or tell stories. So, there is a track record there to go on. I think that first you need to study an area and work out its broad outlines before you get too far into figuring out all the things that could go wrong. So, I think about it in terms of a distribution of outcomes. You should first be focused on guessing roughly the middle of the distribution of outcomes, roughly the typical case, and once you have a clearer vision of the typical case, then you can start to think about all the extreme tail cases. The tails of a distribution in a high-dimensional space are just a lot more detailed than the middle of the distribution. It’s just a lot harder to work out the tails and figure out which ones are interesting.
But it’s still valuable to think about the tails, after you’ve gotten this broad idea of the center. So, my book The Age of Em primarily focuses on getting the typical case right about a world of brain emulations. And, of course, even after a whole book (and I’ve got a revised version of the book coming out in a few months), there’s still a lot I haven’t thought about. But I still think there’s enough there now that you could start to think about the tails of the distribution in that case, and ask what are the things that could most go wrong in that world. I think before you had something approaching the level of detail in the book, you were just in a much worse position to guess what the important tails were.
Robert Wiblin: Okay. I was going to say you might not want to think just about the most likely case, but have a particular focus on the cases where there are especially large upsides or downsides, or where you might be able to have a large influence on how things go. But you’re saying it’s important to understand the baseline, most likely scenario; even if you want to understand the tails, you have to do that first so that you have a picture of what you’re dealing with.
Robin Hanson: If you think back to when the industrial revolution was just getting started, and you look at the fears that people had back then about our world, they were pretty dramatic and emotionally potent fears. I’m not sure how well they targeted the actual main risks of the industrial world; I think the more you had understood the typical case in our industrial world, the more you could have thought about the main actual risks we face. In the early years they were really focused on the fact that industry was more regimented than the world had been before.
They had seen factories and even slave camps where life was very regimented. Not only did they tell you what time to work at which factory slot, and which arm to use to do which twist, but in the stories at least, they were imagining that they told you who to marry and when to eat and what color of clothes to wear. And they imagined the entire world becoming very regimented, and that was the main fear that people presented about the industrial world, well before we saw a lot of the detail.
And those fears were somewhat misplaced. That is, we have become much more regimented at work; we’re much more structured and told what to do, and have status rankings much more than our ancestors would have tolerated. But because we’re rich, we don’t do all that stuff outside of work, even though we could have much more efficient homes and food and clothing, et cetera, if we did them all in a very structured, regimented way. We’re rich enough to afford to do it some other way, which we do.
Robert Wiblin: Well, what do you think really were the biggest risks from industrialization?
Robin Hanson: Well, as we look back now, we might be concerned that industrialization would destroy the environment, we might be concerned about extremely powerful weapons that could destroy the world, and we might be concerned about a global government taking over the world and being hostile to others. Those were, and still are, reasonable concerns about the industrial world.
Robert Wiblin: Do you think there was a systematic reason why people at the time focused more on the regimentation rather than those other concerns?
Robin Hanson: Well, it was the most emotionally potent one; it stood out. In terms of the axis of control, which is a very standard axis in fear and fiction, domination and submission is just an overwhelming obsession in fiction, in terms of negative scenarios. People have always been overwhelmingly focused on the scenario of some “them” dominating us and making us do it their way. And so, that was the focus of the regimentation fear: the government would take over, the factories would take over, the companies would take over, and they’d make us do things their way whether we liked it or not.
Robert Wiblin: Do you think we suffer from that same mistake today? Like, maybe people are worried too much about Google becoming too big a company and not enough about other issues?
Robin Hanson: Yes, I do, actually. A basic fact about the long-term trajectory is that organizations have slowly been getting larger, and more coordination has been happening at larger and larger scales. So, various functions of government, for example, have moved up from neighborhood to city, to state, to nation, et cetera. As we get better at coordinating, our firms have gotten larger. People who work at large firms tend to be more innovative, and they make more money. Large firms just have lots of advantages, and plausibly large firms are in fact the main cause of the industrial revolution; they’re the main thing that was different from before the industrial revolution.
So, we should be celebrating our large organizations, but in fact we usually criticize them, and futurists often have the hope that somehow now we’ll finally get away from the big companies. People like to present small, agile startups as the future, but we’re actually getting fewer and fewer of them over time, because in fact the long-term trend is toward larger, more effective organizations, not small, agile startups.
Robert Wiblin: So, presumably you would accept that there could be some downsides to having a small number of very large companies. I guess it creates a smaller number of very serious potential failure points, where if these very big companies make bad decisions, that could have worse results; you’re a bit less diversified. But you just think that the gains are also potentially very large, and people are kind of biased against them, because these are big, scary organizations that we don’t really understand and don’t really trust, because they’re just outside the human scale.
Robin Hanson: I would actually say the risk is more that we’re integrated at a global scale. If we broke the world into three different regions, each of which was independent, then if any one region crashed, the other two could continue on. We aren’t doing that; we basically have an integrated global economy where each thing is happening mainly at some one place in the world, and if that fails, the rest of the world doesn’t get it. It’s less about one company taking over any one industry, and more that industries are specialized at a global scale. That’s the more fundamental risk that’s causing us all to be dependent on each other more. And I can see a reason for concern about that. If we worldwide become more correlated and interdependent, then something could take us all down together; its being one big company is less plausible than just the interdependence.
Robert Wiblin: So, one other way that you think we can get the future wrong is this near versus far mode distinction: that people can think about things that seem distant in the future in quite a different way than they think about the world they actually live in, and that can change their perception. Do you want to explain that?
Robin Hanson: So, this is me just learning about an area of psychology that I find to be potent and insightful, and trying to apply it and tell people about it. I didn’t do any work in this area; I just read about it and I’ve been keeping track of it. In fact, when I emailed the people who have been working in this area, I never got any replies. So, I’m not in this world, I’m just talking about it, but I still think it’s powerful. And so, the main observation was actually first noticed in reasoning about the future; futurism analysis really shows this effect to a very strong degree. We have two extreme modes of reasoning with a continuum in between. The theory is called Construal Level Theory, and by level they mean abstract versus concrete.
So, we can think about things very abstractly, or we can think about things very concretely. And there’s a whole bunch of cues that push us in one direction or the other. So, when we think about things far away in space or time, we know less detail about them, and so we think about them more abstractly. We also think about things more abstractly when they’re far away in social distance, when they’re more hypothetical, and when we’re focused more on abstract general values than on practical constraints on decisions. These are contexts where we think more in what I’ll call far mode. Conversely, the more we think about something very close to us in space or social distance or time, a very typical case, the more we’ll think about it in near mode.
In near mode we see things as having a lot of detail, and the categories they’re in aren’t as important as all the little details. We expect a lot of deviation of each case from the typical average for its category, and we expect that in any one set of items, the items are more diverse and varied from one another. And when we’re making a decision, we expect it to be complicated and hard to apply fundamental value principles to. Conversely, if we’re thinking in far mode about things far away, we think there’s less relevant detail, we focus more on the general features of the category, we think items in the category are more uniform, and we apply basic value principles more directly and are less willing to tolerate exceptions.
These modes aren’t only invoked by those abstract cues. If you’re standing in a space that is physically larger and hear more echoes, you will be more in far mode. Because things further away in the distance look more blue, blue puts you more in far mode and red puts you more in near mode. So, the classic futurist visual style is actually predicted by far mode: it has more blue and shiny surfaces with less texture, and there are fewer surfaces in the image.
Robert Wiblin: Everyone wears the same thing.
Robin Hanson: Exactly. In the future they all wear the same uniform, they’re all the same. And in the future, of course, they’re very focused on their fundamental values and don’t tolerate much exception to their values, and the music is echoey and ethereal. In far mode we’re also just less emotionally intense, more reflective and less passionate; near mode, red, is more passion and desire. So, for example, lust is near and love is far, because lust is more in the passion and the detail, with less attention to moral constraints, whereas love is more thinking about the values and the-
Robert Wiblin: Idealized.
Robin Hanson: Right, exactly.
Robert Wiblin: What are some ways that this far mode can be dangerous when thinking about trying to influence the future? I guess it means that we’re not focused enough on specifically how you might accomplish things, and what challenges you might face along the way, because it’s all so abstract; you imagine that it’s just a question of whether people are good or bad?
Robin Hanson: So, as you indicated, futuristic movies will tend to assume that there’s a small number of categories and that items in each category are very uniform compared to the others. So, for example, you might think there’s a small number of classes, like Wells’s description of the future where there are two classes, above ground and below ground. And, you know, they’re very distinct, and inside them they’re very uniform. And so, whatever theories you have, you’re confident in how they apply to each case. You’re overconfident, so you should just be less confident about your theories and how they apply. You should expect more categories and more variety inside each one, including categories of people. And you should expect that it’s not so easy to apply your fundamental moral principles to their cases, and you should be a little more tolerant of the variation.
Plausibly, this whole capacity for having a near mode and a far mode helps us to be naturally hypocritical. That is, when we look at ourselves in our current situation, in our moment of lust, for example, we see a lot of complexity and detail, and fundamental moral principles seem less useful than all the practical details. But when we look at other people doing other things at other times and places, we’re more inclined to impose our basic value principles and to be intolerant of exceptions. So, that is, we apply our moral rules more strictly to other people in other places than we do to ourselves right now, which basically allows us to be more hypocritical, but in a very natural way that we don’t even notice.
Robert Wiblin: So, how does this matter outside of futurism? I’m imagining I guess like when we’re thinking about politics in other countries, and the big picture things we lose track of the details.
Robin Hanson: Yes, or even politics in our own country. Politics is a big-picture thing, and so in politics we are more in far mode and less in near mode. We pay too much attention to values, I would say, in politics and in futurism; that’s one of the mistakes. So, if you think about any practical decision you’re about to make in the next hour, you’ll know of course that values and facts are relevant to every decision you make. You need to know facts and you need to know values, but in most practical decisions you’re about to make, the facts are more important. You’ll pay a lot more attention to the details of the facts, and you’ll mostly assume background values that don’t change much from case to case.
However, the further you get up into thinking about grand world or national politics, or farther into the future, the more you will switch the emphasis and focus a lot more on values than you will on facts. So, I’ve certainly noticed that when people start to talk about almost any futurism issue, they immediately try to talk about values, and they don’t spend very much time talking about facts.
Robert Wiblin: What’s an example of that? What would they say that’s about values?
Robin Hanson: Well, we could talk about AI risk, if you like. People don’t take very much time to think about the actual facts about AI risk. They quickly discuss how they value various kinds of AI and AI scenarios; they’re eager to talk about their preferences over various scenarios and what they would prefer, rather than thinking about which scenarios are highly likely, with what probability and what structure. And so, that’s a consistent issue, but it’s also true for, say, thinking about world government, or other big futurism topics; people quickly talk about their values. Even for something like global warming, people quickly focus on their values about global warming: are we too materialistic, are we not coordinating at a global level? And they focus less on the concrete facts about which regions will be heated too much or have too much weather change, or which plants, et cetera, will suffer.
Robert Wiblin: I mean, maybe does that make sense, just given that it’s much harder to come up with concrete facts about things that are going to happen further in the future, or in other countries that you don’t understand very well? You just don’t have many facts to go on, so instead you default to what you do know, which is what you like.
Robin Hanson: Well, that’s why I wrote The Age of Em: to show that you could say a lot of detail. So, people have told me that it’s too much detail, it’s not as fun, I should have just had less detail in the book and then people could finish reading it quicker, but my priority was to show just how much detail I could say. I’ve tried to prove, and I will continue to try with related projects to show, that in fact you can work out a lot of detail about these future scenarios; it’s available if you’ll bother to work on it.
Robert Wiblin: Do you think that Construal Level Theory is part of the reason that people tend to think that their political opponents are bad people rather than just say misinformed?
Robin Hanson: It’s part of it. I think it’s also part of why we disagree. So, I have some research on the nature of rational disagreement and why people disagree. And I also think that when we see our own arguments and details up close, we feel differently about them than when we think abstractly about other people. We abstractly know they must have some sort of arguments and reasons for their beliefs, but we just find it hard to give enough credence to those really being there, and being as complicated and thought-out as our own. So, yes, I do think Construal Level Theory says a lot about the many cases where we think differently about things far away from us versus things up close.
Robert Wiblin: We haven’t been able to do The Age of Em justice here; we don’t quite have time. But maybe we can find a good interview or a good summary of the book that we can link people to if they’re interested in learning more. I think it really does demonstrate that you’re right, that you can map out a particular scenario, even if you’re not exactly sure which scenario is going to happen.
Robin Hanson: I’ve got to say that a lot of the people that we know in common like the fact that the book is there, and they praise me for writing it, but I get relatively little engagement with the details of the book. People don’t seem very interested in talking about the details of the scenario. They are focused on other sort of grand, higher, abstract issues, I guess, so that’s a way in which people are abstract.
Robert Wiblin: It’s hard, Robin. It’s hard. It’s a lot of work.
Robin Hanson: Well, yes, but you could just critique it, or say which parts you thought were more or less plausible, where I made a mistake, or which parts we should elaborate in more detail than we have so far. Again, I think a lot of people in this futurism and altruism space like to be really abstract, as I’ve said.
I actually think the most common element among the people we know, the one that’s different from other people, if you want a one-factor explanation, is a taste for abstraction. A taste for abstraction predicts a taste for discussing altruism and abstract ethics, and also other kinds of abstraction in terms of decision theory or quantum mechanics.
A taste for abstraction is what many of the people we know have in common. That means they really lose interest quickly in getting into detail. Many people abstractly think prediction markets are a great idea, but aren’t really interested in getting into the details of making them work in particular organizations.
They might be interested in the abstract issues of the future, in terms of broad category, but they’re less interested in picking a scenario and working out its details.
I think it’s fine for people who are more abstract to do more abstract things, but if most of the people in the area are more abstract then the more concrete things will get neglected. I think that is happening.
Robert Wiblin: It sounds like you think maybe the effective altruism community should really try to attract some people who are perhaps a bit less abstract who can put things into practice.
Robin Hanson: Right. Now, this is related to the youth movement factor because in fact it is easier early in life to focus on abstraction and easier later in life to focus on concrete details. There is literature, for example, on fields where the people who made the biggest contributions made some sort of conceptual reorganization and fields where the people made the biggest contributions sort of integrated a lot of detail over a long time.
In the second kind of field, people were older when they made their biggest contributions. So I think that’s a problem with having a field dominated by young people: they will focus on the abstractions and neglect the details until later in life, when they get older.
Robert Wiblin: Well, we’re running out of time. Let’s move away from talking about the specific books to talking about books in the abstract. Over the course of your career, early on as an academic you mostly published papers, and then you started writing the blog Overcoming Bias, which ended up with a pretty large audience.
Just recently you’ve written two books all of a sudden. What do you feel is the best outlet for your ideas? Which one has actually managed to promote them the most? Was it worth switching to writing books even though it’s a bit more challenging?
Robin Hanson: Well, the main reason to publish papers was to get tenure. Tenure is this prize that gives you decades worth of free time to study whatever you want. It’s an enormously valuable prize. There’s a lot of competition for it, but still you should be tempted to grab that prize, especially if you’re younger than I was. I have lost some of the prize because I got it later in life, which means I don’t get to use it for as many decades, but still I’ve been using it and enjoying it.
Early in life you’re more of a seller than a buyer, so people tend to think of themselves in buyer mode when they’re thinking about their intellectual [inaudible 02:11:29] what ideas would I like to buy? What research would I like to buy?
Early on you’re a seller. You have to think, what do other people want to buy? What could I sell that they would be willing to buy? If you can sell enough then you can perhaps get tenure, or some sort of established position, and then you can be more of a buyer. Then you can focus on what you want.
I focused on publishing papers primarily because that’s what sold. That’s what I could sell to others. Then the rise of blogging happened [inaudible 02:11:55] with my getting tenure, so it was tempting to switch to blogging, because for many published papers the contribution could basically fit in four paragraphs of text.
They have to be filled out into a long paper that does a lot of complicated things to impress people, but the key intellectual insight can be explained in a few paragraphs. It’s tempting to just write the few paragraphs and add them to the world of intellectual contributions, so I was tempted by that, especially to find the low-hanging fruit: the insights you could get with a day’s work and a few paragraphs of explanation. I spent a few years following that path of just thinking about things, getting some insights, and explaining them in a blog post.
Robert Wiblin: Then the switch to books.
Robin Hanson: Yeah. Then I decided that blog posts won’t last as long. What if I want to have a legacy? What if I want to, say, have a longer impact? There’s a problem that … For example, many newspaper columns are read by 100,000 people, but most of them will read it, nod their head, and do nothing with it.
Robert Wiblin: Yeah, and then it’s very hard to access in future.
Robin Hanson: Right, so if you want to have a legacy, you want to build on your work, it’s not enough just to have people read it. You want people to get excited enough that they might build on what you’ve done.
Somebody who wrote a journal article for an academic journal might only get ten people reading their article, but three of them might build on it, which is quite a temptation, even if it’s a very small audience.
A key question about your intellectual contribution is not just how much space will it take to explain and how many people will I get to read it, but how will I get the people who might actually build on it and accumulate more insight to read and build on it? What does that take?
That takes not just explaining it clearly; it also takes some sort of credential, marking it as an important thing worth noting. That’s what journals and books do: they add that sort of credential.
Now, books have weaker peer review in the sense that, at least in my area, in journals you submit an article to a journal, and even if referees like it they will ask you to make a whole bunch of changes just because they can. It will take you years to make all those changes and get it accepted and finally get out into the journals.
Books, there’s less of that, sort of referees editing your book for you. You can write the book the way you wanted. Also, the size of a unit is related to how different an idea you need to explain. In four paragraphs you really can’t explain a very radical idea. It has to be pretty close to some other ideas that you’ve already explained or somebody else has already explained. There’s just not enough space.
You can explain a more radical idea in a 20 page paper, although, if you have to have a lot of rigor in terms of your method and everything else there’s still not that much space, but there can be more. A book is a place where you can really explain an idea that is big and will take a whole book to explain and persuade somebody of. That’s the kind of books that I’ve been trying to write and the books I prefer to read.
I do have a lot of more radical, big ideas that I want to get out there. I do fear that blog posts certainly, and even a journal article, are just not enough space to make a case for it to really convince a reader. This book that we’ve just been talking about, The Elephant in the Brain, I think is an example of that.
My colleague Bryan Caplan has a book, The Case Against Education. He’s taken a whole book to focus on the one topic of education. I fear that education analysts and researchers look at his book and say, “Well, he’s put a lot of evidence together and made a plausible case, but even so his initial hypothesis is so implausible that we just can’t believe it, because there’s the usual way of thinking about things, and then there’s this weird alternative way that is just hard to take seriously.”
I think the contribution of our book, The Elephant in the Brain, is to show that the same sort of thing applies to lots of areas. Seeing it all together in one book could convince you that there’s a lot of this going on, in a way that one book on one of those things couldn’t. That’s an example of how a book can be needed to make a point that you couldn’t really make in a sequence of articles.
Robert Wiblin: Has it been a success? Have the books helped to get your ideas taken more seriously and get other people to take them up?
Robin Hanson: We’re still pretty early. Too early to tell, as I guess Deng Xiaoping or somebody once said about the Industrial Revolution.
Robert Wiblin: I think it was some riots in… no it was the French Revolution as I recall.
Robin Hanson: The French Revolution. That’s right.
Robert Wiblin: Yeah.
Robin Hanson: One key problem is that I hadn’t realized this is really about a disagreement between disciplines: psychologists on one side and policy analysts on the other. The book was classified as psychology, so a psychology editor took it on. They had psychology referees evaluate it, seven of them, all of whom thought it was great.
Then psychologists have been the ones to write reviews of the book, and they also think it’s of course true, and perhaps even obvious, but that it isn’t very new. So is it really worth having a book about something that’s really kind of a repetition of what everybody else has said?
That’s from the psychologist point of view, but from the policy analyst point of view, as, again, Bryan Caplan is experiencing, people are pretty reluctant to believe the claim that in their particular area we have hidden motives that are substantially important there.
Policy analysts really disagree with the psychologists about the plausibility of hidden motives in many of these policy areas, so I would like the book to engage that disagreement, but so far it’s only the psychologists who have thought they should engage the book, who mostly agree with it. I have so far largely failed to get the policy people to respond.
Robert Wiblin: Why do you think they’re reluctant to accept this? I suppose, in the case of doctors, saying that people aren’t using medicine to get healthy is understandably unwelcome, but if you’re a healthcare policymaker why would you be resistant to this idea?
Robin Hanson: Well, I did experience this, as I said before, when I wrote the Cato Unbound essay ‘Cut Medicine in Half’, so there were a number of health policy people who responded there. I think if you spend your life in health policy, health is somewhat sacred to you. Similarly, if you spend your life doing education policy, education is somewhat sacred to you. It’s an important, precious thing that you’ve devoted your life to, so it’s really hard to see it deflated that much.
Similarly, if you are religious, thinking of religion as a relatively cynical thing is also pretty hard, or if you’re really into politics, thinking of politics as mostly about personal loyalty as opposed to making for better policy. That’s also kind of hard to swallow.
Consistently when people have an area of life they’ve devoted themselves to it tends to be hard to swallow the idea that that whole area of life is just not nearly as important as people say.
Robert Wiblin: You published your first book fairly late in your career. Do you wish that you’d started writing books earlier? Did you think then that you’d have more time to get more of those ideas out there in a really thorough way?
Robin Hanson: I’m not sure. That’s a reasonable critique. Certainly some of my colleagues think I should’ve written a prediction market book much earlier, as soon as prediction markets became a thing, right after the Policy Analysis Market blew up in the press.
I could’ve been wrong, but the biggest thing that caused the long delay was my not starting my PhD until the age of 34. You can imagine gaining a whole decade by starting that earlier, but the price was that I wasn’t very sure of what was the most important subject to study. I searched across subjects, and I finally think I made a good choice.
If I had made that choice well initially of course that would’ve been better, but of course if I had made a bad choice initially it would’ve been worse, so it’s hard for me to really say whether I made the mistake there or not.
Robert Wiblin: Well, let’s talk about what advice you might have for listeners who are earlier in their career. Has it hurt you to change fields so much, especially to be such a generalist? Academics are known for specializing in just one thing, but it seems like you’ve just ranged across everything. It’s almost amazing that you’ve managed to actually become an academic.
Robin Hanson: I got lucky, I’ll have to admit. About the time my tenure review came up here at George Mason I had just been in the press with the Policy Analysis Market explosion. Basically, a project I was involved in to study prediction markets for defense policy blew up in the press when some senators accused us of having markets in, and betting on, terrorist attacks.
That isn’t what we were doing, but the accusation was enough to kill the project, and then for a few years there was a lot of discussion. The next day after the senators accused us of having this, the Secretary of Defense in front of Congress declared the project was killed, and other senators, including Hillary Clinton at the time, denounced it as a terrible, immoral thing.
Because they were denouncing it in part because it was a market in betting on death and terrorist attacks my colleagues here at George Mason who are relatively pro-free market took my side. They saw me as fighting the good fight against the other ignorant folks out there, and therefore I kind of got tenure.
It’s not that they had to give me tenure; there’s usually a lot of discretion in terms of how much is enough. They never announce a simple formula, because they want to have the discretion, so they used that discretion in my case to give me tenure because I had pissed off senators.
I got lucky, so that’s part of the explanation for how I succeeded is I got lucky. It’s not the only thing, of course. It wouldn’t have worked if I had not produced anything of interest.
I also think that once you get tenure there’s just a lot more freedom to be a generalist. Now, you might say that I was a generalist before I got tenure, which is partly true, but I did focus enough to have a body of research that they could give me tenure credit for. I was managing to restrain my generalist tendencies for a while, while I tried to get tenure. That’s focusing on selling rather than buying.
Robert Wiblin: Did you find it hard to motivate yourself to do that?
Robin Hanson: Well, I guess, yes, but I succeeded; it is a matter of margins. I actually think, and this is relevant for most listeners who want to be intellectuals or academics, that academia rewards you for focusing on one thing and being really good at it, much more than is the natural inclination of most humans.
Most of us when in our free time we want to think and be an intellectual, we want to be more general. People who are intellectuals as a leisure activity, they are pretty general. They read on a lot of different topics and think about a lot of different topics. Even when they focus, they don’t focus remotely as much as a successful academic needs to focus.
If you want to be a successful academic you will need to focus more than you are inclined because you are selling to a world that wants you to be that focused. You can realize that just like with any job you’re allowed to have a hobby, you could spend 25% of your time on your hobby, but as long as you spend the other 75% on your job you may well do your job well enough. Later on you will have a lot more freedom after you get tenure, so it’s a great prize to go for.
Robert Wiblin: You’ve dived into kind of a lot of different topics over the course of your career. Are there any kind of research agendas that you’d be really enthusiastic to see listeners pick up the mantle on which haven’t already come up?
Robin Hanson: Well, I’ve tried to mention them already. I’m really excited about the possibility of prediction markets and decision markets; I think there’s enormous potential there. I also think there’s a lot of potential in related policy innovations: various ways we could change law, change zoning, change medical purchasing, et cetera. There’s just enormous potential there, but we haven’t done very much to actually improve things.
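For readers unfamiliar with the mechanics, prediction markets of the kind Robin pioneered are commonly run with his logarithmic market scoring rule (LMSR), where an automated market maker quotes prices from a cost function. A minimal sketch (the liquidity parameter `b=100` is an arbitrary choice for illustration):

```python
import math

def lmsr_cost(q, b):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b)),
    where q is the vector of outstanding shares per outcome."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, b, i):
    """Instantaneous price of outcome i: exp(q_i/b) / sum_j exp(q_j/b).
    Prices are positive and sum to 1, so they read as probabilities."""
    denom = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / denom

def trade_cost(q, b, i, shares):
    """What a trader pays to buy `shares` of outcome i: C(q') - C(q)."""
    q_new = list(q)
    q_new[i] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

# Fresh two-outcome market: both outcomes start at probability 0.5.
q, b = [0.0, 0.0], 100.0
p0 = lmsr_price(q, b, 0)
cost = trade_cost(q, b, 0, 10.0)  # buying pushes the price up, so cost > 10 * 0.5
```

Because the cost function is convex, large trades move prices against the trader, which bounds the market maker's losses by roughly `b * ln(n)` for `n` outcomes.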
I of course think there’s a lot of potential in analyzing the future by defining particular technology scenarios and working out the social consequences. If you learn a lot of fields and then just methodically apply all the standard tools, I think you can say a lot.
The Elephant in the Brain, I think there’s huge potential in just continuing to apply this idea that there are hidden motives to lots of other areas. We did ten areas in the book, but you could do another 20 or 30 and find a lot of hidden motives elsewhere as well.
Robert Wiblin: One thing I didn’t follow up on earlier that I should have: you were saying that psychologists read The Elephant in the Brain and they almost yawn, because this is just such common sense to them.
Robin Hanson: Yeah, basically.
Robert Wiblin: Oh, okay. Maybe I shouldn’t have been pushing back as though this was such a controversial idea if it’s just the mainstream consensus.
Robin Hanson: Among psychologists, but not among policy people. There are a lot of disagreements across disciplines. In fact, they’re some of the least excusable disagreements we have in academia.
Within any one subfield, if there are disagreements, they usually hammer it out. Now, they might hammer it out unfairly; they might squash a view that deserves more attention. But they will in fact come to a consensus within a small area where people talk to each other a lot. That’s quite consistent, so the rest of the world will see a consistent consensus within each area, but fields don’t coordinate nearly as well on much larger scales.
We don’t have any process for saying, “What are the fields that people should be studying and how much money should go into them?” We mostly just have a process where we just continue on funding whatever fields have been funded, even if other fields think they’re not very useful.
When fields disagree with each other across boundaries, we mostly just ignore that. For example, large areas of literature, they really hate economists. They just think we’re evil.
Robert Wiblin: Yeah.
Robin Hanson: We economists just go on ignoring their critique. We just don’t engage.
Robert Wiblin: Do economists think that literature analysts are evil?
Robin Hanson: No, no. We think literature’s fine, although we might think it’s not scientific enough or something perhaps, but we certainly don’t hate them. There are many other areas where people think economists are evil and just shouldn’t be doing what they’re doing, but we just ignore them and go on our way.
There’s a failure of engagement there. Again, our book, The Elephant in the Brain, is about a failure to engage a disagreement between policy people and psychologists. Psychologists think, yes, of course humans have hidden motives. We [inaudible 02:25:20] why we do things. The policy people say, “Yes, of course school is about learning, medicine is about health. Are you crazy?” Basically. Isn’t it obvious?
Even if there is a bunch of data that we don’t understand, we need to continue to puzzle over this data and collect more data so that eventually, presumably, the usual explanation will win. There’s just a strong reluctance to ever really accept a very contrarian conclusion on the basis of less-than-infinite data.
Robert Wiblin: Yeah. Speaking of which, throughout your career you’ve been pushing ideas or trying to get ideas taken seriously that people might regard as kind of strange and not really credible. What have you learned about how to get your ideas taken seriously by the people who need to take them seriously?
Robin Hanson: I don’t know that I have learned that. I’m not sure that I have gotten the right people to take these things seriously. Certainly one standard pretty obvious advice is pay your dues and collect the standard sort of credentials.
It’s going to be very hard to get people to take your contrarianism seriously if you haven’t taken their field seriously. That is go into some fields, learn their standard stuff, gain their credentials, show them that you understand their way of thinking, and then they might listen to you if you say that they are wrong and you should think about things differently. Unless you gain some credibility somewhere it’ll be very hard to do that.
Now, unfortunately there’s a sense in which different fields have different status, and people often just listen to the field with the highest status, as opposed to the people who actually study something in more detail.
Physics is pretty high on the status ranking of academic fields, so if you’re at a party and a subject comes up, physicists usually feel free to just make up whatever thought comes to the top of their head. Most people will kind of nod just sagely because, hey, they’re the physicist. That’s kind of a problem, of course.
Obviously, if you want to be a contrarian, collect some credentials first. That is the main thing I would say. As The Elephant in the Brain argues, academia is mostly about credentialing impressiveness that people can associate with. That’s mainly what it focuses on, so that could also help explain many of its failures.
You should be more willing to believe that academia is wrong when the explanation for why it’s wrong is it just neglects something because it’s hard to be impressive there, because academia does do that. Academia will neglect just very simple qualitative arguments for something, even if they’re solid qualitative arguments, in favor of complicated math and big computers and etc.
Robert Wiblin: What’s a nice example of that?
Robin Hanson: Well, I would say The Age of Em, for example, is very low-tech. Even The Elephant in the Brain here is pretty low-tech. Most of the blog posts I’ve ever written that had insight in them were relatively low-tech.
For the most part, academia doesn’t disagree with them. They just look at them and say, “Well, you couldn’t publish that in a top journal, so what’s the point?” It just, like, doesn’t really exist from their point of view, because it’s not the sort of thing that could win their status games.
Robert Wiblin: Speaking of contrarianism, I think it’s fair to say that a couple of times over the years you’ve kind of enjoyed riling people up a little bit, making controversial arguments and enjoying the attention and the disagreement that that gets. Do you think that’s a good approach to take, because maybe it draws more attention to your ideas, or do you think maybe that the downsides outweigh the upsides because people end up not liking your ideas or you?
Robin Hanson: Well, a lot depends on how you rile people up. I certainly wouldn’t approve of just insulting people, saying nanny nanny and your mom wears army pants, or whatever you might do to just taunt somebody into replying. I don’t think I would ever do that. But I think the information value of any particular thing you might say is proportional to the scope of what you say and inversely proportional to the a priori probability people would have given it.
The more you can find something that people would’ve assigned a low probability to, the more that has a high information value if you can convince them of it. I think it’s completely reasonable to focus on finding the things that people would be the most surprised to hear and telling them that. That’s a completely reasonable strategy for identifying valuable research.
When you find something that people would assign a low probability to you need to make that clear. You need to make it clear and direct that they would’ve assigned a low probability to this in the absence of your arguments or analysis.
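Robin's heuristic here matches the standard information-theoretic notion of surprisal: the information carried by a claim grows as its prior probability shrinks. A minimal sketch (the prior probabilities below are made up purely for illustration):

```python
import math

def surprisal_bits(prior: float) -> float:
    """Shannon surprisal: bits of information a reader gains when a
    claim they assigned probability `prior` turns out to be true."""
    return -math.log2(prior)

# Hypothetical priors a reader might hold before seeing any argument.
consensus_claim = 0.9    # e.g. "school teaches useful skills"
contrarian_claim = 0.05  # e.g. "school is mostly about signaling"

print(surprisal_bits(consensus_claim))   # ~0.15 bits: little to learn
print(surprisal_bits(contrarian_claim))  # ~4.32 bits: far more informative if shown true
```

On this view, convincing someone of a claim they gave 5% credence conveys nearly thirty times the information of confirming one they already gave 90%, which is why a surprising-but-true result is so valuable.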
Robert Wiblin: Is that a good idea? Maybe you just want to make it seem as commonsense as possible. My impression reading your writing over the years is that you started to do that more and more, that in the past you used to kind of highlight the ways that the ideas you’re promoting were counterintuitive, whereas now I think you’re happier to make them seem quite mundane.
Robin Hanson: You do want to make the argument seem as persuasive as possible, so there are two ends. On the one hand, you’re saying something they’re surprised by, which is why it’s valuable. On the other hand, when they read the argument it should be as obvious and persuasive as possible.
It should both lead to a conclusion that would’ve been surprising and, in the context of the argument, be hard to disagree with. That’s the ideal you’re going for; you want exactly both of those. But in order to highlight the importance of what you’re saying, you need to highlight that they didn’t expect this, that this is in fact a surprise. Otherwise why is it interesting?
Robert Wiblin: Well, I think that is where the problem comes in because I have seen … There’s some people I know who are very good at taking things that people might object to or taking things that people might be very skeptical of and making them seem just extremely commonsense.
I think that means that people, if you can get them to read the argument, are then more likely to accept it because you increase kind of their [inaudible 02:30:38] being true by framing it in such a way that it seems very natural, and then you show them more evidence that it’s true.
The problem is that arguments of that kind don’t tend to get very much attention, because they’re too boring: people don’t object, people don’t reply, so they don’t get shared very much. That’s quite a deep dilemma, I think, in how information gets shared across the internet: the worse the argument, or the more counterintuitive it is, and in a sense the less likely it is to be true, the more likely people are to pass it on to others.
Robin Hanson: I think there is a status correlation here. If you’ll notice that people’s first books are usually more controversial and contrarian than their later books, that is once you achieve a status then you can say something that people kind of mostly agree with and you can still get them to buy your book.
Your first book needs to stand out more as something they might disagree with. Similarly, when you are on the outs, trying to get attention and get tenure, you’ll need to take a stance that stands out more. Once you’re a high-status Harvard professor or something, you can just repeat trite truisms, add the authority of being a Harvard professor, and people will accept that as a contribution.
In fact, people at the highest levels tend to confirm what you would’ve expected, but supposedly with more rigor and care and data, or something. It’s people who aren’t at that level, and can’t get attention just by confirming what people think, who have to say something more surprising to get attention.
Of course, everything depends on whether the surprising thing you say is true. One strategy to get attention by saying something surprising is just to say something crazy.
Robert Wiblin: Yeah.
Robin Hanson: And not having your audience know that it’s crazy, or that your evidence doesn’t really support it. The trick is to say something surprising and have a solid argument for it. That’s how I hope to stand out: not just saying something contrarian or riling people up, but also having that solid argument for it.
Robert Wiblin: Yeah. I guess that helps to explain a bit why kind of the strident contrarianism seems to be more common I think among younger people, because they’ve got to find a way to make a name for themselves. One way to do that is to say things that are interesting and wrong.
Robin Hanson: Right. They do have to stand out, yeah. Yeah, I think it’s worth looking at people’s books and their first book and seeing which had more surprise and disagreement. I think people’s first main book that gave them their fame tends to be more controversial, tends to have stronger claims that people disagree with.
Robert Wiblin: Well, what have you changed your mind about over the last five years?
Robin Hanson: Over the last five years? Well, I could say that, well, I mean, I’ve certainly changed my mind that Blockchain can cause a splash, because it has. I was initially not terribly optimistic about it, but there it is making a splash.
I certainly didn’t predict Trump or the increasing polarization. I noticed a few years ago that we had been at a global and historical peak of political polarization, and therefore predicted regression to the mean, but we haven’t regressed to the mean yet. We are still moving away from it. I still kind of predict regression to the mean, but, hey, there’s a momentum effect, so it could keep going the other way for a little while.
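The tension between mean reversion and short-run momentum can be sketched with a toy AR(1) process; the coefficients here are invented purely for illustration, not an estimate of actual polarization dynamics:

```python
import random

def ar1_path(start, mean, phi, noise_sd, steps, seed=0):
    """Simulate x_{t+1} = mean + phi * (x_t - mean) + noise.
    With |phi| < 1 the process drifts back toward `mean` in
    expectation, even though noise can carry it further away
    from the mean for several steps in a row."""
    rng = random.Random(seed)
    x, path = start, [start]
    for _ in range(steps):
        x = mean + phi * (x - mean) + rng.gauss(0.0, noise_sd)
        path.append(x)
    return path

# Start far from the long-run mean; forecast reverts geometrically:
# E[x_t] = mean + phi**t * (start - mean).
path = ar1_path(start=10.0, mean=0.0, phi=0.8, noise_sd=0.5, steps=50)
```

The point of the sketch: a mean-reverting forecast can be right in expectation while the realized path keeps moving away for a while, exactly the situation Robin describes.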
I got a grant funded by Open Philanthropy to analyze a different future scenario in the same style as The Age of Em. That’s a scenario where I thought I should be able to figure out something, but I wasn’t really sure, and I’ve been pleased that I have been able to come up with some concrete conclusions about an alternative scenario. I was surprised at the kind of conclusions I could draw. I’m pleased with that, I guess.
Those are some of the surprises I might have. I guess another surprise, well, I mean, I kind of always knew, but I’m becoming increasingly discouraged by the fact that our usual forums of argument just don’t reward accurate argument very well [crosstalk 02:34:24].
Robert Wiblin: Yeah. It’s a deep problem.
Robin Hanson: I read articles or reviews in the paper, and I say, “I know that these people know the counterarguments, or they certainly could if they put a little effort, and they’re just ignoring it because their audiences don’t know it.” That’s true in academia a lot, too, that people just don’t have the incentive to make an argument that is robust against counterarguments when their audience won’t know those counterarguments.
I’m more and more discouraged, wondering, okay, but what can you do in the face of such things? That’s why I was trying to design prediction markets as a solution to that, but we’re not going to have that for a while, so I’m not sure what exactly one … When exactly should one … When you notice an argument that’s just … Could easily be rebutted, do you bother to rebut it? Because life is short and there’s so many things out there that could be rebutted. I’m just not …
I guess at some point you might just think, “Well, just find a thing you could say that you might get people to listen to that would be useful and say it, and don’t spend all your time rebutting people who won’t reward you for pointing out the errors.”
Robert Wiblin: Can you think of any times when you really formed a really incorrect belief that you think you should’ve been able to realize was incorrect at the time?
Robin Hanson: Well, the whole mistake at the beginning of my life of taking people at their word for their motives on reflection is pretty suspicious, right? Economists tend to think of themselves as pretty cynical about people’s motives, and they’re not. They’re actually pretty gullible like everybody else.
Robert Wiblin: You’ve been following the growth of the effective altruism movement for a while and seeing what kind of new ideas we come up with. Has it exceeded your expectations or fallen short of them? I guess what would you like to see us do differently if we could do one thing differently?
Robin Hanson: What I tend to want to focus on is insight, so when I hear people talk about effective altruism I think, “Okay, if you’re serious about being effective, you think about the question, you come up with some insights.” I want to know over time what insights have people gotten? What are the key new insights that people just hadn’t realized before that you could say, “This is what we’ve learned and here’s what you should know?”
To the extent that a field like effective altruism has collected those insights, my hat is off. I say, “All right, that’s the kind of intellectual progress I’m hoping for.” People open up a new area, they come up with insights, and they share them, and we all learn more and our intellectual world progresses.
This is the thing I would want to ask people in effective altruism, which is, okay, what have we learned? Because often what I’ve heard is the sort of thing I said, but we knew that already, right?
Robert Wiblin: Yeah.
Robin Hanson: I want to know what we’ve learned in that. That’s the question I put to you or to that community. Please try to summarize what you’ve learned. What do we know now that we didn’t know five years ago or ten years ago that all your work has produced so we can say, “You guys have done some work?”
It’s not enough to have your heart in the right place and to be talking about an important topic. That’s the prerequisite of trying to find insight, but until you actually find insight, something you can pass on and say, “This is what we’ve learned,” you still kind of fail.
Robert Wiblin: We’ve got to finish up because we’ve been going for a couple of hours now, but just a final question. What’s kind of changed about the world since you were my age? Have things gone better or worse than you might have hoped?
Robin Hanson: Well, I remember when I was your age or younger I felt like I was this lone person without many other people who thought like I did. I had subjects I was interested in, but there weren’t very many other people who took these big picture questions seriously, and now I can feel that there’s another generation or two out there who, even if they aren’t focused on exactly what I would focus on, are actually thinking about interesting questions. And it somewhat brings a tear to my eye to know that humanity will go on and continue to ask deep questions and make progress. I love that.
Robert Wiblin: My guest today has been Robin Hanson. Thanks for coming on the show, Robin.
Robin Hanson: Great to be here.
Robert Wiblin: If you enjoyed that episode you can hear more of Robin’s ideas by visiting his blog, Overcoming Bias, or buying one of his books.
The 80,000 Hours Podcast is produced by Keiran Harris.
Thanks for joining, talk to you next week.