Transcript
Cold open [00:00:00]
Bryan Caplan: Whenever there is a disaster, the normal reaction is, “Something has to be done to stop this from ever happening again.” Again, the question is: Maybe we should just stay the course, because this is the right number of disasters to have? Which horrifies people. But look, we shouldn’t have earthquake codes so strict that no building ever collapses, no matter what, because the effect on housing costs would be astronomical. So why don’t you tell me what is the correct number of houses to collapse in earthquakes? And then we’re only going to cover it in the media if we exceed that number. You just imagine people’s heads exploding, like, “No, we have to cover every single one so that we can have the proper reaction!” This proper reaction is what makes housing costs too high.
Rob’s intro [00:00:43]
Rob Wiblin: Hey listeners, Rob here, head of research at 80,000 Hours.
The idea that if you want to be a better person you should stop reading the news will strike some people as pretty much nuts, and other people as pretty much obvious.
In today’s episode, repeat guest Bryan Caplan and I make a full-throated defence of the idea that following the news is neither truly enjoyable, nor particularly helpful if you want to understand the world or make it a better place — organising our conversation around the book Stop Reading the News by Rolf Dobelli.
I’ve put my time where my mouth is and have mostly quit the news, and I hope we inspire some of you to think about doing the same.
Second, we’ve had many episodes over the last year on ways that development of AGI could be extremely influential or accidentally go wrong. So I thought it was time to give over the mic to someone like Bryan who is pretty sceptical that AI is going to lead to extreme, rapid changes, or that there’s a meaningful chance of it going terribly.
My primary goal was to better understand what underlying beliefs were causing Bryan to see things so differently from me, something which I think I understand much better now. And having identified some of the key cruxes of disagreement between us, I’ll be excited to really interrogate those points next time we speak, fingers crossed.
And finally, Bryan has for many years argued that “rational irrationality,” or rational ignorance, on the part of voters leads to many very harmful policy decisions. So I ask him to explain and justify why he thinks that, and whether if that’s right it leads to any high-value opportunities to get better policy outcomes.
Without further ado, I bring you Bryan Caplan.
The interview begins [00:02:31]
Rob Wiblin: Today I’m again speaking with Bryan Caplan. Bryan is a professor of economics at George Mason University and the author of a number of books. In episode #32 — Economist Bryan Caplan thinks education is mostly pointless showing off. We test the strength of his case — we talked about his book The Case Against Education: Why the Education System is a Waste of Time and Money. And in episode #126 — Bryan Caplan on whether lazy parenting is OK — we spoke about selfish reasons to have more kids, why being a great parent is less work and more fun than you think, as well as his collection of essays titled Labor Econ Versus the World.
Later, we’re going to chat about Bryan’s most recent compilation of essays, which is available on Amazon now, entitled Voters as Mad Scientists: Essays on Political Irrationality. Agree or disagree with him, Bryan is someone who will always give you the benefit of his candid opinions — so thanks for coming back on the podcast, Bryan.
Bryan Caplan: Delighted to be here, Rob.
Rob Wiblin: I hope to talk about your take on how AI advances might play out positively or negatively, and why you might have a moral duty to stop reading the news. But first, in our last interview we were talking about how large the returns are to more intensive helicopter-style parenting, and at one point you said, “Let Wiblins blanket the Earth. That’s my motto. We need lots of Wiblins” — which is naturally one of the more flattering things people have said to me. And to that point, I can share some good news, which is that my wife is now pregnant.
Bryan Caplan: Outstanding.
Rob Wiblin: So we have at least half of one Wiblin coming up.
Bryan Caplan: That sounds great.
Rob Wiblin: How many people have now said that your book Selfish Reasons to Have More Kids was causally involved in them deciding to have kids?
Bryan Caplan: Some hundreds. I’ve actually done Twitter polls where I just say, “How many kids have I talked you into having?” — figuring obviously not everybody that I’ve convinced is reading that particular tweet — and that comes out to hundreds. You may say that some of those people are exaggerating. Seems like an odd thing to exaggerate, actually, but in any case, I think that is a reasonable lower bound. I mean, how many people could actually be reading every single tweet? So I wouldn’t be surprised if it’s over 1,000 people who exist because of the book.
Rob Wiblin: Yeah, I’ve been trying to figure out whether your book caused me to have kids or not. It’s a slightly complicated question, because I read the book a while ago and I probably already somewhat agreed with the general idea that helicopter parenting was over the top. Having kids or not has been a kind of close call for me, so there’s a reasonable chance that the book put me over the edge. Maybe like a 10% or 20% chance.
Bryan Caplan: All right, so 0.15 Wiblins. Or if I push you over the edge and then once you have one, you’re like, well, we don’t want this kid to be an only child, and we got two. It can create a whole chain reaction.
Rob Wiblin: Yeah, I’ll let you know how we go.
Why you shouldn’t read the news [00:05:02]
Rob Wiblin: Let’s push on to a topic that we’re both very passionate about, which is whether it’s good to read the news or not. I have to admit, Bryan, you were actually the second person on my list to talk about this, because I was particularly inspired by a book written by the author Rolf Dobelli called Stop Reading the News: A Manifesto for a Happier, Calmer and Wiser Life. It’s a super entertaining read, it’s a fierce book, and I got through the audiobook in just 90 minutes — so I can recommend checking it out even if you’re stretched for time. But it turns out Rolf doesn’t do interviews at all, and you were the second person on my list, so two out of 8 billion is not too bad.
A little bit of background on this section is that, after hearing the arguments Rolf makes in that book, my wife and I decided to stop reading the news basically cold turkey at Christmas last year, because we thought it was making us sad and anxious without actually doing that much to help us make the world a better place, or even understand the world. I'm kind of naturally a news junkie, and before that I was spending maybe one to three hours a day reading the news, on average — and that's probably about what I've been doing for most of my adult life.
Since then we've mostly stuck to it, and I'd say my news consumption is down about 90%. Some stuff still gets through in the comedy we watch. I listen to a Spanish language learning programme which has news in it sometimes. People bring it up in conversation in person, which is fine I guess. Sometimes, preparing for the show, I check out specific news items that I'm actively looking for, for more technical stuff, and I don't try to avoid that. But basically, I haven't checked the homepage at the Financial Times or The New York Times or The Atlantic or The New Yorker or the BBC, or the newsfeed on Twitter or Reddit or anything like that, for about 10 months. It's all been blocked.
And I really do think I’m happier and more productive than I used to be. Certainly I’m not as anxious, and my moods are not as volatile as they were before.
I was spending 10% of my waking life reading the news. That is a lot of time. It’s not enough to say that there is some benefit to that; you really want to say that this is providing 10% of the value in your life. It has to be actually providing a significant amount of goodness. And it just is so unclear that it’s doing that. It’s unclear whether it’s positive or negative, let alone providing 10% of my wellbeing.
Bryan Caplan: By the way, Rob, it's fascinating that you and your wife quit together. I think of this as actually very similar to a couple disaffiliating from a church. If only one does it, then it actually causes a lot of conflict. And even if it isn't a direct source of conflict, it's just a rift, because one person is still part of a notional social group and the other one isn't. So it's a tax on the relationship. So to do it together, I'm really impressed that you were able to pull that off. It is a thing where you are simultaneously getting the benefits of it — not just individually, "I'm not doing it," but together, "we're not doing it" — so now we can together go and find something else to do instead.
Rob Wiblin: Yeah, I mean, it’s indicative of the fact that we got married because we think somewhat similarly about issues. So we both heard this book and we were like, “This is just right; this is overwhelmingly right. So why don’t we act on it?” It would have been so much harder if it was only one of us, because then the other one would want to talk about the things that they’re reading, and maybe they’d be a bit dismayed that their partner takes so little interest in the world or something like that.
Now I think that as a result of reading less news, I do read more books and I do listen to more lecture series. This company The Great Courses produces fantastic lecture series, and I’ve just been churning through them this year. But I would say about half of the time that we’ve freed up, I just play computer games with my wife. But I honestly think that is a much better use of time, because, firstly, it’s just fun. I come away from playing computer games with my wife feeling refreshed. I’m excited and happy. I’ve had a good time rather than feeling actively worse, rather than feeling drained because I was reading about something horrible, and it’s good for our relationship and it’s just inherently enjoyable during the time. So I’d say even playing computer games is maybe a more respectable [use of time].
Bryan Caplan: And getting ready for baby: The family that plays together stays together.
Rob Wiblin: Exactly, yeah. So I think we’re sticking with it. And my hope is that some listeners to this episode might reflect on whether they want to make a similar change in their life, all things considered.
So with that out of the way, what is your overall take on news?
Bryan Caplan: I’m not as extreme as you, actually — but you’re making me feel guilty, like maybe I should be. There’s sort of two different takes, or two different angles.
One is: What is the actual effect of following the news on the life of the individual that’s doing it? And I think it’s hard to imagine that it’s not just what you’re saying. Just imagine doing a time diary approach, where you are talking about what you’re doing and what your mood is at every minute of the day. Obviously, when people are watching the news, normally stuff on the news is quite horrifying, and they’re getting upset and agitated. If you just think about people that are angry about things every day, normally they don’t have enough stuff going on in their personal lives to actually get that angry — so what they are angry about is stuff that they are hearing about on the news. So that one I would just say is the main selfish case.
And then to say, “But what if I fail to learn something that’s really important for me personally?” Well, what are the odds of that? That hardly ever happens. And especially if it were going to be that important, you would hear about it almost certainly in a number of other ways and therefore it wouldn’t matter. I remember actually my family was driving down from DC to Florida on the very day that the George Floyd riots were hitting the country. So I had heard something about it from Facebook. But if we were just totally not following the news, since I was going to be visiting some friends, I would have gotten emails from them — I did get emails from them — saying, “Don’t come to Charleston, South Carolina; it’s a war zone here.” And therefore it would have been fine.
So it really just very rarely happens that not following the news would actually wind up having any harm to you personally because you were uninformed, and there are the psychological gains from not having the negativity — because as we know, news veers very negative.
Now, in terms of “How can you become an informed and enlightened person?” — that’s the other perspective. And this is one where my first reaction is: read it in history books. If it’s that important, it’ll be in history books. Read that. When I do want to get up to date with something that’s been going on, I find it’s very helpful just to read Wikipedia. It’s a lot less emotionally affecting because they’re aggregating a whole lot of information. It’s not this sense of, “What’s going to happen next? What’s going to happen next? Oh no. Oh good. Oh no.”
So in that way, you wind up getting not only less of an emotional rollercoaster, but you also get a better picture of what’s really going on when you read the Wikipedia article on an event — because there’s filtration, there’s curation, stuff that turns out to be not actually true and important generally doesn’t appear on the Wikipedia article. And in that way, you can still be highly informed.
For example, with all due modesty, I’ll say I think I’m at 99.9% on Israel and Palestine. For Americans anyway. You can say, “Well, for an American…” But still, I think that’s way beyond what most people who follow the news are. That’s because I’ve just read a number of books on it, as well as some really good graphic novels — honestly, some great graphic novels on this topic. But in terms of knowing what’s going on, I think I’m really quite good, but without that sense of dread and horror and outrage that people who are watching the news minute to minute actually experience.
Rob Wiblin: So there’s two different threads there. One is: Does it help you make decisions in your personal life? Does it provide personal benefit? And then there’s another question: Does it help you to understand the world? To have a better idea about what’s going to happen in the big picture, and maybe how you could try to improve it?
On the first one: that’s the first argument in the Stop Reading the News book; it’s just “news is irrelevant.” And Rolf suggests the typical educated person who follows the news might read 20,000 news articles or 20,000 headlines in a year. How many of those did you act on, did you do a thing on? If you just think about the number of things you can get checking the homepage every couple of hours, it could add up.
And then the question is: Out of all of those, how many concrete decisions did you make differently in your life? The answer is there are a few. You mentioned travelling when there were riots going on. Rolf mentions that he went to the airport once when the airport was closed because of a volcano. I think sometimes it was practically useful to follow the news around COVID because that was going to affect the plans that you might make. But it’s kind of the exceptions that prove the rule. Like you really have to stretch to think, what did I do differently?
Bryan Caplan: Yeah. Even on COVID, normally when I wanted information, I’d read STAT News, which is not news in any normal sense. It’s actually giving you statistical information.
Rob Wiblin: Exactly. You could go to Our World in Data or something like that, if you wanted to actually see what’s going on.
Bryan Caplan: Yeah, I was doing that a lot too.
Rob Wiblin: OK, and then there’s the other thread about understanding the world in some more broad sense. Perhaps you value understanding the world for its own sake, or you think it’s going to be useful. And on that, I think you probably have the view that not only is news not helping here, but it’s actively harming you: that it might well cause you to have a less accurate model of the world. Could you give some reasons for that?
Bryan Caplan: Right. Probably the very best way of understanding this is a thought experiment from Michael Huemer: Imagine there’s a school, and all this school does all day, every day, is tell the students true negative things about Jews. So from morning until night: “…and then there was a guy, Abraham Schlomo, who mugged a woman in 1872 on the corner of Humborg. And we just think that you ought to know all about Abraham.” Every day it’s another thing: “…then there was a massacre committed by Jews in the year 23 BC,” and on and on.
Then someone says that this is very biased and misleading. And they say, "What do you mean? Every word is true. It's all fact checked." And you look at it, and it is all fact checked. However, the point of it is to create an overwhelmingly false picture: that the Jews are terrible. Basically, the way that they're presenting it, they're showing you that all Jews are terrible and all terrible things are Jewish. And even if they said, "We never said that," the implication, the insinuation, was overwhelming.
And this is what I say is really going on with the news. It’s not so much that it’s fake news or misinformation, but rather that the stuff is generally — not always, but generally — true as far as it goes, but with this question: Why are you telling me this stuff? Why is it that I need to know about every plane crash unless your point is that planes are really dangerous and you shouldn’t fly? Why is it that you tell me about every terrorist attack, unless your point is that terrorism is one of the most important causes of death in the world?
So that’s what I’d say is really going on. Yes, sometimes the information is just not even true; it’s not even real information. But the main thing is they’re just giving this overwhelmingly skewed view of the world. And what’s the skew exactly? The obvious one, which I will definitely defend, is an overwhelming left-wing view of the world. Basically, the woke Western view is what you get out of almost all media. Even if you’re reading media in other countries, it’s quite common: the journalists in those other countries are the most Westernised, in the sense of they are part of the woke cult. So there’s that. That’s the one that people complain about the most, and I think those complaints are reasonable.
But long before anyone was using the word “woke,” there’s just a bunch of other big problems with the news. The negativity bias: bad, bad, bad, sad, sad, sad, angry, angry, angry. This is the way that the stories are, even when the ideological agenda isn’t really there, or at least it was a different ideological agenda. So that’s a major problem.
And then another big part of it is just sheer innumeracy. It's not like every day the news tells you death continues to kill; we have not yet solved the problem of ageing. That's almost never on the news. Instead, it is these very tiny but vivid risks: terrorism, plane crashes. Now, one thing that I often do instead of news when other people are doing it is reading Wikipedia. I do like the "On this day" feature of Wikipedia, and yet about 20% of what are otherwise actually important and interesting historical events are terrorism and plane crashes and train crashes.
Now, here’s the thing. You might say, all right, terrorism, we know it’s not important as a direct cause of death, but it’s important because it drives other policies that are important. Well, maybe because you guys are always telling everybody that it’s really important. But the plane crash ones, there’s just no conceivable justification for it, because plane crashes do not cause a bunch of other horrible things. They are quite self-contained, actually. Air travel is so regulated at this point that when there’s a plane crash, there really is not much of a push towards further regulation. The last time I can even remember an air disaster leading to a call for more regulation was when John F. Kennedy, Jr. died in a light plane crash. And then there were some peeps about cracking down on light planes. Although actually, that industry was so thoroughly destroyed by lawsuits that there’s really not much left to do in the US.
A funny thing about light planes is basically almost the only ones that are left around are ones that come from bankrupt companies that are just in the resale market, because now they’re bankrupt and there’s nothing more that can be done to those companies. So now you can fly around in those planes from decades ago and keep them together with spit and glue, no one to blame but yourself for it.
Rob Wiblin: Yeah. I think one reason people can be negative about the news is a more conspiratorial mindset, where they imagine that news companies are really pushing an aggressive agenda, trying to persuade people of things because they think that’s the right thing to do. I think some of that does happen, but overwhelmingly that’s not really the problem that I’m worried about. I think it’s just the agenda of the news is to keep you watching the news: it’s to bring you back to the homepage regularly, because it’s a business that gets more money when more people read the stories.
So basically, they just pick out whatever headlines, they highlight whatever things are most emotionally gripping to the majority of viewers — which is like you’re saying: terrorism, plane crashes, random events elsewhere that are horrible stories of things that happen to people — and a massive undercoverage of the actual important things, like: Are we curing heart disease? How are we going with antibiotic-resistant bacteria? And on and on.
So for someone who just reads the news naively, checking homepages: I don't know, but on balance I think they're probably getting a less accurate view of the world, unless they are extremely selective about what they're reading.
Bryan Caplan: Yeah, especially just doom and gloom. On the idea that the media is purely trying to make money and so there’s just not that much ideology going on, I think the main thing to realise is that there’s a lot of people who want to hear some negative stuff, but they’re not that ideological. And when you’ve got those incentives set up, it is fairly easy for the people that are running the news to push an agenda.
I don’t think there’s a conspiracy; I think that’s the wrong way of thinking about it. I think rather there’s just worldviews that are in the air, and when you are someone that believes in it, it starts to seem like fact to you. And therefore it’s like, well, we need to find out something negative to talk about. We all know that racism is a super negative thing. Let’s have tonnes of stories about racism. That’s the kind of thing that’s going on. And if people on the other side had a different ideology — where they said the problem is reverse racism — that would be a bad business decision. But if the people listening are fairly open-minded to hearing about anything terrible, it’s like, “I’ll listen to anything you say, as long as it’s terrible.”
That's where I think a lot of the wiggle room for ideology — and in the long run, some kind of brainwashing — comes in. I think it is very hard for a psychologically normal person to listen to a very biased source of news for a long time and not just start picking it up. Even when people realise there's a bias, it just requires so much mental effort to constantly be saying, "PBS said this, but we all know what PBS is like, so take it with a grain of salt." It's like, "Half of the stuff has an agenda, but the other half is just normal." No, there's an agenda all the time.
During COVID, my wife just started watching the news all the time actually, and this was very hard on me, I will say. Now, normally she will turn it off when I enter the room, but she’ll also go to bed with it, and then when I come in, I get just my minute of news. This is where I can see, my god, this is really biased. At the same time, they’re presenting themselves as, “We’re serious. Unlike something like Fox News, for god’s sake, where they have an agenda.” It’s like, you have a different tone, but you’ve got an agenda, for god’s sake. Obviously.
Rob Wiblin: I don’t want people to maybe write this off if they don’t agree with the political bias angle. Personally I agree that there are some messages and topics that journalists obviously feel more comfortable covering than others, and that can lead you to be on the receiving end of a pretty non-representative sample of information or ideas or evidence out of all the information that’s out there.
I imagine some listeners will be nodding along in agreement to that, agreeing that much of the media is pushing a particular set of views in a way that’s tedious or interferes with just giving people the facts wherever they might fall.
But other folks won’t feel that way, and at least for me and Rolf Dobelli, that’s not the crux of the issue, because if I really wanted to, I could find outlets or journalists that feel less driven by any particular agenda.
But even if I did that, the many other reasons to avoid regularly checking the headlines would remain just as compelling to me. Among them: news gets risk assessment completely wrong, just as you've been describing; it generates a kind of ambient, chronic background stress that's bad for your health and your wellbeing; it encourages us to be extreme generalists, when in order to usefully contribute to the world you typically need to specialise and develop unique competence in some narrower area; it mostly focuses on events far away from us that are the most outside of our control, which encourages us to feel powerless; and it encourages us to join in on ideological stampedes, without being too careful or waiting around to find out whether or not they're truly justified. And of course, simply the day-to-day flow of random stuff happening completely obscures the big picture. If you really want to understand the world, you've got to read a textbook or an encyclopaedia or Wikipedia, or maybe even papers.
Bryan Caplan: Or Our World in Data, man.
Rob Wiblin: Yeah, Our World in Data, exactly. News reinforces our natural tendency towards oversimplification and hindsight bias. How often do you get people writing, “This was extremely hard to foresee; we really don’t actually understand why this happened”? Almost always, people are trying to offer some explanation. Usually it has to be oversimplified because we don’t even understand what’s going on. News reinforces availability bias, leading us to base our decisions on rare but shocking events, or base them on whatever someone else wanted us to be thinking about. The news tempts us to form opinions on issues that don’t really interest us, or that are too complex to comment on sensibly without really in-depth analysis.
Bryan Caplan: You’re reminding me of a great movie recommendation. The original Anchorman is funny, but Anchorman 2 is profound, because it is a story about the rise of cable news. So there is a scene where Will Ferrell is asked to go and cover a car chase. Now, they’ve got no facts, so through his earpiece you hear the station manager just saying, “Speculate, speculate.” And he just starts saying, “There’s a car chase going on. The man possibly well over six foot six tall, possibly intoxicated…” and you’re just listening to this, and yes, they are inventing what we see every day in real time here. And that line of “Speculate, speculate” is actually a catchphrase in my house: when somebody doesn’t have any actual information, but you want to have a story to entertain and grip the audience, just speculate. Who knows what’s going on.
And the scene is really funny, because I think Will Ferrell is broken up from his wife at the time, and his wife is interviewing Yasser Arafat — it’s a period piece — and Will Ferrell is covering the car chase. And it starts getting so much attention that meanwhile, over on the Yasser Arafat interview, they’re saying, “We may need to go and preempt the Yasser Arafat interview to switch over to the car chase.” And then Yasser Arafat says, “I would like to see the car chase.” So this movie is just so deep. I’m not even joking. Just watching this movie, it’s hilarious. They got right into the essence of this garbage.
Rob Wiblin: Yeah. Every so often I do get hit with the front page of a newspaper, or I see a whole bunch of headlines somewhere. And these days, my reaction is one of being kind of angry. But not about the news — rather at the newspapers for wanting to shove this trash in my face. I’m like, why are you telling me this? Imagine if someone walked up to you on the street and was just talking about terrorist attacks, talking about awful things that happen. I’m like, go away. I understand that this is profitable for you, kind of. It’s very slightly profitable, but you’re just destroying my peace, and not helping me in any way. It’s like a complete scam as far as I can see. It’s a scam in a pretty significant way.
Bryan Caplan: What’s also striking is that when people watch the news, they get a kind of superiority pleasure from knowing a little bit before you do and then telling you, “Have you heard the latest?” “No, what is it?” “Oh, some guy was blown up in Ireland.”
Rob Wiblin: “OK, I’ll go act on that.”
Bryan Caplan: This is a case where when it’s happening to me, I’m paying attention to the faces of the people and just looking at the reactions. You can see that no matter how awful the story is, normally the bearer of bad tidings has this little look of smug superiority, like, “I heard before you did. I know more than you. I am a better person than you.” And then the other person has a little sense of, “Oh, I’m not as informed as you. You are, in this certain way, superior to me. Now please inform me so that I may join the superior group and not be left out in the outer darkness of uninformed people.” It is a very strange dynamic.
Rob Wiblin: I would like to think that I wasn’t guilty of that, but probably.
Bryan Caplan: It's hard. Someone sees a story, then they see a person, and they want to tell them. It's memetics in one of its purest forms, really. The news people aren't just finding things people want to watch; they're finding things people want to repeat.
Rob Wiblin: So last night, my wife and I were watching a video about South Africa over the last 30 years, talking about how things had kind of gone downhill or hadn’t been getting better since 2010. But I was struck at the end of this 20- or 25-minute explainer video at how few numbers had been mentioned. I think there were like two or three graphs in the entire thing. And I was like, I can’t tell whether this is true or not, because it’s just not enough data cited. So then my wife and I went to Our World in Data – South Africa, and went through 10 or 20 different graphs, looking at all sorts of different things and how they’ve trended for South Africa over the last 30 years. And actually, the broad story held up. But the thing was, I thought that I learned a lot more doing that in a couple of minutes than I did watching this video with a whole lot of additional commentary that is really just some non-expert’s opinion on that. So, yeah, ourworldindata.org: cannot recommend strongly enough.
Bryan Caplan: Oh, yeah. Huge service to humanity.
Strongest arguments for sticking with the news [00:28:37]
Rob Wiblin: OK, so broadly speaking, we agree on this topic. I feel like I should stick up for the strongest arguments that someone in the audience could offer for continuing to stick with the news, because I don't think it's a completely hopeless case. Maybe the first rebuttal that I could imagine someone saying is, "I mostly read good news — like a weekly summary of world events in The Economist, or long essays in The New Yorker, or long big-idea pieces in the Financial Times — and I'm not that interested in random attention-grabbing, shocking headlines. Is that really so wrong? Maybe this is a net benefit."
Bryan Caplan: That doesn’t sound crazy. I would just say that if someone were to say, “You’re basically right, but I can cut down 90%; I can still be almost as well informed while reducing the harm,” I think that’s a really obvious position, and I think that one’s almost impossible to argue against. What if you spent half as much time in the news? Would you really be noticeably less informed? No. But would you be less unhappy? At least in the time diary sense, where you are counting the experiences of the day, then I don’t see how you could fail to be more happy as a result of cutting down 50%, with really virtually no change in the level of knowledge that you have, even about the events themselves.
Rob Wiblin: Yeah, there are some pieces that I miss. I do think that the big-idea pieces in the Financial Times I do slightly miss. But I know that if I went back into that, then I would start getting grabbed by all kinds of different news stories. It’s like a massive soap opera that’s going on, where it’s really hard to stop watching a soap opera if you’re on the edge of your seat about all of these plotlines that are going on.
Bryan Caplan: You could really watch the soap opera once per week and still follow the story perfectly well. The hardcore soap opera, where it’s on five days a week, has almost no plot. They just have someone waiting to open the door to the hospital room to see their beloved that was burned alive in the fire. They can stretch that out to days, the hand getting closer and closer to the door.
At the same time, I do want to say that I disagree with one common critique of the news, which is that the journalists are terrible people and lie and distort. All my first-hand experience with journalists is pretty much good. I really have to stretch my imagination or really have to remember hard to find any time that I thought a journalist did not treat me well.
Especially among professors, there’s such a sense of superiority about journalists and the way they distort and oversimplify everything. When I go and talk to the media, as I often do, I find that their summaries of what I’m saying are quite accurate, and that they care about getting the facts right. I, in that case, generally blame the professors, because they don’t give straight answers to questions. They ramble on, they’re nonresponsive, and then this poor person still has to eke a story out of the slim pickings you gave them. Then maybe you say that they didn’t do a good job — but you didn’t do your job, and that prevented them from doing their job.
I am not someone who feels mistreated by the media at all. Possibly it’s because the literal truth of what I’m saying is sufficiently entertaining that they don’t need to distort anything. That would be a possible explanation. I think that, regardless of their political views, they have a lot of curiosity. On top of it, I think they are like a lot of people in the world of ideas, where they have to work for markets: they’re bored because they have to keep recycling basically the same story. If you think about the feelings of someone who is on the Brangelina beat for a tabloid, you just feel sorry for them. That’s their job. They know they usually don’t even have anything to talk about. They just sort of have to come up with something.
So I think that journalists are actually usually quite excited just to hear something they haven’t heard, even if in some sense they’re ideologically supposed to disagree with it. But they’re just bored with having to repeat the same stuff over and over, and they hear something new and they’re like, “Oh my god, what’s that? That sounds like something that’s a total thought crime. But on the other hand…”
Rob Wiblin: At least it’s new.
Bryan Caplan: Yeah. “It’s new, and maybe it’s not as bad as they say. Let’s just listen and find out and talk.” The number of times I’ve actually gotten hostile questions from journalists is less than five interviews my whole life, and I’ve done hundreds of interviews.
Rob Wiblin: It’s very interesting that you had such a positive experience. I think my take on this is that journalists as individuals are no better or worse, they have no greater or lesser integrity, really, than any other people. But I feel like the business model of the news encourages sloppy work, because there just isn’t enough money in each individual article most of the time for people to spend very much time really deeply understanding the thing that they’re writing about.
So usually when things are written about topics that I know really well through my work, I feel like they’re not completely wrong, but it’s just I would be really embarrassed if I wrote something that had that much misunderstanding, that lack of depth of understanding of the underlying issues. So yeah, whenever I read something that I feel like I know a lot about, I’m like, I wouldn’t forward this to someone in order to learn about this topic. It feels like a bad high school essay to me, even though it’s not malicious. You don’t have that impression though, it sounds like?
Bryan Caplan: Well, here’s the thing: The very fact that they’re talking to me means that they have chosen a topic that I think is worthwhile — in the sense that the topics I work on, I consider worthwhile and definitely undervalued and understudied. Which means that they’re getting rid of what I see as the number one problem in the media: choosing the wrong topic to go and study; working on something that is either just not numerically important, or overdone, or just deliberately designed to promote a certain worldview. Like that school that only tells you bad things about Jews. But the very fact they’re talking to me normally means that we filtered out that greatest problem, because they’re going to talk to me about something that I consider worthwhile.
You probably have heard of the statistical terms of type I and type II errors, which people always mix up. But have you heard of type III error, Rob?
Rob Wiblin: No. So type I and type II errors are false positives and false negatives. We should probably just start calling it that, but yeah. What’s a type III error?
Bryan Caplan: Type III error is getting the right answer to the wrong question. Which means that the very fact that I’m talking to the media about anything means that they have avoided type III error, in my humble (or not so humble) opinion.
And then you were saying that they’re not better or worse than other people. I think in a lot of ways they’re much better. They’re definitely above average in IQ. They’re well above average in curiosity — especially the ones that I talk to normally seem very curious to me; they’re much better than my students overall.
And then really, the only remaining issue is the ideological dogmatism, because as you know, there is some incredible left-right ratio among journalists. Probably it’s at the level now of like 20:1. Now, we don’t actually have data on their party affiliation, but we know what donations are like from different media outlets, so we can get an idea from there. If anything, you’d think that would bias it the other way, if you think that you’ve got some rich right-wing donors. So left-wing bias is very severe.
I can hear Rob saying, “We don’t want to alienate all of our left-wing viewers.” Look, I don’t want to alienate you either, but just face facts: you’ve got an overwhelming majority of certain occupations on your side. And yeah, of course it has effects upon what they do. How could it not?
Rob Wiblin: So I think that this is an issue that’s particularly severe with the US media, and is one reason that I read fewer American newspapers, even when I was reading news. I find that in the UK, obviously the class of intelligent writers has a particular perspective on things, but I feel like it’s much less aggressive about pushing a particular ideology on readers, because I think the culture wars are not as intense here in the UK.
Bryan Caplan: Yeah, so if you look at PBS — which is basically the US analogue of the BBC; it’s public broadcasting and news and so on — the left-wing bias is so overwhelming. Essentially, every story there has a strong bias, just starting with the selection — you know, “What are the horrible things in racism that have gone on today? There’s always something terrible going on with racism, and it’s our job to make people aware of this fact that racism is just overwhelming us. It’s choking us in its intensity.”
I would also say, by the way, that if you want to just somehow balance out the level of bias that you’re getting, I often read the BBC if I’m going to read anything and just see what they are saying over there. Another news service I’m actually going to stick my neck out and defend is Al Jazeera.
Rob Wiblin: I’ve heard good things.
Bryan Caplan: People think of it as just overwhelmingly biased, and I read it, and I’m like, that’s not that biased. Basically, if an Islamist were writing Al Jazeera, I would say it shows remarkable restraint.
Rob Wiblin: Yeah. That should be rewarded. A test that I often run when I’m reading articles in the media or seeing something reported is to ask myself: If the reverse had happened, would the press have reported on it? If a study found the opposite effect, would they have reported it? If any aspect of the thing had been flipped, would you be hearing about it? And if the answer is no, then you’re not getting much signal from the existence of that story, because you’ll only be hearing about cases of A and no cases of not-A, no matter how actually prevalent they are.
Bryan Caplan: Yeah. So this has been pointed out where there’s basically no coverage of police brutality against whites in America. And we know from the statistics that that’s a very large share of it — and in fact, that the racial breakdown of police brutality very closely matches the racial breakdown in actual violent crime. So all right, then maybe it’s not racism. Maybe it’s just that there’s some fraction of police that are sadistic, but it’s not motivated by race; it’s motivated by sadism, which seems to be fairly race neutral.
Of course, there’s the even more obvious one of what you think of as racism is just an error rate that you would expect to exist in any society. I remember a tweet that I did a while back: Whenever you’re reading the news, ask yourself, “How many events like this would happen in a well-functioning society?” So it’s like someone ran over a baby. All right, well, how many babies would get run over in a society where people are actually very scrupulous in their driving and babies are well taken care of?
Rob Wiblin: It’s got to be more than zero in a country of 300 million, right?
Bryan Caplan: There’s just going to be some, because human beings are flawed. But to say this is showing something about bad driving or child neglect or anything else, maybe it’s just wrong.
Rob Wiblin: Coming back to the different reasons that people tell me for why they think it’s important to continue reading the news: Another one that I hear is, “As a smart, altruistic citizen of my country, it’s really important that I follow what’s going on, so that I can do my bit to make sure that things don’t go to hell, and I can vote well, and I can know when to speak up about things that I disagree with that the government is doing, say. And if everyone like me stopped reading news” — stopped slacking like I am — “the world would get worse, because there’d be less accountability for people doing the wrong thing.” What do you make of that?
Bryan Caplan: First of all, of course, there’s the general effective altruism point that unless you think that this is the most important problem in the universe, you should be directing all of your altruistic energy towards the number one cause. So this is probably not the number one most important cause in the universe. So take that time you’re spending reading on the news, and put it into whatever is the number one most important problem — whether it’s deworming or bed nets or whatever you have.
But another key point is: Like I said, you could believe that and still recognise that you could cut down by 90% without any loss in your ability to perform that function. And I would say you could go and cut down way more and just go and read the Wikipedia article instead; I think the political bias in Wikipedia is quite a bit lower, and the “getting the big picture” is a lot better. So I would just say you could do that instead and then you are performing this duty in a much more effective way. This is where you can tell the people that only read the daily stories, “Actually, there was a story that got a lot of coverage that turned out to be wrong. And therefore, rather than helping us to go and hold the government to account, it is scapegoating people for something that didn’t even really happen in the way that people imagine. And so it’s not in fact a big deal.”
Other things to think about are certainly if you’re just using it for voting: What are the odds that the news stories will be sufficiently severe that it would even change your vote? So there’s that.
And then in terms of the story of “Even if the people that I like are in power, I still want to be able to be monitoring them and making sure they’re doing good things”: Again, on the one hand, there’s something to the argument, but what about the point of there’s a base rate of honest, standard human error, below which you ought to actually be worrying that they’re just not trying hard enough? There’s the old joke in economics about how if you’ve never missed a plane, you spend too much time hanging around in airports. Similarly, if you were to say “I’m going to get really angry over every mistake the politicians make,” well, isn’t there just some base rate of mistakes that they will make if they’re doing a good job? And if you only punish the kinds of mistakes that are covered in the media, aren’t you going and actually giving them incentive to basically make the mistakes that come from avoiding risk?
Rob Wiblin: From avoiding acting and just being as conservative as possible, to not be associated with any actions, even if they had positive expected value — because then you’ll be blamed if they go badly.
Bryan Caplan: Right. Or especially when you realise that some actions are not counted as actions by the media. So failing to repeal doesn’t count as an action. This is a general problem with any kind of deregulation or repeal: If, after it happens, any bad thing occurs that would have been prevented, or at least even imagined to be prevented, by keeping the regulation on the books, this is seen as the hard proof that the regulation should never have been repealed — even though cost-benefit analysis might say it’s better to go and cut the price of housing by 20% and have three more buildings collapse from earthquakes per year.
Another nice illustration of this is whenever there is a disaster, the normal reaction is, “Something has to be done to stop this from ever happening again.” Again, the question is: Maybe we should just stay the course, because this is the right number of disasters to have? Which horrifies people. But look, we shouldn’t have earthquake codes so strict that no building ever collapses, no matter what, because the effect on housing costs would be astronomical. So why don’t you tell me what is the correct number of houses to collapse in earthquakes? And then we’re only going to cover it in the media if we exceed that number. You just imagine people’s heads exploding, like, “No, we have to cover every single one so that we can have the proper reaction!” This proper reaction is what makes housing costs too high.
Rob Wiblin: That reminds me of a fantastic comedy skit from Mitchell and Webb, where there’s a journalist who’s outraged that in this very large metropolitan area, there was not a single child that drowned the previous year — which just goes to show that there was an incredible overspend, an incredible level of conservatism about the construction of pools and the protection of waterways to ensure that, because in a city of this size there should be at least a few children that are drowning every year by accident. We’ll stick up a link to that one.
On the point of duty as a citizen: There is a question of maybe this just isn’t the best way to improve the world, but let’s say that you did think that you had some particular duty to your country for one reason or another. I think this can’t be what drives most people’s engagement with the news most of the time, because I noticed that when people make this argument, they don’t say, “…and this is really important, because I checked and I live in a really close seat, and so my vote really matters and it’s very likely to change what MP gets up.” For me, this argument wouldn’t wash, because I’m in an incredibly safe Labour seat in London. There’s no chance anytime soon that my vote is going to matter one iota. So there’s really no reason for me to fret that much about voting.
Bryan Caplan: And you don’t even have a primary system, where you could vote for a different Labour Party person?
Rob Wiblin: I don’t believe so, no.
Bryan Caplan: The US is quite different. In fact, almost all cities in America are one-party Democratic, and yet there is still the thing that we call “primarying” someone — when the incumbent can, via election with the general public, be tossed out of that position. So there’s basically competition even within the one-party system. It’s very likely much more constrained than two-party competition, but it’s still far from zero. Whereas you’ve got actual strong political parties, and only like 1% of Britons belong to a political party, right?
Rob Wiblin: Yeah, it might be a few percent now, but it’s a really small number.
Bryan Caplan: And they’re the only ones that can vote in these things.
Rob Wiblin: You have to pay in order to join, usually.
Bryan Caplan: But it’s a token sum, right? What is it?
Rob Wiblin: I think it varies. I think there’s £20 for one party, and then there was another one that reduced it to £3. Most people are not interested in paying £20 in order to vote for who the prime minister should be.
Bryan Caplan: Yeah, we could talk an hour about that, Rob: What this really reveals about how many damns people give about this stuff.
Rob Wiblin: So that’s on the voting side: people don’t seem to shift their news consumption enormously based on whether they’re in a close seat or whether they’re really unsure about who to vote for. It seems like the more sure they are who to vote for, the more they’re likely to consume news.
Then there’s another question: “But I could be politically active in some other way. I could write to a minister complaining about something or other that’s important that happened.” But you notice the ratio between the amount of time that people spend reading about bad things that happened versus taking action: going to an event or writing an opinion piece themselves.
Bryan Caplan: “Action” equals writing an email. The bar of action is so low. “Sent an email. I’m a real activist now.” I don’t remember what the evidence is, but I think the general view among political scientists is that the political influence of someone who writes an actual letter is enormous compared to voting. Because politicians really do have staff who tally the actual letters: “We got 50 letters this week; 30 say this and 20 say that” — and that sways politicians way more than one person’s vote.
Rob Wiblin: That’s what I’ve heard. You can have really remarkable influence doing that, and even more influence if you actually show up to a meeting session that an MP has in their constituency. Not many people are willing to do that, but they take it very seriously when someone comes and actually is willing to talk to them in person about some issue that troubles them. Someone who, for every hour they spent reading the news in order to learn about bad things that were happening in their local government, and then they spent an hour taking action — like going and speaking to their MP about it — I would say that is fantastic. But when the ratio is 20:1 or 100:1 between those, I think it makes less sense.
Bryan Caplan: Right. And you actually just touched upon another big refutation of “I’m learning about the news in order to influence government”: almost everybody primarily follows national news, where you have virtually no influence. And then in the US, state-level news gets the next level of attention. And the area where people are paying the very least attention is local news, where you have the most influence. And the same also goes for voter turnout in the US: highest turnout for presidential elections, lower for state, lowest of all for local — where you have the most say.
Rob Wiblin: So a third stream of defence that I hear is to say that, and maybe this is the one that I’m most sympathetic to: The one thing that I feel maybe I’ve lost from not reading the news is this kind of frenetic energy that you get from engaging with live events. On the one hand, it’s kind of anxiety and it’s kind of feeling bad; it’s kind of feeling overwhelmed by events. But there’s also this kind of enthusiasm, energy, uncertainty. It’s like watching a sports match in real time. And you’re like, this is kind of bad because I’m worried that things will go badly. But also I’m so engaged, and this is really activating me. I think on balance I don’t really want to have more of that in my life. I value the calm. But I do slightly miss that at times. I guess there were times when that was fun.
Bryan Caplan: I think that’s the secret to the business model. It can’t be that people are made happy by it. So it’s got to be that you’re tapping into some other emotion that is at least partly positive. It’s anxiety, but yeah, it’s the frenetic anxiety. It’s being part of something, it’s flow. So yeah, definitely there’s a lot of flow from news, but with moderate effort, I think you can find some much better substitutes for it.
Really, a much better substitute for the news is just friendship, just having people that you would like spending time with. This is the main secret of human happiness. People are primarily happy when they’re spending time with people whose company they enjoy. You might compare the news to spending time with your cousin that you hate, but you’ve known each other and you sit there pushing each other’s buttons. Wouldn’t you rather be with a relative that you have positive flow with than this negative flow? It’s like, yeah, well, it’s too hard to find that. But if you don’t recognise what you’re really looking for, you’re really not likely to find it.
Terrorism and the news [00:49:12]
Rob Wiblin: So here’s a controversial point that Rolf makes in the book, to begin to close out this section: that reading the news and journalism and the media in general are the direct cause of terrorism. The notion there being that terrorists commit terrorism in large part to get massive media coverage. So when the media provides massive media coverage to terrorist attacks and we choose to read about it, that motivates further terrorism. What do you think of that argument?
Bryan Caplan: I think it’s got to be at least 70% true. If you were to just get rid of news entirely, there’d still be some terrorism. They’re hoping to spread it through word of mouth or whatever. But yeah, obviously they’re highly motivated by these social dynamics. It’s hard to see how you can doubt it. If you were to just go back historically, were there things that we would classify today as terrorism before there was any mass media?
There are a few things that you might go and count, but really it’s anachronistic, because in the past there’s a pogrom and they aren’t doing it for the purposes of getting a reaction; they’re doing it just to kill a bunch of people that they’re mad about in that area. You might say pogroms have a motivation of, “We go and massacre a couple of towns, and this will lead to mass flight from the country, from all the other people that are worried about it.” And that’s terrorism in a sense.
But this is actually one of several examples of things where, if you know the broad span of human history, mass media seems to just change the way that bad things happen. Closely related to terrorism are the motivations and dynamics of the way that wars played out in the past. Normally, the way that wars used to work is there’d be two countries, one would attack the other, one side would be decisively defeated, and then there’d be a peace treaty where they would go and “permanently” hand over some land to the other side. That’s the way that it worked. And in those days, you could very plausibly see they’re fighting because this side wants the city of Cologne with its salt mines or whatever.
Now, if you look at the modern world, there are a lot of wars where really the whole point of it is just to antagonise people. There’s no actual goal, there’s no resource anyone wants, there’s no plausible risk. And furthermore, there’s no actual resolution. The normal result of a war in the modern period is what was called a frozen conflict zone — no peace, no war; we have a ceasefire, and that’s the end of it — until, of course, war fighting breaks out again.
So if you just look at, prior to Ukraine, all Russian military interventions, they basically go and there’s some incident and they grab a little piece of territory and they basically just give the middle finger to the rest of the world. And then there’s a ceasefire, and that’s it. It’s just not the kind of war that used to be fought. There’s very little strategic, military, economic point to the territory that’s seized. It’s more of just showboating for the media, and saying, “There were some ethnic Russians there. You can’t push us around. Ha!”
Rob Wiblin: I guess the Mongols massacred people when they resisted them in order to induce other cities to surrender. You could think of that as a sort of terrorism.
Bryan Caplan: Actually, my sons, who know this history quite a bit better than I do, say that a lot of that is exaggerated, and the Mongols would often massacre people who surrendered immediately. Like, what were they thinking? They were just some violent dudes.
Rob Wiblin: Barbarians.
Bryan Caplan: They were really barbaric. What do we have to do to not get massacred? I don’t know. The men need blood.
Rob Wiblin: Live somewhere else, yeah. An important question here, if we’re trying to just take this from a very pragmatic point of view, is: If the media just had a blanket ban on reporting of terrorism in the long run, how much would we expect terrorism to decline? I guess you were saying something like 70%. That seems kind of reasonable to me.
Bryan Caplan: Yeah. Presumably you wouldn’t be able to go and ban people from emailing things. There’d be viral emails.
Rob Wiblin: Exactly.
Bryan Caplan: I believe there’d be a lot of substitution, like, “The terrorism the media doesn’t want you to know about!” So there’d be kooks with internet lists and you’d have to have a real police state to go and totally crack down on it.
Rob Wiblin: Yeah. So Rolf refers to an interesting study on this question in the section on terrorism, where the researchers try to find a source of random variation in how much the media covers terrorist attacks. They look at a period of substantial terrorism in Iraq and the Middle East more broadly during the 2000s. Sometimes when there was an equivalently bad terrorist attack, there would be a natural disaster or something else that would push it off the front pages really quickly, and other times there wouldn’t be another news story to crowd it out. I think this is really hard research to do well, but the study claimed that on occasions when a terrorist attack got more coverage, that induced more terrorist attacks over the following week, relative to cases where the previous attack had been pushed off the front pages. And it just makes a tonne of sense.
So the media does actually coordinate to a point to not cover some things. For example, they no longer give graphic descriptions of teen suicides, because they think that induces other teens to commit suicide, and there’s evidence of that. They also, at least in some countries, don’t give the names of mass killers anymore because they thought that was inducing people to commit mass murder, getting copycat cases. But I don’t know of any effort to get the media to stop covering terrorism, even though the argument is very much the same here. In this case, it’s explicitly targeted more or less to try to get media coverage; the media could decide that we’re just not going to cover this. Because, apart from anything else, it doesn’t actually kill that many people in the scheme of things, and it’s a completely manipulative action just trying to pull the media’s strings. So I think it’s actually kind of shameful that The New York Times and other newspapers have never thought about doing this.
Bryan Caplan: Your random variation point makes me realise that there is one important counterexample to what we’re saying, Rob: When there is an overwhelmingly horrible thing in the news, it is standard for the worst dictatorships in the world to immediately go and carry out some horrible atrocities, knowing they’re not going to get much attention for them. If I remember correctly, when there was some major horrible thing in global news, Eritrea went and executed a bunch of political prisoners, and they didn’t get the coverage they would have gotten normally. So it does say that there are certain kinds of bad things that the media does exert restraint on. Just in the interest of balance, I think it is worth pointing that out.
Now, as to what would happen in a world where almost no one was following the news, maybe it really would be true that those kinds of countries would just feel like they can get away with almost anything. So it’s like, maybe we only talk about the very worst things. But it’s tough.
Enjoyment, empathy, and charity [00:55:44]
Rob Wiblin: All right. We should bring this one to a close. I do want to encourage listeners to go away and listen to this book, Stop Reading the News.
Bryan Caplan: I mean, just the point of cutting it down by at least half: Try that, see what happens. That’s one where I just think that it’s almost impossible to argue against that, other than, “I really enjoy it.”
Rob Wiblin: Yeah, but then you have to think, “Am I really enjoying this? Is this enjoyment or is this something else?” I think it’s harder actually to cut it down by half than it is to almost cut it out completely, because it’s so grabby, right? Once you’re on the home page, then you’re seeing all of these things vying for your attention. I think if someone finds it hard to cut down by half, then I would suggest that they actually try the more extreme option of just cutting it out completely. Just get a blocker that stops you from visiting the home pages of news sites. There’s lots of apps that do that.
I might whet your appetite with a couple of other points, a couple of other criticisms of news that Rolf makes. One thing we didn’t talk about is that many office workers, me included, are in the habit of checking the news every couple of hours while they’re at their desk. This is super distracting. It means that you lose your train of thought and you have to try to restore it all back into your mind afterwards in order to try to be productive again.
Bryan Caplan: I mean, probably people would be looking for some other distraction. I know that’s how I am.
Rob Wiblin: That’s true. Yeah.
Bryan Caplan: There’s this old piece by Robin Hanson where he went over the effect of breaks on productivity. And it really does seem like breaks are a good idea for total productivity. It may be that if you go and look at something really horrible, then that’s not giving you the refreshment that you actually are looking for, or it’s staying with you after you get back to work. But the general utility of breaks, purely from the point of view of total productivity, seems fairly well established. In particular, some of the papers that work drew on had very precise quantitative estimates of how much your productivity declines as you go without a break. And then I think they actually do the math of working out the optimal timing and duration of breaks.
Rob Wiblin: That’s a pretty good point that I hadn’t thought of, to be honest. Another one is that news gives us the illusion of empathy. I think people often feel like they’ve done something useful: they’ve helped people by reading about something awful that’s happened elsewhere in the world; you’ve read about this earthquake and so you care. But in fact, real empathy requires action. Real empathy implies actually doing something to help people, not just the voyeurism of watching awful events overseas — and that is like 99% of what we do in these cases.
Bryan Caplan: Yeah, sort of the idea that you’ve got some mental budget of how much time you’re going to spend on altruism. And if this fake altruism feels like altruism to you, then it’s actually cutting down on something real. Again, this is one where I think that someone could fairly say that one of the main ways that we raise money for horrible tragedies is by showing them some terrible news. But then there’s the EA point of: But are the kinds of tragedies that get a lot of news coverage actually the ones that should be getting the most money?
Rob Wiblin: Yeah. And maybe that does have the effect on other people, but couldn’t you save your time completely and just decide to give some money to charity, and give it to the thing that’s most useful? And then you don’t have to watch any of it.
Bryan Caplan: Right. Again, I think that there, the usual response would be something like, “Well, that might be fine for Martians like Bryan and Rob, but for a normal person, either they will give nothing, or they’ll give because they’ve been moved by some gripping images.” I think there’s probably something to that, honestly.
Rob Wiblin: Well, I think we’ve done news reasonably well, so I’d be interested to hear from listeners if any of them decide to cut back on news, and how it goes for you.
Why Bryan is sceptical of AI risk [00:59:03]
Rob Wiblin: Let’s push on to another topic, which is artificial intelligence. People have been hassling me to do an AI section with you since 2018, and we’re finally doing it, so fingers crossed we won’t disappoint people. Unlike the previous topic, this is one where we have pretty different views, and I’m keen to explore that and understand your view better.
I guess at a high level, I think the impacts of AI technology over the next 40 years are going to be very big. And I’m anxious about it, because I think there’s a more than 10% probability that it could be extremely good and a greater than 10% probability that it could go very badly, in one of many different ways. By contrast, I’d say you think the impacts of AI technology are going to be more modest, and you think they’re many more times likely to be good than bad, so you’re not feeling that anxious about it.
Bryan Caplan: Yeah. You’re right, my negative tail risk is very low.
Rob Wiblin: So from listening to your previous interviews on this, I characterise you as mostly drawing your view from what you think of as common sense that the future won’t be radically different from the past. And you’re thinking of historical analogies to AI, like the introduction of electricity or the creation of steam engines and so on. And I think that the idea that AI could lead us in a very bad direction, in my view, is actually super common sense, more common sense than the idea that we should be really confident that it will go well.
I’m going to try to put some arguments along those lines to you. I want to steer us away from the idea of superintelligence and misalignment, because while I place some weight on them, they’re not doing most of the work to make me anxious. I think that people should be nervous about the future even if you’re sure we’ll never have intelligence that’s greater than a human level, and even if you think AIs with their own independent agendas in conflict with humans are impossible, something that could never happen.
Bryan Caplan: I’m puzzled by this idea of we’ll never have intelligence greater than human level. I would say that this calculator here, this calculator from 35 years ago, from my high school chemistry, in a sense, it’s already smarter than any human.
Rob Wiblin: Yeah, at a specific skill.
Bryan Caplan: In a fairly wide range of skills, it’s just better than any human will be. And this is from the 1980s. And people say, “Bryan, you’re still using a calculator from the 80s?” Yeah, it’s that good. I like that calculator. You’ll have to pry it from my cold, dead hands.
Rob Wiblin: I agree. It’s clear that AI is going to exceed human capabilities in particular areas because it already does. But hypothetically, let’s say that all it does is pull even with a smart human in all these areas. I think there are still reasons to think that could go very badly.
So with that out of the way, I should give you a chance to lay out your overall take. How do you think AI technology is likely to play out in the coming decades? And maybe what’s the most likely way, in your mind, for the effects to be negative?
Bryan Caplan: So I have changed my mind on this a lot over the last year, because until a year ago, I just considered this to be total vaporware. Do you remember this expression? Vaporware?
Rob Wiblin: Vaporware, yeah. A game that they start working on and then it never appears.
Bryan Caplan: Yes. So the number of times I’ve been told by people, “Oh my god, the AI is so fantastically good,” and then I look at it and it doesn’t seem to work at all; what are you talking about? Or people go and say, “It’s gotten so good at chess.” Yeah, well, that’s just what I would expect that it would be good at. And who cares? Chess isn’t important. It’s like, “Now it can do Go.” Yeah, Go is an even dumber game than chess. And I have actually told friends that when it’s good at Dungeons and Dragons, let me know. That’s a real game. That’s a game that I actually care about, that really is essential to being a human being.
So that was my view for a long time, because people just made claims that seemed unlikely to be true, and I checked them out and they were false. And then when GPT-3 came along, once again, people started saying, “This is fantastic.” And I said, let’s see what happens if I give my labour economics midterm to GPT-3. And it got a D. Then the reaction from some fans is, “To get a D on that exam is an incredible achievement!” I’m like, do you realise how low my standards are? No. A D is not an incredible achievement. It’s basically what you get for just mentioning some key terms and rambling on a bit about the question. So that was where I said this is basically vaporware #173.
But then GPT-4 came out, and I gave it the same exams and it did great. It got As. And I’m like, all right. And this is in the course of three months: over the course of three months, it goes from a D to an A. I remember when I was arguing with my friends, I was even saying, “So you’re saying the next version is going to do a lot better on the test?” Tyler Cowen just said, “Your tests don’t count for anything anyway. Who cares about your stupid tests?” I care about my stupid test. To my mind, these tests show whether people are really thinking about the subject, whether they understand it on a deep level, where they can take what I have taught them and apply it in ways that are not just rote memorisation. And of course, since I’ve been giving these tests for 25 years, I feel like I’ve got a really good sense of what level of thought and depth goes behind certain levels of answers versus others. I just feel like I understand this metric much better than I understand most other metrics.
Furthermore, I consider it to be a lot more impressive than being able to do well on an SAT. They’ve got hundreds of SATs that they can train off of; it’s not that surprising that you can get a machine to go and do well on a test that it has all this training data on. But on the other hand, my tests are either not on the training data, or at least there’s just not that much of it. So if a machine can do well on it, then that would really say something. It’s saying that it’s actually got some kind of general performance capability.
Anyway, so I went and gave it a test three months later with GPT-4, and it got an A. And I will say my jaw did drop. I had a bet on it. The bet doesn’t mature until 2030, and it’s a higher bar than just getting an A once, so I still feel like maybe I’ll win just by virtue of bad luck for the AI. But I will say that, in terms of the substance, I think it shows that the guy who bet me was right.
So that’s where it really changed my mind about the ability to go and perform well on tasks of this kind, which do mean a lot to me. I mean, I do actually consider the ability to go and learn material to the degree where you can get an A on one of my tests, that is something where… I’m not going to say it’s what separates humans from the animals, but at some level it’s what separates someone that I want to have lunch with from someone that I don’t want to have lunch with: whether they are capable of learning this material to the degree where they can get an A on one of my tests. It’s not like I don’t want to talk to you if you haven’t taken the class with me, but if you couldn’t, after taking the class, go and do well in the test, then it’s just there’s something about you that isn’t engaging to me anyway.
So I did change my mind about the performance there. But where this all comes back to me is base rates. Now, Scott Alexander has this phrase of “base rate ping pong,” where he says anytime someone makes a base rates argument, you can always make a different base rate argument, and then base rates really aren’t meaningful. So I say “base rate” for people saying something is going to be the end of humanity, and then how often have they been right? They’ve never been right. But I could have the base rate of a new weaponisable technology gets released, and then we do the average of how many people it would kill and then that winds up being high. So it is true that you can go and do this. And this really actually is almost directly out of Hume’s problem of induction. Not quite the same, but still it is very much in the same ballpark of: for any observation, what is the correct generalisation to draw?
Now, it is hard for almost any rationalist to really stick with this base rate ping pong nihilism — because it is a nihilistic view. Remember, one of the main lessons out of books like Superforecasting is that an essential skill for good forecasting is thinking in terms of base rates. So just to go and say, “No, you can’t trick me with this base rate trick. Base rates, we can just do ping pong all day and there’s no such thing as a right base rate or even a better or worse base rate” — if you’re going to say that, then the whole rationalist project crumbles. It is very close to saying we can’t learn anything from experience, per the original Hume’s problem of induction.
And if you don’t think you can learn anything from experience, there’s no EA project. And of course, there’s no way of carrying on at all, other than doing what Hume did and saying, “I just kind of pretend that I don’t know this stuff and have a beer with my friends and try to get through the next day.” So I would say if we’re not willing to go and think in terms of base rates, then we can barely converse anymore, if we’re going to say that there’s just no such thing as a more or less reasonable base rate.
Rob Wiblin: I definitely wouldn’t say that. I think that this question of, “What is the right reference class, and what is the right base rate?” — like, “How surprising is this idea on its face?” — is a really important one. I mean, what you have to do is just say, here’s the five — or if you’re lucky, 10 — different plausible reference classes, or the best-matching reference classes that you can fit this in. And they’re going to give a range of different views on how improbable a claim is on its face, and then you maybe want to average across them. I don’t know, take the median — I’m not sure exactly; usually we don’t get that precise. But I think with this, you’ll find with some base rates it seems very unlikely, and then with other base rates it seems plausible enough that you might update to it seeming reasonably likely based on the more inside view considerations about the technology.
I would like to understand better to what extent our different predictions are driven by different ideas about whether AI models will be capable of doing specific things anytime soon, or whether we agree on that but we disagree about whether it would be dangerous, or at least worrying, if they did have those capabilities. So I want to suggest a bunch of different hypothetical AI capabilities and see first if you think they’re plausible, and then if you’d worry if they were.
So: Do you think that by 2035 there will be a generative AI model that could guide a team of 10 biology graduate students through the process of designing an extremely deadly and extremely contagious viral pathogen?
Bryan Caplan: This is one where the main thing in my mind is the marginal risk. If you were to give me just the probability that those humans could do it on their own, and then say how much extra risk is the AI adding? That’s the way that I’m always coming at this. As to whether it could do it, this is one where I would just say that I’ve got to find out what’s going on in biology first and see how well humans are doing. In terms of how much the AI would augment the preexisting capability if they were to try, that’s where I can see maybe this is going to go and double that risk. But then I would say that I think the initial risk of this is sufficiently low that I’m not that worried about a doubling.
So yeah, I do think it’s plausible this could speed up the process. It’s just how worried are you about the idea of biologists trying to design super germs? That’s where I’ll say I’m a little worried. Basically I would only be worried about a government doing it, because it would be such a foolish thing for biologists to do on their own initiative, and they’ll end up in jail and worse. So I’d really only be worried about a government doing it. And then a government doing it, they would know that this thing is all very likely to affect their own population too. So it seems like a pretty bad thing for them to be working on. But then again, governments will do things like that, despite the fact that they’re a bad idea.
So I think it’s plausible that it would augment human ability. The idea that it would take humans’ ability to do it from zero to something noticeable, that’s where I would doubt it.
Rob Wiblin: Yes. I think about this as kind of lowering the threshold of funding and staffing that you need in order to be able to do this. So North Korea, I think, could definitely already do this. Indeed, it might already have done so. And maybe the biggest risk that we face today is that a bioweapons programme in North Korea accidentally leaks a really dangerous pathogen that they’ve created. I agree that it’s certainly possible, but it seems really unlikely.
Bryan Caplan: Well, when you put it that way…
Rob Wiblin: I think at least for today, that is one of the biggest risks that we face. I mean, we also know that Russia had an enormous bioweapons programme that they hid for decades, and it probably is still continuing in some form. So it’s possible that Russia could accidentally end up leaking a pathogen that they’ve created, probably a bit less likely than North Korea. But I think of it as: A team of 1,000 biologists working on this for a couple of years, I think now they could successfully do this. As the technology advances, you get down to 100 people, maybe they could do this. And it keeps going down and they need less time, less expertise, less amazing materials.
I reckon generative AI models at the moment, they’re able to give kind of useful advice on this to someone who’s a nonexpert. But I think that they’re getting better at actually being able to come up with ideas for how to achieve these goals that a smart biology grad student wouldn’t come up with. My fear is it’s going to lower the barrier to entry for doing this, such that a small group of weirdo motivated people could have a crack at this.
Bryan Caplan: My view there of the technology would just be that there’s the ideas about how to do it, but then there’s actually practically going and getting the real world physical resources together to move ahead with it. And I would think that those are so built into this project that you could go and make the cost of getting the ideas down to zero and still not have a very large change in the quantity — because you still need to get the labs, you’ve got to get the actual physical capabilities of doing these things. In the same way that if you were to go to an AI and say, “Tell me how to go and build a nuclear bomb using materials only available in my hometown,” the AI could be as good as you want, and there just isn’t a way for one person to go and do this. You just need way more physical materials. Even if you had all the blueprints, it would not mean that you would be capable of doing it.
You’re right about moving the thresholds marginally, but in terms of how much is it in multiplying the risk, you start with some base risk of human beings doing it on their own without any AI, and then go and say, how much is AI boosting it? This is a general issue with what economists call production functions, which is normally you actually need a whole lot of different things in combination, and there’s limited substitutability — where you can’t just go and say, “Let’s double the number of workers and halve the amount of physical materials in a car”; you can’t make the car work with half the physical materials, it doesn’t matter how many people you’ve got working on it.
I would say that that is a lot of what’s giving me peace of mind on specific kinds of terrible things that you could do. Yes, human beings could do them already if they got the right materials. But it’s not like there’s just one scarce material. Nor should you think of AI as like the one ring to rule them all: it’s the resource that gives you all the resources. You imagine it’s an AI where you say, “Tell me how to get plutonium easily.” “You get plutonium easily this way.” “OK, tell me how to get…” It isn’t really like that. There’s just a lot of things that are just inherently physically difficult that an AI can give you great advice, and yet it’s still not that helpful.
Rob Wiblin: Yeah. For what it’s worth, experts who worry about bioweapons and people who have looked into this, they’re definitely troubled. They can foresee that within the next 10 years, these things could be really quite helpful in instructing you on: What sort of genetic sequence do you need to get? How do you stitch it together? How would you actually embody it in a viral shell?
Bryan Caplan: What I noticed, though, is that they usually do use this grammar of “could” be really helpful. It’s very hard to get any of these people to actually say what’s the current probability? What’s the probability when AI gets a lot better? And honestly, here’s the thing that really strikes me about physical scientists: they are not good at thinking probabilistically. They seem to be very subject to these normal problems of either rounding low risk down to zero or up to 1%.
Rob Wiblin: I think that the people that I’m thinking of would say the probability that it’s going to be very useful in the next 10 years is like 50%. So maybe you get like a 50% discount because it might not work, but they think it’s really quite material, and that we could get down to something where less than 10 educated people working in a lab and just kind of throwing it together and not telling people what they’re up to might be able to do this. And then there’s quite a lot of labs around in different countries, so that worries people.
Let’s try a different one. Do you think that by say 2060, we’ll have AI or machine agents that resemble people, in the sense that they have their own goals that they pursue independently? They might have physical bodies and they might have legal rights, but not necessarily.
Bryan Caplan: I’ll give the hardest no on whether they’ll have rights. This is where I will say that human beings just regard nonhuman things, even nonhuman animals, but definitely anything that, once you open it up, you can see is what we think of intuitively as a machine, with gears or wires inside of it, as fundamentally, morally different from biological beings, in a way that I do not think is very culturally relative or culturally specific. It is very deeply built into the human mind.
I’ve been arguing this with Robin Hanson for quite a while, and I say, “Look at every single sci-fi show where there’s a robot, and then people care about it, and then you open it up and they go, ‘Oh, it’s just a robot.’ In that case, they don’t care anymore.” And Robin is very insistent that it won’t be that way. It’s like, this makes sense to everybody else, Robin.
In terms of having their own goals: Does it count as having your own goal if someone can go and give you a goal, and they can change it again if they want to, but sometimes they just let you go around having that goal?
Rob Wiblin: Let’s say that they have their own goals to the same extent that humans do: they’re influenced by other people, they can be kind of partially instructed, but also they’re not completely.
Bryan Caplan: But it’s not possible for whoever designed them or built them to go and edit and re-edit them and then turn them back to the way they want?
Rob Wiblin: In practice, they are operating in the world, going off and doing things, and not being closely monitored at that level.
Bryan Caplan: But if someone is unhappy with their performance, they can then go and re-edit?
Rob Wiblin: Potentially, in the same way that you could kind of pick up a criminal and try to put them in prison.
Bryan Caplan: But not in the same way that you could go and, say, change the preferences on a computer? Yeah, so I’ll put that at really low. I think under 1%. People don’t want to design things like that. They want to design things that they can control to make them do what they want. It’s not like reprogramming a criminal. It’s like a computer, where if the computer is malfunctioning enough, you just shut it off if you don’t like what it’s doing. Basically, this just comes down to they’re going to be built by humans for human purposes. Humans don’t want it to be able to be like that.
They might want to give it the illusion of that. It might be that, like in a video game, you program the computer to try to beat you because it’s more fun if the computer is trying to win. But it doesn’t mean that you’re not capable of going in and re-editing the program, or at least that the programmer can’t go and edit the settings so it just surrenders to you unless you crush it. The closest thing to this that I did a lot was playing an enormous amount of the game Civilization II. And in there, you could create scenarios where you could edit the preferences of the AI and control it in a lot of other ways, or you could even just force its hand and make it do things if you wanted to. And this is really almost the opposite of a criminal. In fact, you have absolute control, even though it is not fun to absolutely control it most of the time, because then it’s not a game anymore.
Rob Wiblin: OK, so you think this is really unlikely. Hypothetically, if someone came back from the future and said, yes, by 2060, there are machines that act autonomously in the same way that, say, an employee acts autonomously of their boss. In that case, would you be more worried about how things might play out?
Bryan Caplan: Let’s see. Of course, I don’t want to block the hypothetical, but my reaction is, “I just don’t believe you. I don’t believe you’re a time traveller,” first of all. But even if you convince me you’re a time traveller, then my mind gets very open — because if you can do time travel, Jesus Christ, what couldn’t you do if you could do time travel? If just somehow we know that it will be so, I would say that I would get at least moderately more worried.
Then it would come down to, even there someone designed them to go and be a certain way, but after they’re designed, then they sort of lose control over them. Then my next question is: There’s no kill switch? There’s no way to just shut them off if you don’t like what they’re doing? And if you say, no, there’s not, that somehow they figured out a way to shut that off, then I guess I would not be losing sleep, but it definitely would multiply my worry quite a bit, just because I would be saying, what exactly are their goals then?
Rob Wiblin: OK, here’s another one. Do you think that by 2035 we’ll have AI systems that, if they were instructed to, would be capable of hacking onto computer servers in many different places in the world, copying themselves onto those computer servers, and usually remaining undetected for months while in some kind of hibernation mode, but able to reactivate themselves if the situation called for it?
Bryan Caplan: I would say that if nothing else changes, yes. But at the same time, if you have that technology, it’s going to also be capable of doing antivirus stuff. So the idea that I should be more worried in that world than the world of today, it’s not at all clear to me. It’s like, basically which function is going to improve at a faster rate? And I don’t have any strong intuition about that, so it’s definitely not something that would worry me. I would say that basically it’s an arms race, and maybe things will be better than they are, rather than worse. You might just say, well, anytime there’s greater uncertainty, there’s a greater chance of something going really wrong. And I’ll say, I guess that’s true, but at the same time, we just have a preexisting risk of viruses, so maybe I should be worried about those.
Rob Wiblin: Yeah, I agree with what you’re saying, that it’s a question of offence versus defence, and we’re not sure which one is going to be dominant at different periods of time. Let’s say that you just somehow knew that for some significant period of time — a decade, say — offence on this would be dominant, and it would be possible to do this. Would that make you more anxious about how things would play out?
Bryan Caplan: Yeah, because we’ve gotten so dependent on computers. So if you actually were able to get a big disparity, then you could go and hack the world or try doing ransomware for the world or whatever. And yeah, it’s a big deal. It would be facile to say we’ll just go back to the world of 1990, because we’ve really integrated our society with this technology, so there’s going to be a harsh transition period of starvation, probably.
Rob Wiblin: OK, here’s a different one: Do you think that by 2060 we’ll have the necessary AI models and computer hardware to run the equivalent of the mental work of 10 billion smart human adults working hard at their jobs?
Bryan Caplan: 2060, that wouldn’t shock me. I would want to know a lot more about what’s even going on right now, but it wouldn’t shock me.
Rob Wiblin: And does that framing of things make you worry at all? Thinking that the person-equivalent population of machine intelligences, capable of doing most of the stuff that humans can do, might outnumber us just in sheer numerical terms?
Bryan Caplan: As long as there’s human beings who have programmed them to do what human beings want, then I would not be worried, no. There’s always a little bit of residual worry, of course. The world’s kind of confusing. I guess the main thing, this may seem strange, but I would say the thing to really worry about is probably the thing that nobody’s thinking about.
Rob Wiblin: We’ve got to try to think about that.
Bryan Caplan: There’s this famous fake graduation speech, and it just says that the things that are really going to cause horrible harm to your life are things that never even occurred to you would be an issue. It’s much more eloquent than that, but if you track it down, it’s a great spiel of personal advice.
Rob Wiblin: We should have some people working on that. This is a good moment maybe to bring up, I think you know of Holden Karnofsky.
Bryan Caplan: Oh yeah, of course. Hi, Holden.
Rob Wiblin: We did an episode with him earlier in the year. You’ve expressed a lot of scepticism that superintelligence would change everything: even if a machine has a very long period of training time with much more data than any human could consume, and has exceeded human capabilities in many domains, there’s just only so much that you could do with intelligence. There’s only so much prediction that is actually possible, no matter how smart you are. There’s only so much manipulation that’s possible. And I agree with that. I guess we don’t know exactly where the limits are going to lie; there’s uncertainty about that, but I think it’s a great point.
Bryan Caplan: Yeah. Meanwhile, I’ve got great confidence that the most incredible intelligence of the universe is not going to be able to construct 10 words that will make me kill myself. I don’t care how good your words are. Words don’t work that way.
Rob Wiblin: I agree. So Holden’s argument is that even if we don’t have machines that are more capable than humans — if they’re just as capable as smart, generally competent humans — machines have this advantage over human beings: they can increase in population much faster than humans can. Basically, humans’ population growth rate is about 1%. It takes 15-20 years for a human to become useful. But by contrast, AIs, as software, you can just copy them onto additional hard drives. You can replicate them super fast. At the moment, the population growth rate of thinking machines is like 100%, while it’s 1% for humans.
So you could potentially end up in a situation where simply humans end up losing out because they’re just massively outnumbered by the sheer amount of thinking and action that is going on. This can involve a lot of different agents, or it can involve one agent that thinks incredibly fast relative to a single person. It can do so much work in the same amount of time, and it can react really quickly because it’s got access to so much compute. I haven’t heard you respond to that general concern of just being overwhelmed by force of numbers. Does that give you any worry about how things might play out?
Bryan Caplan: No. As long as we are sticking with the program by humans to serve human ends, then I’m not worried at all. I’m already greatly outnumbered by machines. And why am I not worried? Well, all the ones that I’m interacting with were designed by human beings to help me and do what I want. There are, of course, some other scary machines that some hostile humans have. You’ve got the Russian nuclear arsenal. But there, I’m not worried about the weapons; I’m worried about the people. Of course, you say it’s kind of an interaction of the two. Yes, that is an interaction of the two. But still, the fundamental issue is: Do they want to use what they’ve got to kill me? It’s not that there’s so many of them that it’s going to kind of take control of the situation.
That does not make much sense to me. Again, unless of course you think about them having this literal true autonomy — not fake autonomy, where you say, hey, give me a challenge in a game — but rather the autonomy where you have just lost the edit capabilities to go and change what it’s going to do.
Rob Wiblin: So it is interesting that you’ve made arguments that it wouldn’t be possible for machines in various ways to take over. But then, even if you think that they could, in principle — if they were motivated to do so and coordinated to do so — it doesn’t worry you, because you think it’s so unlikely that they would generate independent goals or hostile goals to humans. So you’ve kind of got this two-level thing, where either one by itself changing doesn’t actually make you worried.
Bryan Caplan: Right. And especially, I’ve always got this last one, that I know people consider simpleminded, but I think it’s reasonable: the kill switch. When have human beings ever built a machine that we don’t have some way of shutting off if we’re not happy? Well, Americans don’t have a way of shutting off a Russian incoming nuclear weapon. Fine, we don’t have that. But somebody over there has that capability. And you can’t reverse it one second before it lands.
Rob Wiblin: I think that is a weak argument, because I do think that it will be possible for these machines, if instructed or so motivated, to break into computer servers all over the world and put themselves on there, and basically resist being turned off because they can just spread all over the place, extremely hard to find, extremely hard to deactivate.
Bryan Caplan: If they’ve got their own motivations and they’re totally acting in defiance of what their designers want?
Rob Wiblin: Or if they’re merely instructed to do so.
Bryan Caplan: But then remember, we’ve got the offence-versus-defence story again. We’ve got which is going to go faster: our ability to use AI to prevent this from happening, or the ability of AI to do it?
Rob Wiblin: I agree that it’s unclear whether that will be possible, but I think it’s like one in two, maybe one in three likely that there’ll be… I mean, I think this is one of the things that people really need to work on. And it is one thing that people are shouting about, to say this is one way that things could get out of control: not necessarily by misalignment, merely by being given harmful instructions. But if it is the case that we can secure computer networks — we can have defence dominance on information security such that it wouldn’t be possible for a motivated ML system to break into other computers and access significant compute without it being easy to turn them off — if we could do that, then that would make things a lot safer.
Bryan Caplan: Or at least it’s a marginal problem, rather than something that’s civilisation-wrecking. No matter how good defence is, there’s going to be bugs, and you can see something getting through. But the question is: Is it where, as soon as you have one breach, that’s the end of your whole society? I can imagine that scenario, but that’s the kind of thing where I go back to base rates, and just say: When in all of human history has it really worked that way? That one little mistake brings the entire system crashing down?
Rob Wiblin: Let’s talk about base rates now. Let me give you a reason to think that “AI technology might radically change the world” is actually more consistent with the commonsense outside view than you might think. Actually, before that, in 1800, imagine that someone had tried to offer a prediction of what the world in 2023 would look like, and then they describe the world just as it looks today, including that it’s common to fly around the world, we’ve got instant communication with anyone, enabled by these satellites that travel at 7,000 miles per hour, and you can receive movies into your hand through the air.
Bryan Caplan: What’s a movie if it’s 1800?
Rob Wiblin: It’s a play. It’s a play, yeah. We’ve got quantum computers. We’ve got the James Webb Space Telescope on the other side of the moon [Editor’s note: it actually orbits the Sun–Earth L2 point, about four times farther out than the Moon], taking photos of the beginning of the universe. We’ve got automatons that do a lot of work, like washing machines and dishwashers and robot factory workers. Do you think that an 1800 Bryan Caplan would have said that most of those things were silly and unlikely, because they were exciting, fanciful ideas that kind of grabbed the human mind, but that we should bet on the world being boring and fewer things changing than that?
Bryan Caplan: It’s really hard to give an honest answer, but ultimately I think no. Because this is the year 1800 — so we’ve got some primitive steam engines, I believe, already at that time, and I know there’s birds. A lot of these things seem like they are the natural progression of a technology. The things that would have really surprised me is if they were to say things like, “All human beings get along in the future.” Like, “What? All human beings get along in the future?!” If they had said, “War will go down in frequency by 90%” that’s much more believable.
Actually, a lot of my heuristic would be like: How many of these things are on the list of mythological magic that human beings have been dreaming about for a long time? I think I would go through and say the match doesn’t seem to be that close. There’s flying; flying is one where human beings have been mythologising about flying for a long time, so I’d say people really want to believe in that. But on the other hand —
Rob Wiblin: There’s sending information invisibly through the air. There’s automatons that do stuff.
Bryan Caplan: Sending information invisibly through the air is not really something that human beings actually thought about. It’s hard for me to remember any mythological stories about it. The things people think about are super strength, flight, invisibility, immortality, regeneration. Regeneration is a big one. You get a wound and then the wound instantly heals.
So I would say that these things, while marvellous, are not the standard things that people have been dreaming about for a long time. So that would make me think that this isn’t just some very culturally specific thing; this is the way that I would think technology would develop. It’s not just that we fulfil the dreams of people from 2,000 years ago; it’s that we find there are some new things that we just didn’t even think were possible and then we do them. I don’t think that I would be so incredulous, even at an incredible transformation like that, but maybe that’s just self-delusion.
Rob Wiblin: The global economy is 1,000 times larger.
Bryan Caplan: If you were just to say that there’s almost no hunger anymore. Now that has been a dream for a long time, and to say we’re going to get rid of hunger. Although it’s also one where I guess we’ve already got Malthus in 1800. So I’d like to imagine that I would not have been sucked in by Malthus at the time, but maybe I would have because the argument is pretty convincing.
Rob Wiblin: OK, so the point I want to make is that there’s a sense in which it’s safe to bet on the world being boring, but I think over a longer term, one also has to be open to radical transformation, because that’s what we’ve seen multiple times in the past. We’ve just seen the order of things, even before humans, radically overturned — by evolution, for example.
Bryan Caplan: Yeah, I’d also say that from 1800, hearing about the world of 2023, the thing which totally makes sense is all the technologies are designed to serve men: by men for men. So I’d say that’s not so weird. And then someone would say that many of these have been used for evil purposes — and gee, that’s no surprise at all, of course. Honestly, even in 1945, if you were to imagine someone saying, “This is what our technology has brought us: Death,” it’s like, all right, you’ve got a pretty good point, man. You couldn’t have had the Holocaust without modern technology.
Rob Wiblin: So here’s another reference class that I think suggests that we could see more radical change in coming decades than what we’ve seen in the past. I think you might be familiar with this from talking with Robin Hanson, but basically, if you zoom out from our lifetimes, or even the last 100 years, then what you see is a series of different kinds of eras: in each one, economic growth and technological advancement are substantially faster than in the previous one, and each era is shorter than the one before.
So you have the prehuman era, very slow improvement and very long lasting. Then you have human hunter-gatherers, where they were increasing population very slowly, but faster than life was going before. And it’s also very long, about a million years. Then you’ve got the farming era, 10,000 years or so. Growth rates are like 10 times what they were before. Then you’ve got the industrial era, with growth rates about 10 times again. And also, at least until now, it’s only lasted maybe a few percent as long.
From that very zoomed-out point of view, I think it would not be that surprising to say that there’ll be a future stage where economic growth is 10 times higher again, and we could get to that plausibly within the next century. How would that happen? You could speculate about various different options, but of course, having an industrial revolution for thought, for analysis of ideas, seems like a very natural one. How would you get a much faster effective population growth rate, or a big increase in the effective labour input into the economy? Having minds running on machines that you can manufacture en masse seems very natural.
So I think that’s one outside-view argument that makes me less sceptical of the idea that actually, maybe we could see substantially faster changes in future than what I’m familiar with in my lifetime so far. What do you think of that kind of outside-view projection?
Bryan Caplan: A lot of this hinges upon considering that we’re still in the industrial era. And I would say that’s odd, because it seems like we’ve been in this information era for 30 years. Basically, you say that we’re still in this era and there are more eras coming, but in terms of the transformation of society from the information age, it seems like we’ve already had enormous changes. So maybe you’re a little bit too young, but you kind of remember a little bit of the world before the internet?
Rob Wiblin: Yeah, I remember pre-internet a little bit.
Bryan Caplan: I remember really well. And if I were able to go back to my teenage self and describe it, I would have just been stunned at the transformation and how much difference technology has made. Yet we don’t see much of this in economic growth rates. Now, I am someone who has been arguing that growth has been understated, but it’s not understated by a factor of 10. I’m very on board with the idea that true economic growth has been a percentage point greater than measured for, say, 40 or even 50 years. And when you compound that, then things are actually understated by maybe a factor of two.
The other one is I would just say that basing your prediction on four inductions is pretty crazy. It is a lot like the Kondratiev wave theory of the economy, which has kind of died out, but there was a Russian economist named Kondratiev who basically came up with a 70-year wave cycle based upon three observations. And there are people who say it’s all true, but it just seems pretty wacko to me. For something like this — “as we’ve seen three speedups, a fourth one is coming” — it’s a very tiny amount of evidence, so I wouldn’t go and put much weight on it. And again, I reference Pythagoras specifically because it is a sort of math mysticism.
Rob Wiblin: OK, so let’s take the first argument first: Isn’t it a bit surprising that we’ve already got information technology and yet it doesn’t seem like that’s led to an increase in economic growth? I guess this outside view would just say that at some point we’ll come up with some sort of technology that would allow us to boost economic growth a lot. And it could just be that the ones that we’ve had so far just haven’t had that much punch, because they don’t effectively increase the population of labour going into the economy that much. They kind of augment humans, but it’s all still very much bottlenecked on humans. But at the point where we can get machines that do most or all of the things that humans do, then we can just get a massive increase in labour, because it won’t be bottlenecked by humans. Or it never need be bottlenecked by humans, because humans are not actually necessary for any specific task, once we’ve fully automated the things that the human mind can do.
But even setting that aside, even if it’s maybe a bit surprising that computers haven’t led to more of a speedup, I just don’t think that would convince me that there is no technology coming that could lead to a significant speedup. Even if it’s not AI, it could be something else.
Bryan Caplan: Right. I mean, as soon as you start phrasing it in terms of “none” — “there’s no technology that could” — that’s where I’ll say you’re right, because we have added on enough grammatical modifiers to get to the level of tautology pretty much. Like, if you just say, imagine a technology where human beings are no longer a bottleneck at all, then couldn’t we easily get massive growth? And I’ll say yeah — because you basically have removed the key factor that slows things down.
But the question is, could you ever really remove human beings as a bottleneck? This is where I’ll just say “could” is, again, it’s a strong word. But do I see human beings being removed as a bottleneck by 2035 or 2060? I’ll say no. Even at minimum, there is the bottleneck of humans controlling the legal system and just preventing technologies from being used, because there’s so many ways human beings bottleneck things.
And while we’re on base rates, there is the question of why it takes so long, even for general-purpose technologies, to really catch on and really start living up to their potential. It seemed like it took electricity about 30 years before it really started living up to its potential. I believe the first phone call is placed in 1873. [Editor’s note: it was 1876!] When is the first transatlantic phone call placed? A true phone call, where you pick up a phone and call somebody?
Rob Wiblin: The ’50s, right?
Bryan Caplan: Yeah. So it’s 80 years between those two things, which boggles my mind. How did Roosevelt talk to Churchill? They had a radio relay station in Newfoundland, so they were combining telephone and radio in order to go and get it. But basically, until the ’50s, the number of transatlantic phone calls being placed per day using this proto-technology was, I think, under 100 — some crazy low number. So it does give you an idea of how, even with a technology that seems like it should be doing wonders, there are just so many issues. And most people know that there was a transatlantic telegraph cable, and so they figure the telephone cable would have to happen barely after that. But that’s just wrong.
Now, as to what’s going on, I’m tempted to get Shakespearean and say, human beings, we are this crooked timber. I don’t think that’s Shakespeare. [Editor’s note: it was Kant!] But human beings, in all of our flaws and all our complexity, we are so intertwined with our technology that to go and unravel us from it is nigh impossible. And while human beings do great things, we also are an incredible pile of sand thrown in the gears of every machine. And just to clean us off of that machine, it’s just too hard. So that’s where I’m thinking we just see that even the most promising technology just takes a really long time.
And by the way, in terms of base rates — and this is one that Robin really doesn’t like — but just imagine the dawn of the domestication of animals, and someone comes along and says, “We’ve improved these animals a lot, right? Can you imagine improving them so much they actually take over and get the upper hand?” I think at the very dawn of domestication, it would have been kind of dogmatic just to say, no, there’s just no freaking way. It’s like, look, we turned wolves into dogs. Who’s to say if we keep doing this for another 10,000 years, what these beasts might turn into? But the whole time, the key point is that human beings are controlling this adaptation, and making it so we have done amazing things with dogs, but we have not bred the dog that turns on its master.
Rob Wiblin: So turning to the next argument, which is that you’ve got like three or four different periods… I suppose you could probably extend it out to more, if you’re willing to extend the analogy to earlier eras of evolution and so on, and see this increase in complexity —
Bryan Caplan: By the way, to be fair, in Robin’s model, he has growth as a series of exponential nodes, so he actually fixed the math so that it’s OK for there to be this intermediate period where it doesn’t seem like the IT is doing much, because basically it’s like a weighted average of which era you’re in. So he doesn’t require there to be a sudden transition. So he’s got the math worked out; it’s sort of reverse engineered to work.
Rob Wiblin: Yeah. So the thing I was going to say is, rather than do it as this discrete series of eras, you could just say if we zoom out and fit a very simple model to things, then you would say that there’s growth and there’s increasing rates of growth. And I guess if you add many more parameters, then you start getting some more complex curve in there. But the zoomed-out picture is growth and increasing rates of growth.
And so if I want to say I’m going to make a really boring prediction about the future, I’m going to just say that the trend of the past million years or the past 10 million years is going to continue. I would say in the future there’ll be growth and there’ll be an increasing rate of growth, if you zoom out sufficiently. So I think of that as the boring prediction in a way: the very straightforward base rate prediction. And that leads me to think that the world, in 100 years’ or 200 years’ time, there’s a good probability that it will look radically different and it will be quite shocking to me.
Bryan Caplan: Right. What’s the best way of thinking about that? One, I would just say that it seems like in the cutting-edge countries of the world, we really have bumped into the opposite problem, where it seems to be getting harder and harder to get any good new idea. I’m going to blame government for some of it, definitely. But still, there’s some very good stuff from Charles Jones just showing it seems like the number of minds that you need working on a problem to get anywhere appears to have multiplied by like a factor of 10 over time. And he’s just got a story of low-hanging fruit: it’s just getting harder and harder to go and find great new things.
Now, if you say that’s a short-run thing, and eventually we’ll break through and then there’ll be a new era of low-hanging fruit, maybe. But for example, it would not be surprising to me if 1,000 years from now we haven’t figured out anything better than nuclear power for energy. It’s an incredible technology. When you describe it, it’s like, my god, you could go and power a city with a baseball’s worth of fuel and it hasn’t taken over the world? Yeah, that’s where the crooked timber of humanity comes in: where we can discover a true sci-fi technology and yet we don’t really make use of it.
This is where Tyler [Cowen] often says, “Bryan, you claim to be so optimistic, but really you’re very pessimistic.” Look, I’m optimistic that something good is going to happen, but I’m not optimistic about any particular good thing happening, because there’s just so many good things we can imagine. And yet to go and put your hope in any one of them to me seems mistaken. You’ve got to just say that there’s just thousands of things where good things can happen and it’ll be great if we go and realise 10 of them.
And that’s what I think actually has historically happened. There’s a lot of things we haven’t really improved at very much. Most obviously, we haven’t even gotten that much better at making people happy actually. There’s still a lot of people who have all the benefits of modern technology and they’re still suicidal. Including all of your alleged antidepressants, which don’t seem to actually work that well for a tonne of people.
Rob Wiblin: Yeah. So on R&D, it seems like, broadly speaking, you get logarithmic returns to throwing additional people at problems. Although I guess you do get a thing where sometimes new questions or new fields come up, and then you get kind of rapid progress within those until things start levelling off again. I think that is a useful reference class to have in mind. You could say that actually, over the last 50 years, things look a bit weak. We’re not really seeing big increases in growth.
Bryan Caplan: Well, there’s an acceleration in global growth, but it’s so easy to explain that as catch-up growth, where backwards countries are just borrowing ideas that are totally solid.
Rob Wiblin: Right, yeah. I think you can significantly explain this by the fact that we failed to take all of these surplus resources that we’re making and turn them into more people, basically. So in fact, population was a past driver of growth and increases in inputs into R&D, and that’s really gone away in recent times.
Bryan Caplan: Although the total number of minds that are in R&D has skyrocketed. There is the question of is a PhD today the same as a PhD from 100 years ago? No, PhDs from 100 years ago were pretty awesome, at least in the real subjects, whereas now PhDs can often be quite mediocre.
Rob Wiblin: Yeah. So the number of smart people and educated people in the world working on that kind of stuff has definitely gone up since 1950, but it was increasing at a much bigger proportional rate before 1950, when birth rates were much higher.
Setting that aside for a minute, because we’ll have to move on before too long, another angle I wanted to bring to you is it’s possible that when we invented and broadly disseminated the printing press, that the resulting upheaval in ideas and culture caused the European wars of religion.
Bryan Caplan: Yeah. In fact, I think that’s almost certain.
Rob Wiblin: Yeah, exactly. More or less, yes.
Bryan Caplan: “Printing press caused the wars of religion” seems really solid to me.
Rob Wiblin: It’s a really solid claim. Some people have suggested that radio was significant in allowing Soviet communism to exist, by making it possible to brainwash people with particular ideas in a way that previously would have seemed impractical. So similar idea. Now it seems like if we had some sort of period of instability like that today, the probability of it leading to a nuclear exchange that kills most people would have to be substantial. More than 10%. What do you think is the chance that all AI technologies taken together prompt a cultural upheaval as significant as the printing press did?
Bryan Caplan: Here again, if we say if we’re adding on to what the internet did, I’d say the internet has already created enormous upheavals, and ones that are quite unanticipated — definitely not by me, but I don’t remember anyone saying [it would be] most of the stuff that we now see in front of us. And then the question is: How much more will AI do? I tend to think that, again, as long as you’re focused on human beings are the ones that really call the shots — we don’t give actual autonomy to machines — as long as we’re doing that, then I don’t think the marginal upheaval from AI is going to be very much.
In terms of telling an AI to come up with an argument that’ll make right-wingers super angry today, whether they’ll be that much better at creating such arguments than regular humans are, I don’t know. I guess I can imagine it. I think this is one where you would have seen it already actually. I can’t think of any time that AI has been used right now to come up with an argument that really gets the other side super angry, or even one that will totally motivate my side. Like, “I really want to have protests in the street tomorrow. What do I tell my people, AI?” — and the AI will say, “You’ve got to use fewer verbs.” It seems like we would have seen it.
Again, I wouldn’t be shocked by saying that we could get double the upheaval of the internet. To say it would be massive, I think that there’s always just some tail risk of massive upheavals without any change in technology too. In terms of multiplying it, I just don’t think it’s too much.
Rob Wiblin: Let’s say that somehow you did know that it was going to turn out that AI technologies in aggregate did cause an upheaval that was more the size of the printing press — which I guess I would say is probably a couple of times larger than the internet. In that case, would you feel more worried about how the future might go?
Bryan Caplan: I would still fall back on this idea that rich people just love life too much to die for much of anything. So I’m on board with Steve Pinker’s Enlightenment Now story of the pacification of the world. I’ve got a micro story where when life is cheap, it’s just much easier to get people to risk their lives; when people have it good then they are cowardly. I’ve got a whole story about the main reason for the world wars is that it does take a generation or so for rich people to get a new mindset, so basically you have this brief intermediate period when they’ve got the technology of the modern world without the ethos of the modern world. And that’s what I think explains the world wars.
This also is a big part of the reason why I’m really hoping for rapid growth in the third world, so they can quickly get through this last remaining period where they’ve got modern technology but premodern value of life. I think once we can get almost everyone in the world rich, then almost no one will be ready to die for much of anything, and then we’ll be about as safe as we can get. Always remembering, though, this crooked timber of humanity — and someone could go and do a launch for reasons that don’t make much sense to us, but they do it anyway, and they just blow up their civilisation.
I’m not a big fan of [mutually assured destruction] as an absolute theory that there’ll just never be a nuclear war, because I know how World War I started. It’s like, you idiots. We have three cousins on three thrones of Europe, and all right, only two of them actually have much power, but still they go and they blow up their world. And for what? Actually because of Serbian terrorism, as it turns out. As you may have heard, it turns out that the Austro-German theory that the Serbian terrorists who assassinated the Archduke were actually funded by Serbia is correct. So it was state-sponsored terrorism; the Black Hand was not acting on its own. And once we see this, it’s like, hey, that list of ultimatums that was given to Serbia: they should have just agreed to everything and avoided World War I. Jerks.
Rob Wiblin: OK, I think let’s wrap up the AI section there. Obviously, neither of us has really changed our minds all that much.
Bryan Caplan: You got me thinking, Rob. Honestly, the argument that really got me thinking the most is a couple of weeks ago, I said human beings are not just going to let themselves be replaced by machines. And Robin [Hanson] said, what about, like, a million years from now? I’m like, Jesus, a million years. To go and say what the world will be like in a million years, you got to be real cocky to go and talk about that. All right, yeah. In a million years, lord only knows what’s going to be going on.
Rob Wiblin: I mean, I think that there will be competitive pressures that encourage companies and governments and countries. Inasmuch as AI models acting with greater autonomy are more productive, they can make more money, they can make decisions faster, there are competitive pressures that will drive people to delegate more and more influence and more and more decision making to AIs just because it’ll be way more profitable.
Bryan Caplan: There’s autonomy and there’s pseudo-autonomy, and I think you can get almost all the benefits of autonomy with pseudo-autonomy.
Rob Wiblin: Then the question becomes: Will you have people giving very bad instructions, or very harmful instructions, to models that are really powerful? And then there’s always this question of misalignment — which I’ve kind of bracketed today, because that gets into tricky technical issues where I feel someone who works on technical ML would be better to talk to you about that. It’s maybe the most slippery to do in words. But either way, I’ve listened to a lot of stuff that you’ve said about AI and I feel like I actually understand your views better. I have a better sense of your model now after that. So I think that was useful.
Voters as Mad Scientists [01:51:16]
Rob Wiblin: Pushing on, your most recent book is a collection of essays about politics and political irrationality titled Voters as Mad Scientists. What do you think is something important to understand about politics that the audience of this show, or maybe me in particular, are most likely to be getting wrong?
Bryan Caplan: Good question. You’ve got a great audience out there. I think probably a lot of it is saying, sure, people are irrational sometimes, but the system basically works. I think that this is mostly based upon status quo bias and just being very accepting of the world as it is, rather than having actual EA standards that are external to it, where you are really doing an actual honest-to-goodness evaluation.
I was just doing a debate on capitalism and socialism, and what was striking was that the other guy, Scott Sehon, is a very reasonable philosophy professor — and whenever I talked about big bad things that governments do, he would say, “Sure, governments make some mistakes.” And I said, “Look, I’m not saying they make some mistakes. I’m saying the main things they do are terrible.” It’s a very different point.
So he was basically a huge fan of Sweden. To him, it’s basically the best country that’s ever existed, at least darn close. And I said, your beloved country Sweden, guess what? In the ’60s, ’70s, and ’80s, they did an incredibly fast switch to nuclear power. They were on track to be relying upon nuclear power almost entirely. And then, based upon the Three Mile Island accident and Fukushima, they moved very rapidly in the opposite direction. He’s like, “Well, even the Swedes…” I’m not saying they made a mistake. I’m saying they had a fantastic system and they just went and trashed it.
And this is democracy. It’s not just some isolated mistakes. It’s having something that is a great idea and either refusing it or… Really, Sweden did better than other countries, because they first went with the great idea. But then they even more inexcusably halted the great idea and reversed it and tried to go and decommission all their nuclear plants. It actually now looks like they won’t go quite that far, but still. So that’s the way that I would think about understanding voter irrationality: it’s not just on the margins; it’s not just a few random things — it’s basic functions of government, things that almost everyone just takes for granted.
Another one, and this one is great for EAs: Almost every country, I think really every rich country spends considerably more on universal redistribution than on means-tested redistribution. From an EA perspective, this is just insanity. Just imagine what EAs would say about a billionaire who says, “I have $8 billion to give away. Here’s my plan: $1 to each person on Earth.” All right, there are worse things you could do, like you give $8 billion to a terrorist group or something, but it’s about as dumb of a helpful thing as you could do.
It’s like, target your resources to where they do the most good. And yet every first-world government anyway, they spend a lot more on universal redistribution. The intellectual case is pretty simple; it comes down to: Why take money from everyone to give to everyone? Why not instead focus on the biggest problems, and just say most people just don’t need help and can take care of themselves?
And then the defences of this, even from social scientists, are so pathetic. Pathetic just in the sense that they hardly even exist. If you just go to Google Scholar and try to find all the defences of the way that first-world governments spend trillions of dollars every year, you’ve got like 20 articles. And that’s it. It’s like 20 articles to justify spending trillions of dollars every year? And what are the defences? Well, there’s the one that says the only way to redistribute is to do it universally, because otherwise people vote too selfishly, and you have to basically trick them into thinking that they’re benefiting — even though, of course, on net they are not — which is an awfully specific theory of human error.
Rob Wiblin: OK, let’s just pause and back up for a second, because I think there’ll be lots of people in the audience who, while very sharp, are not aware of your broad take on voter behaviour and where they go wrong. Can you lay out the key simple argument for rational irrationality on the part of voters that you lay out in The Myth of the Rational Voter?
Bryan Caplan: Yeah, very good. It comes down to this: Imagine that you go to the grocery store and you just start throwing objects in at random and buy them. What happens? Well, you waste a pile of your own money on a bunch of stuff you don’t actually want, right? Or imagine, even more strongly, what if you just go in there and you just buy a bunch of stuff that you’re supposed to want? So you just go and put in a whole bunch of rice cakes or whatever stuff is allegedly super healthy, and then you buy it. And what’s happened? Yeah, you just have a bunch of stuff that you don’t even want to eat because it sounds good, but in fact it’s disgusting and you can’t stand it, right?
And when you make decisions on this basis, you are the one that suffers: it is your money that is wasted. Which doesn’t mean that no one will ever do it. We’ve all made purchases that afterwards we’re like, “Man, that was kind of dumb. Why did I buy that thing?” And yet it is quite abnormal for you to go fill your cart with a bunch of total junk that you don’t even want and then get home and say, “Why did this happen to me?”
On the other hand, if you go and vote randomly, or go and vote for a bunch of stuff that just sounds good, even though it doesn’t work very well in practice, what happens to you? And the answer is: the same thing that would have happened to you if you were the most diligent, thoughtful voter in the world and voted on that basis. Because you’re just one person. You’re just one person out of millions or tens of millions or even 100 million voters. So effectively, you have no influence on the outcome, which means that you really can safely go and vote randomly, or you could very safely go and vote for what sounds good rather than what actually works well.
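Bryan’s point here is essentially an expected-value claim, and it can be sketched with a toy calculation (all numbers below are hypothetical illustrations, not figures from the conversation):

```python
# Toy expected-value sketch of the incentive to vote carefully.
# Both numbers are hypothetical illustrations, not estimates from the episode.

def expected_personal_payoff(p_pivotal: float, personal_stake: float) -> float:
    """Expected personal gain from casting a well-informed vote: the chance
    your single vote decides the election, times what a better outcome is
    worth to you personally."""
    return p_pivotal * personal_stake

# Suppose a better government is worth $10,000 to you personally, but in an
# electorate of tens of millions your vote is decisive with probability on
# the order of 1 in 10 million.
payoff = expected_personal_payoff(p_pivotal=1e-7, personal_stake=10_000)
print(f"${payoff:.4f}")  # on the order of a tenth of a cent
```

With stakes that small, careful research before voting is hard to justify on selfish grounds alone, which is the asymmetry with the grocery store.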
Now, many people say, “Well, why would I vote randomly?” Yeah, probably it’s going to be more that you’ll vote emotionally. You’ll vote based upon what sounds good; you’ll vote based upon ideology. If you were to say, “I’m going to go and figure out what job to do based on philosophy,” your philosophy is not going to be very helpful for figuring out that question. But going and voting based on a philosophy, that’s actually quite normal for people to do.
Now we’re in the middle of a new book, where I think that I really am taking the argument from The Myth of the Rational Voter and giving it a lot more psychological structure, and I’m really happy with how it’s coming out. And this is where I build very heavily on the idea of social desirability bias. It’s basically very simple. It’s a commonsense idea with a fancy name. It just says: When the truth is ugly, people lie. And when the lies become ubiquitous enough, people often just forget that they’re lying. They lose consciousness of it because no one’s ever even challenging them. And I say this is really the general theory of democracy: what rules policy is just what sounds good, not what is good. Because virtually everyone really is voting based purely upon the most superficial appearances, and even curiosity about what the real effects of policies are is so low.
Rob Wiblin: So as a consumer, when you’re trying to decide what car to buy, you have a big selfish personal reason to do your best to look for evidence and make a good decision. As a voter, if you’re deciding who to vote for, you have virtually no incentive to put in enormous amounts of time and effort and energy to figure out who objectively is going to lead to the outcomes that you want, because the probability of you affecting the outcome is negligible.
Bryan Caplan: Right. And also just to calm down, because when you’re buying cars, you’re like, what’s the coolest car? That’s like a Ferrari. Then why don’t you have a Ferrari? Because I don’t want to give up everything else in my entire life to have a Ferrari.
Rob Wiblin: Yeah. So I think of this stylised fact, that voters are not given sufficiently good incentives to do the extremely hard work of figuring out what’s the best thing to vote for, as the key reason that democracy falls short of what we would ideally like. Is there any evidence we can point to to say that it’s this, rather than one of these other complaints, that’s the key thing driving the problems?
Bryan Caplan: I think the very best and most experimental evidence is just from betting. When you go and ask people making extravagant political claims, “Would you like to go and bet on that, and at what odds?” you will just see that people’s confidence suddenly plummets, and often they just run away entirely from what they’re saying. So I think this is the very best case where we actually can, in real time, change people’s incentives: from the consequence-free world that political discussion normally lives in to one where you’ve got to be precise, where there’s a definite right and wrong answer, and where there are stakes on the line. And then you really will see that it’s not just the way they talk that changes; the way they think changes.
I don’t have telepathy, but the idea that when people are really angry about politics they are lying just normally seems fanciful to me. If someone is saying, “No one will want to ever come back to Texas again if we don’t change our abortion law,” when they’re saying that, they sure seem sincere. Well, how about we go and bet on what happens to migration into Texas? I’ll give you 100:1 odds that the migrant population is greater than zero next year. So some of it is like, “OK, fine. Some people will come.” Well, that’s pretty different from all, from no one will come. Fine, I’ll give you 10:1 odds on there being a 50% decline. And this is where, if you just look at their faces, you can actually see the telltale signs of a person finally, for the first time in their lives, facing facts: thinking about what they’re really saying and whether it’s actually true.
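As an aside, the odds Bryan quotes translate into implied probabilities in a standard way; a minimal sketch (nothing here comes from the conversation beyond the 100:1 and 10:1 figures):

```python
def implied_probability(odds_against: float) -> float:
    """Offering 'odds_against : 1' against a claim means risking
    odds_against units to win 1 unit, which is fair only if the claim's
    probability is at most 1 / (odds_against + 1)."""
    return 1.0 / (odds_against + 1.0)

# Giving 100:1 odds on your own side prices the opposing claim below ~1%;
# giving 10:1 prices it below ~9%.
print(round(implied_probability(100), 4))  # 0.0099
print(round(implied_probability(10), 4))   # 0.0909
```

So even a dramatic retreat from 100:1 to 10:1 is only a move from roughly 1% to roughly 9% confidence in the other side’s claim.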
So I think that is the best way of doing it. I have this long-running argument with Tyler Cowen who has lost a couple of public bets to me and also suspiciously maintains that bets don’t prove a damn thing.
Rob Wiblin: Well, we need to know the temporal sequencing of those maybe to judge.
Bryan Caplan: Yeah, it’s kind of suspicious to me, but I’ve stopped arguing about it because he seems to get kind of agitated about it. Out of character. But for him, he’ll just say, well, whatever your portfolio is, that shows what you believe. And I’m saying, “Here’s my portfolio; tell me what I believe.”
Rob Wiblin: You mean what investments you hold?
Bryan Caplan: Yeah, like is it even true that Trump and Biden voters have different stocks they’re holding? Or a different mixture anyway of what stocks they’re holding? You say it’s all values, there’s no factual disagreements. Sure seems like there’s factual disagreements. I would just say that “portfolio” is just a really vague description of what you think is going to happen. It’s true if you own stock it means you probably don’t think the world will end in a year. But other than that…
Rob Wiblin: It is this remarkable stylised fact that you could imagine a world where people talk a big talk about politics, they have really strong opinions, they say these kind of extravagant things — and then when you try to bet them on it, they really believe it, and they actually are constantly losing money to people who are betting with them on it.
Bryan Caplan: Yeah. “I’ll be so rich.”
Rob Wiblin: Yeah, exactly. But it’s so fascinating that that basically never happens, because as soon as you put real stakes on the table, almost everyone realises that they don’t know. And thank god, I suppose. This has happened to me as well. Obviously, when you start getting into loose talk, you get passionate about things and then someone’s like, “All right, let’s make a concrete prediction and a concrete bet. Put money on it.” Like, I need to be more careful.
Bryan Caplan: I mean, there’s several variations. Sometimes it turns out that the person just is using words in a weird way. Like, someone was predicting that if immigration continues into Europe, there’ll be a civil war in 20 years. And I said, “How about if 10,000 people get killed, we’ll count that?” And he’s like, “No, the civil war is already happening, because an immigrant killed someone in Toulouse last week.” OK, so basically you just use the word “civil war” in a totally idiosyncratic way to confuse people. So maybe we don’t disagree then. Yeah, I think immigrants will kill people if you let them in, because there’s a lot of immigrants and there’s a greater than zero murder rate.
Rob Wiblin: So given that voters face such weak, selfish reasons to vote really prudently and wisely, why aren’t things worse?
Bryan Caplan: Great question. Part of it is that there is actually, in addition to issue-based voting — where you vote based upon the policies that politicians favour — there’s also what’s generally called retrospective voting — where people go and vote just based upon is the world collapsing or not? Do we have peace? Do we have prosperity?
Now, the real optimists say that then we can just disregard all people’s stupid policy views, because all that politicians get judged on is economic performance and peace. And I say that’s grossly overly optimistic. The way retrospective voting basically works is: if things get a lot worse than they were before the election, that’s when you lose. And even there, it seems like it’s mostly just a big fall in the three months before the election. Larry Bartels has some good stuff on this.
That means that while on the one hand, it does stay the hand of people that would just drive their civilisation off the cliff, on the other hand, it means that there’s still almost no political pressure for improving things in obvious ways, such as deregulating nuclear power. In fact, this is one where if there’s a story in the news about a nuclear accident, it’s the kind of thing that’s likely to lead to immediate action to go and crush nuclear power, even when no one died. It’s way safer than stuff that we’ve allowed for hundreds of years, like coal. So why are you having this ridiculous reaction?
Yet a politician that wants to stay in office does not just go on the news and say, “Nuclear is the safest kind of energy. We’ve already regulated it way too much. I’m not going to add one more regulation on. No way. Line in the sand.” That’s crazy. That person is going to lose, of course. The best you can do is just add on a few more token regulations. But to go and deregulate something that is unpopular, where the gains would not be obvious for a really long time, that’s the kind of thing where I just don’t see that the system has much tendency at all to get things right.
It does stay the hand of someone who would just lead you straight into disaster. So in Venezuela, you do need to switch to a dictatorship after you just turn your country into a hellhole. But on the other hand, if you’re in a hellhole and people don’t remember a time when you weren’t, it’s pretty darn hard to get out of the hellhole. It’s like, it didn’t get to be even worse of a hellhole, so reelect them.
Rob Wiblin: OK, so the stylised thing here is that voters are biased in a big way to vote for things that sound good.
Bryan Caplan: On a very superficial level.
Rob Wiblin: Yeah. Is there any way that that voter tendency and the low effort that they put into deciding how to vote can be turned towards good? Like maybe, given that, the most important work in politics is to find policies that are actually truly good on the merits, and then find ways of framing them that sound good and noble to someone who isn’t paying that much attention?
Bryan Caplan: That sounds great. It’s just so hard. Basically it would just require this amazing coincidence where the ideas that are really great are actually susceptible to being sold in this way, in competition with people who can go and take any idea. Just imagine what this competition means: someone who is constrained to find good ideas and then sell them effectively to people that are very superficial, versus someone that doesn’t have that constraint and could pick any idea at all. That person can just say, “What is the most saleable idea? Great, I’ll sell that one.” And then you’re in competition with the guy who is constrained: it’s got to be good before I can sell it. It makes sense that the person who has no constraints on what they’re willing to sell wins.
Now, in terms of what sounds good, on the one end, I think a lot of this is very much a human universal. So things like “think of the children.” I think that’s a human universal for pretty obvious reasons. Or even things like “education is wonderful.” But then there are things that vary a lot. Like if you are in the Middle East, going and crushing the Jews sounds good, but it doesn’t sound good in other countries. So there are these variations. But if you just imagine trying to go to the Middle East and say, “I’m going to figure out a way that will sound really good to them of making lasting peace with Israel,” it’s like, my god, what are you going to say? “Don’t worry, Allah will punish them for all eternity. You don’t want to go and interfere with that by making them suffer in this life.” Good luck with that. That’s not going to sell.
Rob Wiblin: I think this framing does suggest something that relatively few people do, which is to say that I think most people start with a thing that they’re passionate about and then they try to find a way to pitch the policy that they want. This is just saying, let’s look across all of the underrated policies — policies that are better than people appreciate — and think which of these is the most saleable in a way that hasn’t been exploited yet. I guess I’ve never tried actively doing that.
Bryan Caplan: It is a great EA mission, yeah. I don’t mean to discourage people from doing their best; I just want people to have realistic expectations that there’s a reason why good stuff hasn’t won yet — and that’s that it’s probably less saleable. Just the idea of “we should consider the cost of doing something” doesn’t sound good, if you realise that you’re going to be against a demagogue on the other side saying, “When it’s something as important as our children, cost is the last thing we should think about!” How am I going to beat that guy?
Allocating resources in space [02:08:21]
Rob Wiblin: All right, we’ve got a couple more minutes, so I’ll leave the final two questions to the audience. Someone sent in this question for you: “If humanity at some point is able to reach and make use of other planets or asteroids, how should those resources be allocated? And what if suddenly it’s easy to get to them and so many people could race to them roughly simultaneously?” — it’s not that there’s only one company, say, that could reach them. How should they be split up?
Bryan Caplan: I’d say that realistically, the very best system we’ve ever had is “finders keepers.” Whoever gets there first gets it. There’s been some really interesting research on countries where they have better mineral rights for the people that find them first. I totally don’t remember where I heard this, but it makes too much sense to not be true. It’s truthy, anyway. Those countries appear to just have a lot more resources. Is this a coincidence? I think it’s more that when it’s finders keepers, there’s just a lot more energy that goes into finding them.
You can also see this in archaeology. The golden age of archaeology was in the finders keepers era, and once it was set up that whatever you find, you have to hand over to the government of that country, at basically a 100% tax rate normally, that has been a horrible blow to archaeology. It seems very likely there are tonnes of finds still out there, but you can no longer fund them in a for-profit manner as you could in the past.
I’d say the same thing would make the most sense to me for the case of exploring space. It’s true that economically it is wasteful if there is a technology that’s very widely available. Just to really stack the deck, imagine that the cost of travel is quadratic in your speed or something like that, and so you could wind up burning up all the resources in the race, and then it just winds up being futile. I would just say that that is so unrealistic, it’s just not a good thing to think about. And remember, even if it’s true for some close resources, it’s not going to be true for far resources, and it’s just better if there is not wiggle room on this.
So the finders keepers rule is the best that human beings have come up with. Theoretically, there’s got to be something better than that; it’s very easy to come up with a model. But in practical terms, finders keepers, I’m all on board with that, number one.
Rob Wiblin: That makes sense from an economic point of view. The thing that would trouble me most about that is not so much that it would motivate racing, but that it would motivate violent conflict. If, when you can grab something, you get to keep it, then that motivates people basically to fight one another.
Bryan Caplan: I would say normally it demotivates fighting, because finders keepers, there’s the keepers part, right? “I was there first, so it’s mine and no one can go and take it from me.” And the legal system has to recognise that.
Rob Wiblin: I think that what that would devolve into is people trying to forcibly displace other people in order to grab these resources. And so I think probably the safer bet would be to have some kind of divvying up of resources in proportion to people’s kind of military power. That might be the thing that’s least likely to generate violent conflict over it. I mean, you do see that there’s spoils of controlling government within a country, and often they are split kind of along these lines in proportion to basically the ability to do violence of different interest groups.
Bryan Caplan: Well, we’ve got two cases where we did something pretty close to that: one is Antarctica and the other one is space. I would say both of those show it’s a bad idea. And yes, divvying up territory to governments based upon military prowess basically just means that you give each government a big pile of stuff they don’t do anything with.
Rob Wiblin: Hmm. I have to think about that one.
Bryan Caplan: So there was this age of imperialism where governments seemed highly motivated by the sheer thrill of colouring the map. That’s over. And I will say it’s a little confusing as to why that is so, but it does seem like governments just said, “No one can own space. No one can own Antarctica.” There are these zones, so it’s not quite what you’re saying, but I think it is pretty much handing it out based on military prowess. Not quite, because Norway has a bunch of Antarctica, because they’re cold-weather explorers.
Rob Wiblin: What about the two-step model where governments get their allocation and then they privatise it — they just sell it off — and then use that to fund government services?
Bryan Caplan: Sounds great, but that’s just not the way that things have worked for over 100 years. So if you’ve seen a map of the US, a third of our land is just owned by the federal or state governments. They just have not been privatising much of anything since the homestead era, really.
There is this popular view that the government only owns land that nobody wants. That is wrong. If you take a look at Texas (and you have to drive around a lot of Texas, because it’s such a big state), there are enormous areas that appear to be complete wastelands. And yet, because it was settled under the old rules, there’s almost no government land in Texas. So if you ever drive from Amarillo to San Antonio, which I have done, almost all of that is private land, and someone wants to own it. And you might look at it and say, god knows why they want it, but then there’s always option value: maybe it’ll be useful. And they are developing it: they’re building new cities there, and a lot of it is oil drilling.
But once something is owned by somebody, the amount of creativity of “What could I do with this thing which seems totally useless?” turns out to be quite high. But as long as it’s owned by government, then there’s not even really much reason to ask those questions. The people in charge of it are like, “We just preserve it as is, useless to humanity.”
Homeschooling [02:13:42]
Rob Wiblin: Okay, another audience one that we’ll make the final question. You homeschool all your children, I think. Do you have four now?
Bryan Caplan: Only four.
Rob Wiblin: Only four, yeah. “Isn’t there a huge opportunity cost for you in that? You could be spending that time doing research or writing or just having fun, given that you’d think you’re not specialised in teaching young children. So given that huge time cost, should listeners seriously consider homeschooling their kids or not?”
Bryan Caplan: The actual story is that I was doing just my older sons, and then during COVID I did all four. And then there was a negative opportunity cost during COVID, because it was either that or monitor them doing Zoom school or whatever. So I found it easier to do it myself than to monitor. And then after kids started going back to school in person, my older sons were in college, and the younger ones, we gave them a choice. One wanted to stay in regular school; one came back.
In terms of the opportunity cost, it is much lower than most people would think, because of the system that I have. Basically I put in a modest upfront investment in just the curriculum — that might be 20 hours for a year or something like that, or probably less, once I’m doing it repeatedly. And then every day they’ve got a schedule, and most of the time I’ve scheduled it so they’re working on their own. And then I budget a certain amount of time where it’s feedback, where we go over it, but it’s not interrupting my day, normally. Basically, normally, I’ve got somewhere between 20 minutes and 90 minutes where I am going over and helping my kids with the work. But it’s not like a constant interruption by any means.
In terms of why I did it, I am definitely not an effective altruist with respect to my kids. I care about them a lot more than strangers. I do a lot more for them than I do for strangers. For my older sons, it really came down to they were just really unhappy in regular school, and I knew from some past trial periods that I could go and just give them a much better life. Since I love them, it doesn’t feel like work. It’s just really enjoyable. And my older sons are the kinds of students that you pray to get as a teacher: students who are really engaged, really curious. So for them, it was almost all a pure joy.
The only downside of homeschooling my older sons was really the college application process, where they were whining and bellyaching a lot about how stupid it was. “I agree, but I can’t change it. Why are you complaining to me? Yes, I know these essays are stupid. I know the system is a giant farce, but I’m not the person that should hear about these complaints. My job is just to tell you how to game this crummy system.”
Rob Wiblin: You should be proud of them, Bryan. They’re going to be fantastic bloggers one day.
Bryan Caplan: Yeah. Now, for my younger kids, during COVID they were not thrilled to be there. It was our best option, but that was a very different experience and a lot more draining. During COVID, it was either this or Zoom school, where I’d be leaning over their shoulders while someone incompetent taught them. So it was better for me to just do it myself. So that was the reason.
Right now, I’m just doing my younger son. He is a great kid, but he’s not like his brothers. He’s not that interested in doing this stuff. He’s here because it’s better than the alternative, which is not as much fun. Again, why do I do this for him? Mostly just to give him a better childhood. I love him and I like his company. A lot of what I get is I take him on most of my trips, and by myself I’m just terribly lonely, and this way I go and we get to travel the world together. So that’s fun.
He really wanted to study Japanese, and I’ve been wanting to go to Japan for a long time, and my wife just has zero interest in Japan. So not only would I not want to go by myself, but now I’ve got this great excuse: it’s part of his Japanese education to take him to Japan. Almost as soon as Japan opened to foreigners, we went. And I’m taking him again in December. And actually, I’m plotting to take him at least once a year, every year, all through high school, which is super fun for me, but he’s the one that makes it so.
Rob Wiblin: I’ve rarely thought of homeschooling as a junket, but I suppose when you have conflict within the family, it can potentially make sense. How much time do the kids save? Because like a typical school day is seven hours, right? Do they actually spend seven hours doing schoolwork, or did they manage to do the same in a lot less time?
Bryan Caplan: It’s not a lot less time. With my older kids, they’re so motivated that they didn’t want that much free time. In fact, they would just go and do their own academic interests in what was nominally free time. With my younger son, I’d say that it’s probably about two-thirds of a normal school day, all things considered. Obviously we cut out so much administration and so on.
But basically this year we’re just doing three things: we’re doing math (he’s doing Algebra 2), I’m prepping him for AP microeconomics, and he’s doing Japanese. So about two-thirds of the time is totally dedicated to these three subjects, where he is making very good progress and getting good. And especially with one kid, the positive is that we don’t move on until you’re good. This is not one of those cases where we’ve covered the material, and even though you don’t know it, now it’s time to move to the next subject. If you don’t understand it, we just keep working on it until you’ve got it. Also we double back, make sure you still remember this stuff, because I always say, “I’m not teaching you so you can do well on a test. I’m only teaching material that is worth knowing. And if you forget it the day after the test, then we’ve both failed.”
Rob Wiblin: I mean, a kid at a normal school only gets 20 minutes on average of teacher time, so I suppose they’re probably getting a massively better education from an hour or two of you. It’d be interesting to see in five years’ time, when my first kid hopefully is of school age, how good the large language models will be at substituting for teachers. I mean, it seems like they’re already quite educational, but they need a lot of fine-tuning in order to be able to do the wider range of educational things that a teacher does.
Bryan Caplan: Yeah, they’ve got that “explain it to a second-grader” function on GPT. I would say that I actually pay at least double, probably more like three or four times the Zoom cost, for Japanese tutoring, so my son can get in-person tutoring. And partly I just think that he is going to learn better. And just partly I want him to go and have time with other people besides me.
I don’t consider Zoom to be just psychologically the same. A lot of people feel the same way. I remember during COVID, we had a couple of lunches with you, Rob. But they were Zoom lunches. It’s just not the same as having real Rob.
Rob Wiblin: I’ve got to come visit sometime.
Bryan Caplan: Yeah, totally. I want to meet the baby. You’ve got to bring the baby, Rob.
Rob Wiblin: That’d be wonderful. My guest today has been Bryan Caplan. Thanks so much for coming back on The 80,000 Hours Podcast, Bryan.
Bryan Caplan: My pleasure. And the new book is Voters as Mad Scientists. You can get that on Amazon for $12, ebook for $9.99. You’ve also got everything else that I’ve written there, and I’ve got my Substack, which is Bet On It, so please check it out.
Rob Wiblin: Wonderful. Chat again soon.
Bryan Caplan: Great talking to you, buddy.
Rob’s outro [02:20:41]
Rob Wiblin: I just wanted to remind you all of something which we’ve discussed before but not recently, which is that we let guests look over the questions we’re going to ask them ahead of time and take out any they don’t like, and look over the transcript of interviews and take out anything they don’t like from it.
Keiran and I explained why we do that in an interview from February 2022 on our other show, 80k After Hours, titled Rob and Keiran on the philosophy of The 80,000 Hours Podcast.
This has various benefits and some costs as well.
On the one hand, it means guests can cut questions where they don’t have much to say ahead of time, and we can avoid questions that the guest is just going to dodge and not answer in a forthcoming way even if we do ask it.
It also means we can cut out anything where the guest misspoke and is worried they’re going to be misunderstood.
It also means we’re able to interview people who otherwise would just be unwilling to do an interview because they’re anxious about doing interviews, or they’d regard being put on the spot as too risky.
And in fact we’ve seen it makes guests much more relaxed and potentially more candid in what they say, because they can think about what they want to keep and cut from the recording at leisure after the fact.
On the other hand, it means we can’t make a guest address a topic they don’t want to talk about, or ask them hard-hitting questions they don’t want to face, or put them on the spot in a way that would be telling for the audience.
Fortunately, there are plenty of other outlets that take the more traditional approach to interviews, which is why we see our collaborative process as a way we can do something different and get out information that a more adversarial approach wouldn’t be able to uncover.
While we can’t get people to say things they don’t want to say, we can get sophisticated coverage of whatever it is they are willing to discuss.
If you want to hear more about how we think about this one, go check out Rob and Keiran on the philosophy of The 80,000 Hours Podcast over on the 80k After Hours feed.
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
The audio engineering team is led by Ben Cordell, with mastering and technical editing by Simon Monsour and Milo McGuire.
Full transcripts and an extensive collection of links to learn more are available on our site, and put together as always by Katy Moore.
Thanks for joining, talk to you again soon.