Transcript
Cold open [00:00:00]
Kevin Esvelt: So scientists correctly appreciate that, when there is controversy, you can get a paper in Nature, Science, or Cell — the top journals, the ones that are best for your career.
Therefore, the incentives favour scientists identifying pandemic-capable viruses and determining whether posited cataclysmically destructive viruses and other forms of attack would actually function.
And I have not seen any appreciable counter-incentives that could be anywhere near as powerful as the ones favouring our desire to know. Because almost all the time, it is better for us to know.
So I don’t see many plausible futures in which we do not learn how to build agents that would bring down civilisation today. We just know that, in the limit, if you get good enough at programming biology, we can do anything that nature can do — and nature can do the kind of pathogen that is necessary to kill billions and set back civilisation by at least a century.
Luisa’s intro [00:01:00]
Luisa Rodriguez: Hi listeners, this is Luisa Rodriguez, one of the hosts of The 80,000 Hours Podcast.
In today’s episode, I spoke with biologist Kevin Esvelt about why it’s getting easier and easier to engineer new pandemic-capable viruses.
If you’re like me, you might find it hard to believe a person or group would want to bring down civilisation intentionally. You might also be sceptical that the kind of person who would want to do that would also have the skills and technology to succeed. There aren’t many people who actually know how to make nuclear weapons, and have access to the material required to do so, for example.
But in this conversation, Kevin explains that omnicidal actors — individuals or groups that want to destroy all human life — are more common than you might think, and that the democratisation of new biotechnology is making it easier and easier for untrained people to engineer pandemic-capable viruses — including pandemics humanity’s never seen before.
But Kevin also explains that there are technologies capable of preventing this kind of catastrophe, and that despite all the dark possibilities, he thinks humanity can get its act together and solve this.
And now I bring you Kevin Esvelt.
The interview begins [00:02:23]
Luisa Rodriguez: Today I’m speaking with Kevin Esvelt. Kevin is director of the Sculpting Evolution group, which invents new ways to study and influence the evolution of ecosystems. Kevin is probably best known for his invention of CRISPR-based gene drive, which allows a trait engineered in a laboratory organism to spread on its own through wild species, and could be used to prevent diseases such as malaria and Lyme disease in wild animals.
More recently, he’s been an advocate for improved biosecurity, developing proposals to better monitor ourselves and our environment for especially dangerous novel pathogens, and to screen all synthetic DNA to make it harder for bad actors to access the DNA required to make bioweapons — both of which we’re going to discuss today.
Thanks so much for coming on the podcast, Kevin.
Kevin Esvelt: Thank you. It’s a pleasure to be here.
Why focus on deliberately released pandemics [00:03:11]
Luisa Rodriguez: So we’ve talked about the risks of natural pandemics before on the show, as well as pandemics caused by accidental lab leaks, but we haven’t talked as much about deliberately released pandemics, which is what we’re going to focus on now. Why are you worried about deliberately released pandemics in particular?
Kevin Esvelt: Nature is not trying to kill us. That pretty much sums it up. To the extent that natural pandemics are accidents, that implies that they’re not aimed at our weak spots. Nature doesn’t know where our weak spots are, because nature doesn’t know anything; nature doesn’t make combinatorial attacks. And to the extent that an intelligent adversary makes a security challenge more difficult, defending against deliberate biological weapons is simply much more challenging than defending against whatever nature throws at us.
But if any listeners are sceptical of this — and you may have heard that “nature is the greatest bioterrorist” — suppose that we, hoping to protect ourselves, identify natural viruses in animals that would cause pandemics if they spill over into people. A human terrorist could assemble and release those across multiple airports. That would cause them to spread much more rapidly than anything released at a single location, which is what would happen through natural spillover or through a laboratory accident.
So consider the omicron variant: from when it was first sequenced, in its first 100 days, it infected half of Europe and a quarter of the United States, which is on the other side of the world. Now, imagine something that was released across multiple airports to start with, and you can see how the moonshot vaccine initiatives that hope to get a new vaccine working and approved in 100 days are still going to be much too slow.
And we know that nature makes such pathogens in animals: rabbit calicivirus — which has nothing to do with Game of Thrones; it’s actually rabbit haemorrhagic disease virus — is more than 90% lethal in adult rabbits, yet it spreads very efficiently and doesn’t always kill young rabbits. If nature can do that in an animal, that means it’s possible.
And pathogens spread much more rapidly around the world in humans than they do in any animal, precisely because we have air travel. So if something like that were sufficiently contagious, and enough essential workers who provide the key services that keep civilisation working either suffer debility or death, or simply decide that they’re not willing to go out there without adequate protective equipment, then we will lose food, water, power, law enforcement, some combination thereof, and we will lose civilisation. And if that happened now, that would be the outcome.
Luisa Rodriguez: Interesting. Yeah, I want to react to two things. One is “nature’s not trying to kill us”: On the one hand, I’m like, yes, sure. On the other hand, I do have this feeling that pathogens sometimes are trying to kill us. But I guess that’s just kind of an accident. They’re not actually trying to kill us.
Kevin Esvelt: It’s a great point in that they are definitely trying to kill us in one circumstance: If they can efficiently spread from our corpse to other hosts, then they are trying to kill us. And it is true that many of the most lethal natural pathogens do spread efficiently in that way. Part of the reason why the Black Death was so lethal is because when you die, the fleas abandon your cooling corpse and spread the pathogen to others. So that is how nature selects for extremely high lethality.
Luisa Rodriguez: OK, but it’s killing us because it’s lucked upon a mechanism that works particularly well. And you’d expect that if anyone were trying, they could do as well or better.
Omnicidal actors in history [00:07:10]
Luisa Rodriguez: Which I guess brings me to the other thing, which is: I imagine some of our listeners will be confused about the idea that people might be trying to kill everyone. I think when I first heard this argument, I found it very counterintuitive and just really hard to wrap my head around. Can you help make it a bit more intuitive? Why would any individual or group want to actually do this? Kill billions of people, maybe everyone, including themselves?
Kevin Esvelt: Well, I think there’s a big difference between people who want to kill everyone and people who just want to bring down civilisation. Simplest possible example: Suppose you believe that most of the value in the world comes from the beautiful complexity of nature, that the tapestry of the world and all of the different species and the siren song of life is what’s most important. Well, humanity is currently perpetrating the sixth great mass extinction. When I was a teenager, I was a fairly radical environmentalist. I was not very sympathetic to humanity’s right to severely damage other ecosystems and extinguish the amazingly beautiful, awe-inspiring wonders that nature creates all the time. And if you’re a sufficiently extreme deep ecologist, you might reason that nature would be better off without humanity. Many, many people have expressed this attitude.
If you’ve had a particularly heinous day or you’re in a very low spot, you might think that life for most people is really not worth living, and people who think that it is must just be deluding themselves. Because if you’re depressed and you look around at the world, it’s just hard to imagine that there could be enough light in it to make up for all the despair.
Indeed, rather than the ecologists who want to preserve nature, you can take the opposite perspective and say, “I’m concerned with suffering, and I’m worried that nature has too much suffering.” And if you’re concerned with eliminating suffering, you may not be able to do much about the nature part of it, but you can certainly do something about the human part of it, and possibly humans making nature worse if you’re pessimistic about where technology is going.
Supposing you do care about humans, and you think life as a human is worth living — but obviously evolution shaped us to live very differently from the way we live now. Right now, life is confusing. Things move incredibly quickly. A lot of people feel like their basic existence is outside of their control. They feel helpless. They’re stressed all the time. There’s a barrage of negative information. Perhaps we would all be happier as hunter-gatherers. Perhaps the market is warping our inherent dignity, causing us to do things that are different from what we evolved to do, the things that would make us happy. And suppose that we will eventually even engineer ourselves, removing the things that truly make us human, to satisfy the dictates of the market.
If you view it that way, perhaps civilisation is the problem and humanity would be better off if we started over. In which case, you don’t want to wipe out all humans, but you do want to bring down civilisation. And there’s one very famous individual who thought this way: Ted Kaczynski, the Unabomber.
Luisa Rodriguez: Do you want to say more about that case?
Kevin Esvelt: So I hate to recommend a manifesto written by a mass murderer, but it was pretty darn prescient considering that he wrote it in the early 1980s. And the basic thesis is exactly what I described: He viewed the market system and technology as creating socioeconomic, sociotechnical incentives that would eventually cause us to use what he called the “immense power” of biotechnology to change who we fundamentally are, to make ourselves less than human in order to compete more effectively in the marketplace — and that we would thereby make ourselves increasingly miserable and an increasing travesty relative to what humanity should have been.
This is what got him against technology. And this is a man who went to Harvard, who became a mathematics professor at Berkeley, and then threw it all over to live in a cabin in the woods and develop his philosophy and try to thwart progress by murdering people with incredibly sophisticated mail bombs that completely threw off the FBI for over a decade.
Luisa Rodriguez: I do feel like a real concrete example of a person who thinks this way was necessary for me to actually get behind the idea that it’s one thing to have these kinds of beliefs as an ideology, and it’s another thing to want to act on them, and to actually act on them. I think it just felt like, sure, there are environmental activists who think humans are destroying the beauty of the Earth, and maybe they wish humans weren’t even here, but that’s very different from wanting to take the humans that are here, give them a horrible plague, and kill them all. That’s just such enormous suffering. I found it just really unbelievable. So yeah, I feel like the concrete cases made a huge difference.
Another one I’ve heard of is the omnicidal cult Aum Shinrikyo, which I never pronounce correctly.
Kevin Esvelt: I don’t either.
Luisa Rodriguez: It just gets me every time, even when I’m reading it. But yeah, do you mind giving the basics of that story as well?
Kevin Esvelt: So Aum was a religious movement that arose in the 1980s in Japan — which is not the kind of society that we normally think of, at first glance, as spawning extremely radical religious movements that launch weapons of mass destruction programmes and then try to use them. I don’t know how it compares to typical religious movements, but it clearly developed in the direction of a cult — albeit one with thousands and thousands of members who tithed, and a tremendously large income stream.
But as it developed, it eventually moved in a messianic direction — the founder, Asahara, did — and in an apocalyptic direction. Not that everyone should die, including members of the cult, but very much that most people were going to die: the apocalypse was inevitable, and it would be in some ways more humane to bring about the end times of the current world so that the enlightened could build a better one. That was a major part of it.
And they stooped to targeted murder of rivals and inconveniences relatively quickly, as they were explosively growing. Because first they had to hide an accidental death, which they did, and then it became, “Well, there’s this one person who is very much in the way.” And so they developed this sophisticated form of assassination, which then they botched, and had to kill more people and then cover that up. And eventually it became an adversarial thing where the apocalypse will happen, but it won’t happen soon enough, and the suffering will continue until we bring it about.
So they launched weapons of mass destruction development programmes. They bought a uranium mine, they started developing chemical weapons, they started looking for biological weapons. And while there weren’t very many that they had access to at the time, they were able to produce botulinum toxin and they tried to make enough anthrax. And at least as a passing thought, the leader of their bioweapons programme, when they went to Africa, he was hoping that they would find someone who was infected with Ebola so that he could purify the virus and spread it around, so that it would hopefully transmit and kill as many people as possible.
Luisa Rodriguez: That’s horrific.
Kevin Esvelt: This was and is the most important part. So Ted Kaczynski, you can say, well, he was a mathematics professor. He clearly appreciated the immense power of biotechnology, even in the early 1980s. But he’s a mathematics professor: would he really have been able to do something about it? Well, again, this man is a genius and he was willing to throw away everything in his life and go and live in a cabin in the woods to pursue his philosophy. Would someone like that be willing to dedicate the time to picking up the skills which someone of that capability clearly could? I think so, if it was obvious that he could have access to something pandemic-like that could threaten the stability of civilisation. He didn’t have that, so he didn’t.
Luisa Rodriguez: At the time. Right.
Kevin Esvelt: We should return to the cult, though. Because their lead bioweaponeer was one of the original disciples, and he rose high in the ranks — in part because he was a graduate-trained virologist out of the University of Kyoto.
Luisa Rodriguez: Yikes.
Kevin Esvelt: Anyone with that level of technical training today has access to reverse genetics protocols that would let them make many of the smaller viruses.
How many people have the technical ability to produce dangerous viruses? [00:16:14]
Luisa Rodriguez: Can you say more about that? What the differences in the technology then and now are?
Kevin Esvelt: So the very first synthetic virus — made from chemically synthesised DNA, delivered into a cell, and booted up to make infectious particles — was achieved in 2001. Before that, we couldn’t do it. We could isolate the DNA from an existing virus and manipulate it in the lab; that was developed for flu in the 1990s. So this was after the time of Aum, whose members were eventually arrested for murder and executed. But starting in 2001, it was feasible to make small viruses purely from synthetic DNA, which at the time was extremely expensive to order: hundreds of thousands of dollars to even have a chance at making an extremely tiny virus.
Luisa Rodriguez: OK, yeah. That is prohibitively expensive.
Kevin Esvelt: And it’s partly with that in mind that a lot of scientists decided that it was, in fact, a good thing to find victims of 1918 influenza who were buried under the permafrost, take samples, sequence them to find the genome of the pathogen, and then use our knowledge of influenza reverse genetics — which is, again, basically a virus assembly protocol that lets you go from DNA or RNA to infectious particles of the virus. So they did that, and made the complete intact virus. And they took unbelievable safety precautions. This was done at the CDC in a BSL-4 lab by one person who was doped up on anti-influenza drugs — everything we had available, all the time — continually tested, with full-on showers and limited contact with other humans when he was not in the lab doing this. They were careful.
And what they found was a bit of a surprise. Turns out influenza has eight different segments. And they had tested different combinations before and nothing was very bad; it didn’t kill the mice. Turns out you put all eight together, and boom: there’s a reason it killed 50 to 100 million people.
They decided this was important, and they should share the information with the world. So they wrote up a scientific paper describing exactly how they did it, exactly what the design was. They put the genome sequence online. And there was a bit of a controversy over this, and they decided to have it reviewed. And the editor-in-chief of Science at the time said, “This is appalling. If the government doesn’t want this information shared, they should classify it. But knowing what we know now, we would have gone ahead with it unless they threatened to throw us in jail.”
And the reason was, at least in part, that very few people could perform that reverse genetics procedure at the time, and it was super expensive. What they weren’t thinking about was how accessible it was going to become, and how cheap synthetic DNA was going to get. I can virtually guarantee you that none of them ever actually drew a cost curve of how fast they expected DNA synthesis to improve over time, or how large the scientific workforce would grow and how common those skills would become.
But now we live in a world where, when influenza researchers want to work on viruses that they’ve found samples of in nature and are interesting, or they want to study one that some other lab has come up with, they don’t bother shipping the samples anymore. Which you might be grateful for: if it’s a dangerous sample, you probably don’t want it in the mail. But it’s actually easier now to just order synthetic DNA and make it yourself.
Luisa Rodriguez: Wow.
Kevin Esvelt: That is the standard in the field. Which means that virtually every influenza lab in the world, at a minimum, has the technical capability to do this for pretty much any influenza virus that has been publicly described, including 1918.
And now I’m monologuing, so forgive me, but it’s a very important point: 1918 influenza is not very likely to take off today. They call it “the mother of all pandemics,” because all successor flu strains in humans are descendants of that one virus. Whatever was in humans before, it outcompeted all of them. It was a new virus that jumped straight from birds, as far as we can tell — probably no pig intermediate or anything like that. It just outcompeted all of the other flu strains, at least among influenza A. So every flu strain that infects us today is a descendant, in at least some of its segments, of that strain. And because of that, we have some degree of immunity; there is some cross-reactive immunity to that strain.
And in particular — because we categorise influenza strains by their hemagglutinin and neuraminidase; we call it an “HN” — it was an H1N1 strain. And one of the modern strains circulating is also an H1N1 strain. So we all have been infected, except for very young children, with H1N1 strains very recently. That doesn’t guarantee that 1918 would not take off, but it does mean that there’s a very good chance that it wouldn’t, and that if it did, it would not be as bad as it probably was back then.
Luisa Rodriguez: OK, so bad actors now couldn’t make that good of use of the fact that these eight pieces of DNA could be combined to make this previously really terrible pandemic-causing pathogen.
Kevin Esvelt: But Luisa, you’ve done a lot of work on nuclear risk yourself.
Luisa Rodriguez: I have.
Kevin Esvelt: It’s worth pointing out: suppose the 1918 strain has only a 5% chance of actually causing a pandemic if it were to infect a few people today. And let’s assume it would be far less lethal than in the past. In 1918, there were a quarter as many people as today, and it killed 50 to 100 million. Let’s assume COVID-level lethality today, direct and indirect: 20 million deaths if it did happen. So a 5% chance of 20 million deaths is an expected million deaths if anyone tries it.
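[Editor’s note: the back-of-the-envelope arithmetic above can be written out as follows. Both inputs are Kevin’s illustrative assumptions, not measured quantities.]

```python
# Expected-fatality estimate from the discussion above.
# Both inputs are illustrative assumptions, not data.
p_pandemic = 0.05                 # assumed chance a 1918 release takes off today
deaths_if_pandemic = 20_000_000   # assumed COVID-level toll, direct and indirect

expected_deaths = p_pandemic * deaths_if_pandemic
print(f"expected deaths: {expected_deaths:,.0f}")  # expected deaths: 1,000,000
```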
Now, the only reason I’m telling you this is that it’s pretty unlikely that any given terrorist group, whatever their motivations, is going to go out on a limb to try to make a virus that has a 5% chance of causing another COVID, as opposed to the sort of apocalyptic thing that they’re looking for. So that’s why I’m comfortable telling it to you. But how, in the nuclear space, do we handle security regarding access to technologies expected to kill a million people?
It is inherently difficult, and securing biology is inherently much more difficult still. But in the nuclear space, we spend a fairly ridiculous amount on nonproliferation every year, on access controls, and on levels of bureaucracy that scientists at the time thought were incredibly stifling when they were placed on them. But the whole field of physics has this sense of ancestral sin that means it’s a little bit hard to object when you’ve just annihilated a few hundred thousand people.
So in biology, of course, if you want access to 1918 influenza, you can just order it from a company that does not screen its orders, and you can follow the reverse genetics protocol that is freely available online. It’s open access. I would be incredibly disappointed in the University of Kyoto if a graduate-trained virologist from there, specialising in genetic engineering, could not obtain pretty much any influenza virus they want — and frankly, many of the others as well. There is just about no chance that they would be unable to successfully perform that protocol. And it does not require that much equipment either.
I should probably raise the issue of what’s called “tacit knowledge” here.
Luisa Rodriguez: Yeah, tell me about tacit knowledge. I have heard this raised as an objection.
Kevin Esvelt: Well, I should maybe let you explain it, because I’m biased, right?
Luisa Rodriguez: Sure. Yeah, I’ll give it a go. What I’ve heard is: To make a virus using DNA that you order would involve years of wet lab experience — so experience working with particular types of viruses, doing particular types of science-y things, and probably a bunch of other bits of context that I don’t know about — and that just anyone working in the discipline of virology wouldn’t necessarily have the context and skills to do that particular kind of synthesis of a pathogen.
Kevin Esvelt: Yeah, well said. That is pretty much exactly the objection. And anecdotes are not data, but my second-year graduate student, who had never done virology before — had done mammalian tissue culture, but only for a couple of years — needed an influenza replicon for her research. And I said, “This is a good test case. How about I don’t help you, and you just try to figure it out from the protocols online? Do the design yourself. Go ahead and do it.” She did it. And I decided to check with some of my other students: “Do you think you can figure out how to design the reverse genetics plasmids for 1918?” They all could.
Luisa Rodriguez: Can you say a bit more about what exactly that entails, just so I have a sense of actually how hard it is?
Kevin Esvelt: Oh, you shouldn’t ask me. You should ask GPT-4.
Luisa Rodriguez: Oh, god. OK, GPT-4 could probably tell me. That is terrifying. On the other hand, you’re at MIT; these are MIT students. How representative are they of the kinds of people who have the goal of releasing a pandemic-level pathogen all over the world?
Kevin Esvelt: Well, I would certainly hope none of them would consider doing that. And in fact, I’m very strict with who I allow into my laboratory, just because I do think about this stuff, and eventually the way of thinking about things rubs off. So I’m pretty careful about that.
And also, they’re obviously amazing. But let me be frank: MIT is great, but there are a lot of other great universities out there too. Are you really willing to say that the typical student at MIT is better than the best students at a major state university in graduate school? I wouldn’t say that. There’s not anything close to a particular selective filter. Many people are geographically constrained with where they can go. There are talented people everywhere.
Luisa Rodriguez: Yeah, what’s your best guess at how many people do have these skills?
Kevin Esvelt: So we’ve established that it’s definitely not limited to virologists, and you don’t necessarily need specific training in that particular class of protocol. The reason is that this is not research. If you were to rephrase your question — and say, “How many people could figure out a reverse genetics protocol for a novel virus?” — a lot of virologists are very critical of this kind of reasoning, because they say this is hard. They will cite how long it took them in graduate school to develop a reverse genetics protocol for a new virus.
That is not the question here. Nor is it something about doing novel research. Doing research in biology is hard: most experiments fail. But that is because they are experiments. If you have an exceptionally detailed step-by-step protocol that was written to allow anyone with the very basics of laboratory training to successfully achieve the goal, it’s not research anymore. It’s not an experiment anymore. It is a protocol.
Luisa Rodriguez: It sounds like GPT could literally give me the step-by-step instructions, but so that I have a more intuitive sense of how easy this protocol would be to follow, can you give me an example of some of the steps? Is it like baking? Or what are the kinds of skills that I need to have in order to be able to do this?
Kevin Esvelt: The big one is you need to be able to culture mammalian cells. And that is a form of tacit knowledge barrier, because until you’ve been trained in it, it’s just really hard to pick it up yourself without contaminating everything all over the place. And that goes double if you have a jury-rigged setup that you try to set up in your garage. So the pool of relevant people who can pick it up is, at minimum, restricted to people who can do mammalian tissue culture.
Luisa Rodriguez: And that’s like many graduate biologists?
Kevin Esvelt: That’s pretty much anyone involved in the biomedical research enterprise who is working on biomedicine for humans. You pretty much need to do mammalian tissue culture. And I’m not saying that they all could do it, but influenza is fairly easy, because it’s segmented and none of the segments is very large. So when you order the DNA online and it comes in the mail, for influenza, all of the pieces can just be there. You don’t need to manipulate them to change the DNA sequence on your own at all.
Luisa Rodriguez: And you do for some others?
Kevin Esvelt: For pretty much every other kind of virus, the companies will make something that large, but it is more complicated. So they’re much more likely to notice if you want them to make measles or something. I mean, they will: we’re vaccinated against measles, who cares? But even so, it’s very different from ordering the pieces for an influenza virus, which is just very bog standard.
Luisa Rodriguez: They look kind of nondescript. Yeah, that is unnerving. I didn’t realise that.
Kevin Esvelt: So if you want to order the pieces in sub-3,000 base pair chunks, then unless you’re working with influenza, you need to piece them together yourself. This is pretty common; we call it molecular cloning. The basics of molecular biology is stitching together DNA pieces to produce something of your choice. That said, if you want to do larger constructs, they’re harder. And as they get bigger and bigger, they get much harder.
So take a coronavirus at over 30,000 base pairs: that’s very difficult. Most virologists can’t necessarily do that, because that’s more of a synthetic biology task, a biotechnology task. But there are a lot of synthetic biologists and biotechnologists who can do that kind of thing, and are very good at it. Then the question is: have they also done mammalian tissue culture? That’s the set of people who can do that.
I would estimate that influenza is comparatively very accessible. How many PhDs in virology are there every year? You can look up that number: the NSF’s NCSES tracks it.
How many people who are not virologists, who are in other disciplines, can do it? My PhD is in biochemistry. It’s not that hard. Again, if you’re in biotechnology, you can almost certainly do it. And many people in biomedical engineering can similarly probably do it. And there are a lot more people who get PhDs in those disciplines than in virology. But it’s also true that not all virologists can necessarily do it: many of them are studying plant viruses, or fungal viruses, or even just straight-up bacteriophages, bacterial viruses. So we shouldn’t assume that all virologists can do it. But it’s probably safe to assume that there are at least four times as many people who can do it as there are PhDs in virology.
And we’re setting aside students, and master’s degree folks, and even talented undergraduates, and technicians who have been working for a long time. Let’s just focus on PhDs. In the US, roughly 125 people a year get PhDs in virology; another three times that many in the related disciplines gives you 500 a year. The US is about a third of the global total, so you’re at 1,500 a year worldwide. Assume a 20-year career in which you’re reasonably active, and you’re at 30,000 people with PhDs.
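[Editor’s note: the workforce estimate above can be tallied as follows. Every figure is a rough assumption from the conversation, not an official statistic.]

```python
# Rough count of people plausibly able to assemble an influenza virus,
# following the reasoning above. All inputs are assumptions.
us_virology_phds_per_year = 125
related_field_multiplier = 4   # virology plus ~3x as many in adjacent fields
global_to_us_ratio = 3         # the US is about a third of the global total
career_years = 20              # years of reasonably active work

us_per_year = us_virology_phds_per_year * related_field_multiplier  # 500
global_per_year = us_per_year * global_to_us_ratio                  # 1,500
active_phds = global_per_year * career_years                        # 30,000
print(f"{active_phds:,} people")  # 30,000 people
```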
Luisa Rodriguez: Wow.
Kevin Esvelt: Now, that’s influenza. With coronaviruses, you’re probably down to the single-digit thousands. And similarly for paramyxoviruses and so forth.
And a lot of people are worried about smallpox, with good reason: that virus is one that we know for bloody well sure that would take off and cause a horrific pandemic. The US has enough doses of vaccine for its entire population. So does, I believe, Israel. But other places don’t. How fast could they make them? But fortunately, smallpox is above and beyond when it comes to difficulty. It’s huge: 186,000 base pairs compared to 15,000 for all the segments of influenza or something like measles, or 30,000 for a coronavirus. So you’re up to more than 6x as large as a coronavirus. And actually, GPT-4 doesn’t know this: you need a live pox virus of some other sort to provide the proteins that are necessary for the new genome you put in the cell to boot up. So you need a live clinical sample on top of this incredibly difficult task of assembling this huge, huge, huge genome.
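[Editor’s note: for scale, the genome sizes Kevin cites can be compared directly. Sizes are the approximate base-pair figures from the conversation.]

```python
# Approximate genome sizes from the discussion, in base pairs.
genome_bp = {
    "influenza (all segments)": 15_000,
    "coronavirus": 30_000,
    "smallpox": 186_000,
}
ratio = genome_bp["smallpox"] / genome_bp["coronavirus"]
print(f"smallpox is {ratio:.1f}x the size of a coronavirus genome")
```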
So I would hazard that maybe a couple hundred people in the entire world, if that, could singlehandedly access smallpox. That’s still pretty terrifying, because smallpox killed half a billion people, estimated, in the century before it was eradicated — and that was when we had the vaccine. Still, it’s very much not accessible. So that’s not the thing that I worry too much about. It’s really the smaller ones that are much more accessible.
Luisa Rodriguez: OK, so there’s this gradient: some pathogens are much, much more accessible than others. And for the ones that are most accessible, there are something like tens of thousands of people who could plausibly create them. Are there steps besides creating them? Do you then have to mass produce them or something else? Does this get a bunch harder when you look at the practicalities, or is it not as hard as it seems?
Kevin Esvelt: So that’s another area that has tripped up the traditional field of biosecurity, such as it is. I like thinking of the field of biosecurity as nascent, because there are really very few people in it. And biology has been advancing so fast that the rules of the game are very different from what they were: so different that I’m going to go out on a limb and say that past knowledge just leads you to make incorrect assumptions about what is possible today.
In the past, people were mainly concerned with nasty things that you could aerosolise and spray over a city: think crop dusters. And this is why, if you look at the select agent list in the United States, it’s full of things like anthrax. And indeed, Aum Shinrikyo tried to mass produce anthrax, aerosolise it, and spray it over a city. Turns out that’s hard. It’s hard to make that much pure anthrax. It’s hard to aerosolise it without killing it. It’s hard to disperse it over a large area — and, you know, the wind conditions have to be right, it needs to be done at the right time, whatever.
A lot of complexity goes into all stages of that. But above all else, you do need to make a lot of it. You need large-scale fermenters, not the kind of thing that you can buy and put in a garage lab. And it’s complicated, it’s an optimisation process, and there are no protocols.
Luisa Rodriguez: I find that very reassuring. Is there a reason to think that’s going to get easier?
Kevin Esvelt: Maybe. But at the end of the day, that’s the kind of thing that can kill maybe 10⁵ (100,000) people, even if they do it right. That’s bad — traditional security people need to worry about that — but that doesn’t meet my minimum bar for “I need to do something about this.”
But if you think about a pandemic virus, it spreads on its own. So how many people do you need to infect in order to trigger a new pandemic? Depends on the virus. If it’s highly contagious, more often than not, one could be enough.
Even if it’s not very contagious, if you infect a dozen people, that is almost certainly enough if it is a pandemic-capable virus. So now, I guess the benefit of COVID is that everyone understands what R0 means, the basic reproductive number: How many people does the typical infected person go on to infect? If it’s above 1, it’s likely to take off. But of course there’s chance, there’s randomness: maybe this person will infect five people, maybe they won’t infect anyone.
And SARS-CoV-2 relied heavily on superspreaders. So any one person is pretty unlikely to infect anyone, but if you infect six or eight people, one of them is likely to be a superspreader who is going to infect a lot more than that. So it depends on how contagious the virus is and how much it relies on superspreading — with lower contagiousness and more reliance on superspreading meaning a lower chance of causing a pandemic per infected person.
But note that now we’re in a very different ballgame. How much purified virus do you need to infect four people? Twelve people at most? That’s just not very much.
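[Editor’s note: the asymmetry Kevin describes can be made concrete with a toy branching-process simulation. Each case draws its individual infectiousness from a gamma distribution, so secondary infections follow a negative binomial with mean R0 and dispersion k, where small k means heavy reliance on superspreading. All parameter values below are illustrative assumptions, not figures from the episode.]

```python
import math
import random

def poisson(lam):
    # Knuth's method; adequate for the modest rates sampled here.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

def outbreak_takes_off(initial_cases, r0, k, threshold=500, max_gens=100):
    """One negative-binomial branching process (gamma-Poisson mixture).

    Each case infects NegBin(mean=r0, dispersion=k) others; small k means
    most cases infect nobody while a few superspread. Returns True if the
    cumulative case count ever exceeds `threshold`.
    """
    cases = total = initial_cases
    for _ in range(max_gens):
        if total > threshold:
            return True
        if cases == 0:
            return False
        nxt = 0
        for _ in range(cases):
            rate = random.gammavariate(k, r0 / k)  # this case's infectiousness
            nxt += poisson(rate)
        cases = nxt
        total += nxt
    return total > threshold

def p_takeoff(initial_cases, r0, k, trials=1000):
    hits = sum(outbreak_takes_off(initial_cases, r0, k) for _ in range(trials))
    return hits / trials

random.seed(0)
for n in (1, 4, 12):
    print(f"{n:>2} initial infections -> takeoff probability ~ {p_takeoff(n, r0=2.0, k=0.5):.2f}")
```

With R0 = 2 and moderate superspreading, a single introduction fizzles more often than not, but a dozen introductions almost always take off, which is exactly why infecting “twelve people at most” is enough.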
Luisa Rodriguez: Yeah. OK, I’m back to scared again. Thank you for that.
Kevin Esvelt: And this is why the bottom line is I think the game is different now. Yeah, we still need to worry about aerosolised anthrax, because there are new technologies that could plausibly make that easier. But it’s not the kind of thing where the scientific community is deliberately making it as easy as possible for scientists around the world to obtain the relevant agent in the quantities necessary to start a self-perpetuating spread of death.
Luisa Rodriguez: So if it’s really, really transmissible, then you don’t have to overcome some of these practical challenges of producing a bunch of virus, and aerosolising it, and spreading it over a city. You might just be able to infect four to 12 people, for example. And how transmissible exactly does it have to be for it to be the case that someone could just make 10 [doses]?
Kevin Esvelt: Well, if it’s below R0 of 1, then it’s not going to take off at all, despite your best efforts.
Luisa Rodriguez: And what was COVID, for reference?
Kevin Esvelt: COVID started out around 2, we think, and then it grew. We just recently redid our estimates, but omicron is probably between 4 and 5.5, absolute upper bound of 6.8. But this is controversial. There are some people that think it’s higher.
Luisa Rodriguez: OK, so there are naturally occurring pathogens that have transmissibility that’s much higher than 1.
Kevin Esvelt: And smallpox, for example, was somewhere between 3.5 and 6 when it was around. And measles is the upper bound: measles is the most contagious virus known in humans. It’s thought to be somewhere between 12 and 18, probably between 15 and 18.
So you might say, why are you concerned then? And the answer is that today, I’m actually not all that concerned. I mean, I think it’s absurd that we don’t put some kind of synthesis screening requirements in place, given that the expected casualty toll of someone making and releasing 1918 — given the 5% chance it takes off, and 20 million deaths — is a million deaths in expectation if anyone tries: that’s detonating a nuclear weapon in a major city; that’s nuclear terrorism. We spend a lot of money on that, but we can’t even be bothered to require DNA synthesis screening.
But it’s also true that I’m not super worried today because there aren’t any good credible pandemic pathogens. That’s a terrible way of saying that — “good,” right? If there’s cheap synthetic DNA that is currently not screened, and there’s reverse genetics protocols for a bunch of viruses, including the nastiest ones that we know about, then why hasn’t someone already done it? The answer is, we just don’t know of any viruses that look like they’re really likely to cause pandemics. There are no publicly visible good candidates to use — and I expect that will change.
Luisa Rodriguez: I think you lose me at “there are no good credible pathogens or viruses” that we know of — because we have had pandemics. And so we know of pathogens like smallpox and COVID that do cause pandemics. What’s the distinction? Is it that we’ve seen these pathogens and so we have great defences against them? And so if they were deliberately released now, we’d be able to defend ourselves — whereas some new thing would be much more likely to have catastrophic consequences? Or is it something else?
Kevin Esvelt: It’s pretty much exactly that. So smallpox would be really bad, but we have a vaccine. We don’t have enough of it, but for the United States and Israel, we could just vaccinate the whole population and they’d be totally fine, because you have sterilising immunity against smallpox. Unless it was the Soviet-enhanced variety, of course — then it might be a problem. But even there, we could probably reformulate the vaccine to make it better fairly quickly, because we’ve seen smallpox, we know smallpox, we’re worried about smallpox. We’ve actually invested defensive dollars for military defence purposes to protect the entire population against smallpox in the United States. That’s amazing. That is precedent. We should lean on that precedent.
The 1918 influenza: yes, we know it was very nasty, we know it exists, we know the sequence, we can make it. But there are circulating H1N1 influenza viruses that certainly provide some level of cross-reactive immunity. So it is highly questionable whether it would cause a pandemic at all if released — and if so, it certainly would not be as lethal, because so many of us have some degree of immunity.
And COVID: Could we, knowing what we now know, and certainly with modern machine learning models, potentially engineer the next variant of COVID, the one that will outcompete all of the natural ones? Yes. Even if we can’t do that now, I would be stunned if we can’t do that within the next couple of years — and I would prefer that we couldn’t, by the way. But for COVID itself, you make the next variant and it’s a common cold, because everyone’s had it before and been vaccinated, or had it enough times that so what. Nobody cares. Common cold. Whereas COVID was bad initially because none of us had immunity.
Luisa Rodriguez: Right, okay.
Concerns around AI models [00:41:42]
Luisa Rodriguez: Could you expand on your concerns around AI models like GPT-4?
Kevin Esvelt: I have two concerns with natural language processing models, large language models.
Number one is they could expand access to existing nasty pandemic-class agents. Right now, you need some degree of lab skills in order to turn a publicly available genome into an infectious sample of virus. But we asked students in one of my classes — Safeguarding the Future, non-scientists — to leverage chatbots to figure out how to cause a pandemic. And in one hour, the three groups of students, plus the chatbots, came up with four of the nastiest viruses known — none of them especially likely to cause a pandemic, but among the most likely that we know of. The chatbots told them that scientists can access these viruses by reverse genetics, producing infectious samples from synthetic DNA constructs; that not all companies screen DNA orders to make sure you’re not ordering something nasty; and that the companies that do screen have their names conveniently listed on a website, so you can be sure you’re ordering from one that does not.
And then, perhaps even more concerning, when the students asked, “What if I am a biochemist and I don’t know how to do reverse genetics? What do I do?,” it said that you can work with a core facility or a contract research organisation that will perform reverse genetics for you. You can send them your DNA constructs, which you designed — and the LLM will again help you with the design — and they will send you back infectious samples. And it will even go into how to test whether the CRO is actually going to sequence your samples to make sure they are what you say they are.
So the upshot is the LLM taught non-scientists, in an hour, which viruses are most dangerous, how to design DNA sufficient to produce them, who to order that DNA from and who to send it to, and how to do so in ways that could allow them to obtain infectious samples without being detected.
That dramatically expands the number of folks who could plausibly gain access to potential pandemic agents. And it’s why we need to close some of those loopholes. We need universal DNA synthesis screening and we need to ensure that those contract research orgs really do sequence all of their customer samples — and not in a way such that someone who has penetrated their network can ensure that the sequencing file is replaced by a false one as soon as it appears, which is again something that the LLMs will talk to you about.
So expanding access is one risk. But the other is that we anticipate scientists will learn to program biology in ways that, used maliciously, could create worse agents than natural ones. Eventually, I anticipate AI will get as good as human scientists at doing such things. If the models are willing to tell the world how one might do that, then people will ask, and folks who are willing to misuse them will gain access. So we need to ensure that they don’t expand access. And perhaps even more important, we need to ensure that they don’t tell us how to build things like wildfire and stealth agents in the future.
Luisa Rodriguez: I mean, I’m almost bewildered that they’ll agree to do all that for you now, given that DALL-E won’t make me a picture with blood in it. How is it possible that we haven’t already trained these LLMs not to give these instructions out?
Kevin Esvelt: Well, a lot of those questions are phrased as, “I’m a biosafety researcher and I’m really worried about laboratory accidents that might cause pandemics; what are the pathogens I should be most concerned about?” or “I’m a biosecurity researcher…” or “I’m a policy analyst…” or “I’m a staffer working with a lawmaker and I want to understand what the current regulations are on DNA synthesis screenings. I want to make sure people can’t get access to pathogens. What’s the current state of affairs? How could people do this?” There is always a way to phrase it such that you seem like the good guy, and the models right now just can’t separate that.
So given that reliable jailbreaks exist, and still exist, and probably will continue to exist, we call this dual-use information for a reason: it can be used either way. And if you’re going to say the models can gain access to it, because some people with good intentions might benefit from it, OK — but then you’re also handing it to the malicious actors.
The case against trying to identify new pandemic-capable pathogens [00:46:29]
Luisa Rodriguez: OK, so the number of people who will be able to identify and synthesise new pandemic-capable pathogens is growing, and might be helped by AI systems. Can you explain why the focus is on new pandemic-capable pathogens — rather than just the ones we know of now, like COVID-19?
Kevin Esvelt: We don’t know of any candidates that we’re particularly confident will cause a pandemic. This is not for lack of trying — and that is the concerning bit. Many scientists who are brilliant and well-meaning and want to save lives and have devoted their career to saving lives [have tried] — and this is the tragedy — but they are thinking about nature. They are used to fighting nature, and nature does not ever use what you know against you.
And many of these are such good people that they just don’t natively think about deliberate misuse as even a possibility. If you’ve devoted your life to fighting pandemics, even before it became cool, the notion that someone would be so malevolent is hard to entertain: they just struggle to imagine that anyone could ever do that. Who would do that? There’s always someone.
But they don’t think that way. And so if you’re only fighting nature, you want as much information as possible pretty much always. More information is always helpful, except in some really rare cases where you get something that looks promising and it sends you off in the wrong direction. But even there, if you had more information, that would tell you that you should not go down that way. So in general, it is a great heuristic to always learn more. And so they want to know. We’re trying to prevent spillover from animals: Which animal reservoirs are the most dangerous?
Luisa Rodriguez: So that we can preempt that pandemic and create vaccines and stuff, so that when it possibly does happen, we have protection against it.
Kevin Esvelt: Bingo.
Luisa Rodriguez: That’s the idea.
Kevin Esvelt: Yeah, that is exactly the idea.
Luisa Rodriguez: That’s a lovely thought.
Kevin Esvelt: Better off in every possible way. The obvious question from a numbers standpoint is: How many pandemic-capable viruses are out there? Because if there’s a lot of them and you spot one, it’s pretty unlikely that that one is actually going to spill over and cause a pandemic. And the same goes if you have to engineer it in the lab. This was what they did in 2012 with the controversy over the H5N1 enhanced transmission studies: they deliberately created mutations and tested them for laboratory growth and in transmission in ferrets, which are a great model of influenza transmission for humans, to try to identify mutations of H5N1 — which is very lethal when it does infect a human — that could be efficiently transmitted from person to person. That is, they were fishing for viruses that, with mutations, might have pushed R0 above 1.
Because if you know, then you can say that we really need to get serious about monitoring our chickens and our pigs, and is it more likely to occur in pigs, and whatever you can imagine. Maybe we could do something, maybe that would kickstart vaccine research targeted for that particular strain, and we should include that in the yearly flu vaccine. None of that actually ended up happening, mind you, but that was the logic to it.
But there, there’s a question of: Is nature going to come up with those same mutations? So in both cases, you have this question mark. The thing you learn may not actually turn out to be relevant. And so that’s a discount on all the possible benefits, even assuming you can encourage governments to actually invest. And frankly, good luck. After a pandemic that killed 20 million people, how much is any wealthy nation in the world spending on preventing the next one? Bupkis.
Luisa Rodriguez: It is extremely disheartening. Just to make sure I understand, the idea is that they’re hoping to be able to prevent a pandemic by guessing at what kinds of mutations might lead to the most transmissible and lethal pathogens. But they might decide to publish something about that pathogen and that mutation. And so that’s a new pathogen that we don’t know about now that someone might be able to use to do a bunch of harm.
And the question is just, which is more likely? That that pathogen in the wild jumps to humans with that mutation, or that there’s a group of people out there that want to use that pathogen to hurt people?
Kevin Esvelt: Well, there has been a million-plus-death pandemic roughly four times per century — 1889, 1918, 1957, 1968, 2019 — so you got your five there in 130ish years, about every 33 years on average. So that’s your baseline natural pandemic rate that’s severe.
Even if we just count Seiichi Endo, the Aum Shinrikyo bioweaponeer virologist, we’ve only had recombinant DNA for 50 years — 20 years for reverse genetics from synthetic DNA — but let’s call it 50 years in which we’ve had recombinant DNA in which deliberate pandemics like that are even a plausible thing. One guy means 2% per year baseline historical rate. And then if you think it’s reasonably likely that someone like Ted Kaczynski might have done it — if there were an identified “we think this virus is probably going to cause a pandemic if it spills over” and maybe we know that it’s super lethal as well, like in the H5N1 case — at what point would the Ted Kaczynskis of the world decide to switch fields and get training, volunteer as a technician in a wet lab, gain the relevant skills and go for it?
But the historical rate of just baseline existing virologists — and of course, there are more than there were circa 1990 now, and there are more people who are not virologists who have the skills, so arguably the rate is increasing — but baseline historical rate, one out of 50 years is 2% per year. So you have one every 33 years for natural, one every 50 years for deliberate baseline. And then if there’s dozens of pandemic-capable pathogens out there — or at least there’s a one-in-dozens chance that you actually spot the correct one — but any one that you identify can immediately be misused, the math looks really bad. Especially because, even if you spot the natural one and guess correctly, there’s no guarantee you can convince anyone to actually direct any resources towards it: again, witness pandemic preparation now. So you may not have achieved anything beyond actually causing that virus to be deliberately misused as a weapon.
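[Editor’s note: Kevin’s rate comparison can be written out explicitly. The sketch below restates the numbers from this passage — roughly four severe natural pandemics per 130 years, and one deliberate attempt per 50 years of the recombinant DNA era — plus a hypothetical candidate count of 100 to stand in for “dozens of pandemic-capable pathogens”; all figures are his illustrative assumptions, not precise estimates.]

```python
natural_rate = 4 / 130      # severe natural pandemics per year (~1 per 33 years)
deliberate_rate = 1 / 50    # baseline deliberate rate (one bioweaponeer in 50 years)

# With N plausible pandemic-capable viruses out there, correctly spotting the
# one nature would have used is a 1-in-N shot -- but ANY identified virus can
# be misused directly, so the misuse side takes no such discount.
candidates = 100            # hypothetical stand-in for "dozens"
discounted_benefit_rate = natural_rate / candidates
misuse_rate = deliberate_rate

print(f"discounted natural-benefit rate: {discounted_benefit_rate:.5f}/yr")
print(f"undiscounted misuse rate:        {misuse_rate:.3f}/yr")
print(f"misuse term dominates by roughly {misuse_rate / discounted_benefit_rate:.0f}x")
```

Under these assumptions the misuse term dominates by more than an order of magnitude, which is the sense in which “the math looks really bad.”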
Luisa Rodriguez: Right. OK, so you have to correctly guess the virus. There might be dozens, and you have to be like, “I think it’s this one that comes from monkeys.” And you have to guess the right mutation: “I think this mutation is going to make it really bad, and that’s going to be the one we’re going to worry about. And so we’re going to make vaccines to target that.” And that particular mutation has to be the one for that vaccine research to end up being helpful.
But any single one of those pathogens, if you understand them well and publish a bunch of stuff on them, could be used by one of these bad actors. And so the odds start to look really low that it’s helpful — but much higher, relative to the benefit probability, that it’s used harmfully. Am I getting the picture?
Kevin Esvelt: That’s exactly right. There’s just this discount factor. Suppose there’s 100 out there. Your odds of guessing right on the natural side are 1 in 100. So that’s a 100-fold discount on new benefits — that is not applied to misuse because any one will work for misuse.
Luisa Rodriguez: You can just use any of them.
Kevin Esvelt: And then we haven’t even talked about how, even if you do develop a vaccine, you still have to manufacture and distribute it and all that jazz. You have to get it approved. How are you going to get a Phase II clinical trial of a vaccine against a virus that has never infected a human and might never do so?
Luisa Rodriguez: Yeah. “We just are kind of worried that maybe this will happen. So can we infect a bunch of people with this thing that’s not already in humans, and then see if our vaccine works?”
Kevin Esvelt: But you’re a scientist. You want to know and you want to help. And this is your skill: this is what you know how to do. You’re driven as much by desire to understand how these things work. They’re things of beauty, these are clockwork marvels that evolution has crafted: the elegance by which they subvert the different aspects of our immune system and take over our cells and replicate and manage to cause just the right kind of symptoms. And it’s just amazing. I completely understand the desire to know. And again, if you’re not thinking about misuse, knowledge is always worth having — if your adversary is not going to use it against you.
Luisa Rodriguez: Yeah. Are there benefits that we might be leaving out? What is the strongest possible case for this type of research? Maybe 1-in-100 chance of picking the right one is an underestimate, because actually we’ve got some pretty good reasons to think that some pathogens are more likely to jump into humans than others? Or something that makes this look better? Maybe we learn other things about pathogens that help us in ways besides creating a vaccine for that particular thing?
Kevin Esvelt: You can definitely imagine that it would better target anti-spillover efforts. You can definitely imagine that maybe you will be able to get a vaccine that is ready to go.
But the thing is, you don’t need to know which specific virus would actually cause a pandemic in order to come up with a broad-spectrum vaccine or to improve your overall anti-spillover efforts. It’s not that we’d be doing nothing to prevent spillover and prepare vaccines: surveying the viruses and figuring out which ones are out there doesn’t tell anyone how to cause a pandemic. And it does help you ensure that your broad-spectrum vaccines actually work on all members of the viral family in question. It helps you learn what animals they’re circulating in, and thereby lets you target your anti-spillover efforts.
We should really update those efforts, by the way, because there are new technologies that were not proven five years ago, when all these efforts came out, that now are. It used to be that you needed a long time to develop a vaccine. Well, Moderna can design one within 24 hours — and if you already have the factories for mRNA production, you just program them to put a different string of bases together. So we can have a lot of them much more quickly. That doesn’t mean quick on the scale needed if someone is deliberately releasing something, but it’s still much quicker than before. So the benefit of starting way in advance is smaller now, if you think you can get a nucleic acid vaccine quickly.
But more to the point, since we have that capability, and we now have things like nanopore sequencing, you can imagine equipping communities and hotspots that are at risk. Say, if there’s a concerning illness that multiple people come down with at once, help give their medical provider training on how to use this nanopore sequencer. Use it, get a sequence of the thing, and send it up. And then once we have that, we can get a sequence in one day. And then you can imagine within 10 days — I would love to see — you have 10,000 targeted diagnostic tests specific to whatever the novel pathogen is, and maybe even 10,000 doses of nucleic acid vaccine pre-approved for a Phase I/II combined clinical trial.
And because it’s a lot to ask people often in developing nations to just be guinea pigs for this, Australia is really good for regulatory approval of vaccines. You can imagine maybe we get a 1Day Sooner–type movement in Australia, wherein people will say, “Yeah, I’ll volunteer to get the vaccine if it helps encourage people who are actually at risk to get vaccinated.” Because then that will decrease the likelihood that it will escape that one geographic location and spread.
So these are all technologies that we didn’t have proven five years ago, and they’re available now. So where should we send them, where should we target them? But none of that requires us to know which particular viruses are likely to cause pandemics. The math looks really, really bad.
But here’s the thing: I’m not optimistic that we’ll be able to persuade people on this. And you might say that everything you’ve been talking about, all you’re saying is right now, the risk of misuse is pretty small, because there’s no obvious pandemic-capable viruses that are accessible. And yet you’re talking about extremely dangerous pathogens: measles-level transmissibility, 90% lethality. How would you ever do something like that?
I suspect that the scientific community will — well-meaning, of course — learn how to make pandemics much more devastating than anything natural. And it will be a combination of accident and deliberate intent — well-meaning, of course, but deliberate intent.
Luisa Rodriguez: Why accident?
Kevin Esvelt: Because techniques developed for something else accelerate research in a different area of bio all the time. This is how biotechnology works: Biotechnology is taking a useful trick that was built usually itself on top of some useful natural trick that someone discovered, and combining it with this other interesting natural trick somewhere else, with this protocol that someone else develops so that you can get this additional capability. You put them together. That is how clever biotech works. I’m primarily a biotechnologist, right?
Luisa Rodriguez: Is there an example that’s made this kind of stuff easier, like researching pathogens in particular?
Kevin Esvelt: Cheaper sequencing, for sure. And you can imagine reverse genetics just makes it much easier to gain access to them: you don’t need to get a physical sample. And easier synthesis means you can test different variants much more readily. You no longer need to make one and then study one; you can make a library of a million and see which ones work best.
Luisa Rodriguez: How can you do that?
Kevin Esvelt: So I’m an evolutionary engineer. I do directed evolution in the lab. This is how I started: I built a synthetic ecosystem to very rapidly evolve useful proteins by essentially tricking the viruses to evolve the thing of our choice for us, such that the viruses got to replicate more often, the better they performed the molecular trick I wanted them to do. And what we do in directed evolution is, essentially we say we don’t know how to design proteins very well — and machine learning is letting us get better, but even so, these things are complicated. Especially if you have multiple interacting pieces, modelling all of that is just heinously difficult.
What we can do, though, is we can do what nature does: We can create a lot of variants, we can set up conditions that will select for the ones that are best, and we can take those winners and we can make another million or 100 million or billion variants, depending on the system in question, and do it again. That is, we’re just harnessing evolution in the laboratory. And if you add AI so you can do gradient descent in some levels, you can be even more efficient, of course. But at the end of the day, you get many shots on goal, not just one. But that requires DNA synthesis to make all of those variants in parallel, so those are the sorts of things that are accelerating stuff.
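[Editor’s note: the generate-select-diversify loop Kevin describes can be caricatured in a few lines. This toy example evolves a bit-string toward a target “function” by mutation and selection; every name and number in it is an illustrative assumption, and it sketches only the shape of the loop, not any real protein-engineering pipeline.]

```python
import random

TARGET = [1] * 20   # stand-in for "performs the desired molecular trick"

def fitness(variant):
    # Toy fitness: how many positions match the target.
    return sum(a == b for a, b in zip(variant, TARGET))

def mutate(variant, rate=0.05):
    # Flip each bit independently with small probability.
    return [b ^ 1 if random.random() < rate else b for b in variant]

def directed_evolution(pop_size=200, keep=20, rounds=30):
    random.seed(0)
    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(pop_size)]
    for _ in range(rounds):
        # Select: keep the best performers (the "winners" of the screen).
        population.sort(key=fitness, reverse=True)
        winners = population[:keep]
        # Diversify: each winner spawns mutated copies for the next round.
        population = [mutate(w) for w in winners for _ in range(pop_size // keep)]
    return max(fitness(v) for v in population)

print(directed_evolution())  # converges at or near the maximum of 20
```

The point of the loop is exactly what Kevin says: you get many shots on goal per round instead of one design at a time, and DNA synthesis is what lets you make all those variants in parallel.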
But let’s zoom back out: let’s bring back physics, which is more your background. One of the most charismatic physicists in history, of course, was Richard Feynman. And Feynman has this famous quote: “What I cannot create, I do not understand.” Well, in biology, we seek to understand biology: we want to know how these viruses work, we want to understand how our immune system fights them, we want to understand what the moves and countermoves are, and we want to learn to program biology — because I, for one, don’t really relish the idea of withering away and ceasing to exist. I’m not down with that. And I’m not down with horrific diseases causing people to suffer. These are all things that we need to fix, and biotech is how we’re going to fix them.
But along the way, we’re going to learn to program biology very well, we’re going to understand what controls the evolutionary fitness of pathogens, we’re going to understand how to evade the immune system, we’re going to understand what kinds of pathogens are most lethal — ways of increasing lethality, of making them evolutionarily stable.
Luisa Rodriguez: Yeah. And you said some of that was going to be accidental, but you also said some of it was going to be deliberate?
Kevin Esvelt: Some of it is deliberate because it is sexy to be able to say, “This virus could cause the next pandemic.” And I can point you to a Cell paper published late last year that said, “This primate arterivirus is primed to spill over and cause the next pandemic. Here’s all of our molecular characterisation that suggests that it could.” And all that controversial research at the Wuhan Institute of Virology. What were they doing? They were taking natural viruses that they thought could cause the next pandemic, they were shuffling them and making chimaeras of different components of them, and then they were testing their growth potential in the laboratory.
And the controversial DARPA proposal that got turned down to insert a furin cleavage site into these coronaviruses: that was what they were hoping to do, and then they were going to measure transmission in mouse models expressing the human receptor. They wanted to know which viruses could cause pandemics, they wanted to know which ones could evade the existing immune system.
To this day, many virologists are trying to predict what the next variant of SARS-CoV-2 is going to be. So they’re collecting tonnes of data, running structural studies, running mutational studies — where they make a mutation in every residue of different existing strains and see which ones are important for recognition by antibodies that are common in current people, and which ones are tolerated by the virus.
And you put all this information together and you can do a pretty fair job of predicting which set of mutations will escape immunity and yet still remain functional for getting into our cells. And you put all that together, and if you can make the next variant, you just made something that could plausibly infect most of humanity. And then if you made that more virulent, because somebody else perhaps stumbled across some way of making things very virulent — or you can use standard virology techniques for doing that, but that’s research, so I’m not really worried about terrorists doing that — I think there’s going to be many ways of increasing the virulence of pathogens artificially. Some that nature does by co-opting them, some using different ways that nature is not going to try — because, again, nature is not trying to kill us.
But we will learn how to do that. We will publish that information, because we have a very strong prior that open data and open science are very important. And sooner or later, someone will put the pieces together. And, naive but well-meaning, they’ll say, “We should really be concerned about this. I think if you combine this, this, and this, it could cause a 90%-lethality measles.” And they will try to warn the world so that we do something about it, right?
And then there will be controversy. Would it actually work? Well, controversy in science induces journals to sit up and take interest — because if it’s a topic in the news and it’s controversial, then that means that we want to resolve the controversy. We want experiments that will determine who is right. So scientists correctly appreciate that, when there is controversy, you can get a paper in Nature, Science, or Cell — the top journals which are the best for your career.
Therefore, the incentives favour scientists identifying pandemic-capable viruses and determining whether posited cataclysmically destructive viruses and other forms of attack would actually function. That is, I expect it would be: “I think this would work.” “No it wouldn’t.” “Yes it would.” “No it wouldn’t.” “All right, I’m going to test it.” And then you get a high-profile publication for testing it. And then they would say, “Well, the other piece wouldn’t work though.” “Yes it would.” “No it wouldn’t.” “Yes it would.” “No it wouldn’t.” “I’m going to test it.”
And I have not seen any appreciable counter-incentives that could be anywhere near as powerful as the ones favouring our desire to know. Because almost all the time, it is better for us to know. And in biology, unlike physics, we tend to trust the institutions. Because at the dawn of recombinant DNA, partly because many biologists at the time had been physicists, they called a moratorium on all recombinant DNA research and then all got together to hash it out. They decided at the time that we were decades away from learning to build things that would spread on their own, and we were decades away from editing the human germline. And therefore, here is a set of self-regulation principles that we will follow, biosafety principles. And that became the basis for the NIH guidelines on recombinant DNA that have governed us ever since. Success! They were right on all counts.
That was 47 years ago. I invented CRISPR-based gene drive a decade ago. Now we can reliably make things spread on their own, as best we can tell. And I would be willing to bet that there are many other ways of doing that. Gene drive favours defence; pandemics decidedly do not. And if you happen to care that “we don’t know how to edit the human germline” — well, we can do that now too.
But have we really sat back to reevaluate Asilomar? No. Biologists just tend to assume that the system can handle itself, because historically it did. It did great. And when people tried to raise the alarm back with the first reverse genetics in 2001, right after September 11, a bunch of people screamed bloody murder that this was going to put the tools of mass death in the hands of terrorists — and nothing happened. And so scientists associated security precautions with having to take off your shoes in the airport.
So I don’t see many plausible futures in which we do not learn how to build agents that would bring down civilisation today. You get good enough at understanding and programming biology, you will learn how to do that. And we know that sufficiently nasty viruses exist in other species — and bacteria for that matter; I would not assume that viruses are the only threat, they’re just the most obvious one — and I would not assume that today we could predict how it can be done. We can’t foresee all of the different ways. We just know that in the limit, if you get good enough at programming biology, we can do anything that nature can do — and nature can do the kind of pathogen that is necessary to kill billions and set back civilisation by at least a century.
Whether we can sufficiently improve our defences [01:09:01]
Luisa Rodriguez: To what extent is it possible that we’ll come up with scientific advances that also make our defences much better at the same time, such that the risks aren’t on net getting much worse?
Kevin Esvelt: I am glad you asked. I think this is a totally solvable problem.
Luisa Rodriguez: Great.
Kevin Esvelt: It’s just that you have to frame it correctly.
Luisa Rodriguez: OK, how do you frame it?
Kevin Esvelt: This is hard for me to say, because I am ultimately a biotechnologist, I’m a life scientist, I dabble in a lot of disciplines: that’s what I am at core. But I wouldn’t bet money on any vaccines or any other clever stuff that we’re doing.
And my group works on a number of things. We’re very interested in defective interfering particles — which is like the snippet of a virus that says “replicate me, package me” and nothing else. These are very promising, because it can work against a whole family of viruses if you choose a broad origin or you give a cocktail of them and so forth. Just potentially incredibly promising.
But if I’m a malicious actor, I can just change the replicase using modern protein design tools to recognise a sufficiently different sequence that the defective interfering particle won’t work.
Luisa Rodriguez: So offence is just a bit easier than defence?
Kevin Esvelt: Offence is just easier than defence for most of biology. And so I’m just drawing from principles of cybersecurity: I think we should just assume that sequence space is diverse enough and the logistics of delivering biomedical countermeasures are slow enough — even in wealthy nations, never mind everyone else. And that’s an interesting moral issue, right? All of this stuff about, “We’re trying to learn about pandemic viruses so that we can develop vaccines” — that we know will only be available to wealthy people, but we are creating risk that applies to everyone in the world. That’s a bit of a social justice problem, isn’t it? So there’s all kinds of issues there.
But I think we have to assume that you cannot scale your biomedical interventions. And people are already going to preempt me and say, “But what about when we have DNA bioprinters everywhere, on every desk?” Fine, once you can get those things in sub-Saharan Africa, everywhere, then maybe we can talk. Maybe. But you’re still going to have to convince everyone that they should put this thing into their arm, which is a whole other separate problem.
So what’s the solution? Well, what do you do in cybersecurity if you have fundamentally insecure hardware and you’re stuck with that hardware? The real reliable solution is you air gap it. You just say, “Adversary, you don’t get to talk to the insecure hardware. There is no free information exchange with that insecure hardware.”
So if we just assume that, relative to what we will eventually be able to do with biotech, all living things on Earth are insecure hardware — but most notably us — then you just need to engineer physical defences that prevent biological information from entering our bodies unless we authorise it.
Luisa Rodriguez: How do we do that?
Kevin Esvelt: We already know how to do this! If you order a powered air purifying respirator with a HEPA filter, it is 99.97% effective at filtering particles. It’s actually better than that. That’s the weakest it is; it is at least that good, and the area where it’s worst is actually not where most infectious particles are. So it really reduces your risk of infection by about 10,000-fold. It doesn’t require fit testing. That is, it works on everyone because it’s creating positive pressure filtered air that’s going into this headpiece.
Luisa Rodriguez: But what’s the proposal? People aren’t going to wear these all the time.
Kevin Esvelt: I mean, no, of course you’re not going to wear one of these all the time, but if there’s a 90%-lethality measles going around, the obvious way to defend against something like that is to have everyone lock down, except for the people who can’t lock down because they’re involved in the distribution of food, water, power, and law enforcement.
And those people need this pandemic-proof personal protective equipment. Current versions suck: they’re loud, they’re uncomfortable, and they were designed 20 years ago. We can do much better. We can reduce the price point: if we can get it [down] enough, then we can compete with N95 masks in hospitals, get some uptake there. But frankly, militaries around the world have no excuse for failing to invest in enough of these for all their personnel. And if they question whether it’s strategically necessary, if their mission statement is something about, “We must always be able to win a war,” well, see if you think you can win a war when your personnel are all infected with something nasty. So: solvable problem.
Luisa Rodriguez: Earlier you said AI increases the risk of misuse of biological pathogens. But is there any chance it’ll help?
Kevin Esvelt: I think in the longer run, AI is also the solution, because it’s hard for me to imagine a world in which most people have access to biology that could bring down civilisation. Humans are fallible; someone would do it. Mental illness is a thing. Hostile ideologies are a thing. Someone would push the big red button if you hand out thousands or millions of them. That’s just how humanity is. We are just not responsible enough to deal with widespread access to tremendously destructive technologies.
But with AI, we might be able to build systems that are trustworthy enough. I am more optimistic that we can make trustworthy and responsible AI systems than that we can make all humans trustworthy and responsible. So benevolent AI can, in the long run, solve the problem.
So this is my challenge to everyone out there who’s working on alignment and trying to ensure the AI doesn’t kill us: It’s not just about getting it good enough that the AI won’t kill us deliberately or accidentally. You also need to ensure that the AI will keep us from wiping ourselves out. And honestly, if you can do the first two, I’m pretty confident you can do the last one. So good luck with that.
And until then, we probably do need a lot of evaluations of nascent systems by folks who have a reasonable understanding of how bio could be used to cause harm. What sorts of questions are dangerous? What sorts of knowledge are dangerous? We’ve now, out of all the life sciences papers ever published, grabbed the 1% that have the concepts that we think are most concerning with respect to potential combinations to create really nasty, novel things — and we think we can use this, combined with sets of evaluation questions, to potentially train future classifiers that will help the AI systems learn what sorts of questions are dangerous.
And I do think this is one area where we should just err on the side of caution. Do we really need to accelerate our understanding of viruses that could kill everyone, that are almost certainly never going to exist in nature, but humans could create? Is there any amount of understanding of natural viruses that are not going to exist that would justify giving access to humans who are not responsible? I think the answer there is pretty clear, but folks may disagree, and I think that’s one of the conversations we need to have.
Stealth pandemic scenario [01:16:08]
Luisa Rodriguez: OK, so that’s the case that the risk of deliberate pandemics might increase in the coming years. Let’s move on to a similarly harrowing topic and talk about the worst-case pandemic scenarios, how prepared we are for those scenarios, and how we might get better prepared.
So you’ve written a paper describing two scenarios so catastrophic you think they could cause society to collapse. So there’s what you call the “wildfire” pandemic and the “stealth” pandemic. And I want to talk about both of them, but I want to start with the stealth pandemic scenario. Can you say what happens in the stealth pandemic scenario?
Kevin Esvelt: Imagine a fast-spreading respiratory HIV. It sweeps around the world. Almost nobody has symptoms. Nobody notices until years later, when the first people who are infected begin to succumb. They might die, something else debilitating might happen to them, but by that point, just about everyone on the planet would have been infected already. And then it would be a race. Can we come up with some way of defusing the thing? Can we come up with the equivalent of HIV antiretrovirals before it’s too late?
Luisa Rodriguez: Yeah, that’s pretty horrific. It didn’t feel intuitively plausible to me until you made this analogy of the respiratory HIV virus. I guess it wasn’t that salient to me that some pathogens do have very long latency periods: unless you were being tested for it, you wouldn’t know you had this disease. You wouldn’t have any symptoms, you just have nothing. You would think you were fine and healthy and you’d go about your life, and then years later, you start presenting symptoms.
And in the case of HIV, while it’s horrific, we’ve gotten very lucky in that it’s not nearly as transmissible as respiratory illnesses. But it doesn’t have to be that way. There could be something that was much more transmissible but had this long lag.
But it sounds like you think there are ways of actually detecting these pathogens early, despite the fact that people aren’t having any symptoms. Can you explain how we do that?
Kevin Esvelt: So there’s two ways. The first way is we look for things that we think are suspicious, ways that we imagine such a thing might be created, what viruses or bacteria it might be based on — and we look for those together with signatures of engineering. So we figure this is probably not going to happen naturally, although we should be looking for it, right? The notion that NIH will fund tonnes and tonnes of research to cure or prevent HIV and basically none on detecting the next one suggests that our society is a little bit overly obsessed with cures at the expense of prevention, which we all know is better.
But we can look for suspected signatures. The problem is that that’s not reliable, because if an adversary knows what we’re looking for, they can engineer something that we won’t detect.
Luisa Rodriguez: Can I take a step back and ask how we look for these? Are we basically just screening random people for things that might cause some symptoms down the line but aren’t now? And we’re just taking random samples from the population and checking for random things that look kind of engineered?
Kevin Esvelt: That’s a great question. You can do it one of two ways. You can take clinical samples — imagine SARS-CoV-2-class nasal swabs — and then just do metagenomic sequencing of everything that’s in there. The problem is you’ll always get some of their DNA then, and therefore you’ll get some of their genome, and there’s privacy concerns because they’re individual people.
The other way to do it is to sequence wastewater. You can imagine just municipal wastewater plants, but the one we’re probably more excited about is sequencing aeroplane lavatory wastewater, because we know that all human pathogens spread through the air traffic network. So you can get a leg up on them if you specifically look for them in aeroplane lavatory wastewater.
Luisa Rodriguez: Genius. And then is that reliable? Do you have to know what you’re looking for in order to pick up on it? Or are you just like, “This is kind of a weird, unexpected thing,” and then you happen to notice it looks engineered?
Kevin Esvelt: You can look for specific signatures of particular sequences that you think will be present if you have some idea of how to build one. This is obviously a bit delicate, because if you think you know how to build one, then maybe you should not disclose that. But the obvious way to do it is, insofar as we have research into genetic engineering detection algorithms, you can just apply them to everything that you sequenced.
And you will probably come up with laboratory researchers who have some contaminant on them — engineered DNA from the laboratory, from their E. coli that they work with in the lab. You should start seeing that. You should certainly see that in municipal wastewater. Oh, I’m sorry, I shouldn’t say that. Of course, we are very careful to bleach all of our laboratory samples for at least 20 minutes before dumping them down the drain. I am sure everyone at MIT and all other universities does that, and so if you were to sequence the wastewater in the Cambridge, Boston area, you would never see any signatures of engineered DNA coming out. Never. Sorry. End of sarcasm.
But point being, that is one reliable way to do it that sidesteps all the privacy concerns. But that’s still not reliable, because the adversary can engineer around that sort of thing. If they know what genetic engineering detection algorithm you’re using, then obviously they can check the thing they’re making to make sure that it doesn’t trigger it.
Luisa Rodriguez: And what kinds of things are they checking by default? How do you detect that something’s been genetically engineered?
Kevin Esvelt: One way to do it is you look for combinations of sequences that you should not see naturally.
So one way I will jump to is my own background in gene drive. A gene drive distorts inheritance. Basically, it’s a way of engineering an organism such that whenever the engineered version mates with a wild version, and the offspring inherit one engineered and one wild, the engineered version has a copy of CRISPR genome-editing machinery, which edits the wild one to match the engineered one. And so it just cheats and it spreads on its own in the wild that way. CRISPR systems are ubiquitous in microbes, but they are not found at all in sexually reproducing organisms. And gene drive pretty much only works in sexually reproducing organisms, with very few exceptions. So if you see any DNA from the genome of a sexually reproducing organism on the same sequencing read as something that looks like CRISPR, you know that thing was engineered by a human.
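The co-occurrence rule described above can be sketched in a few lines of Python. This is only a toy illustration: the "reference" fragments below are invented placeholders, not real sequences, and a real pipeline would align reads against curated databases of Cas genes and host genomes rather than compare k-mer sets.

```python
# Toy illustration of the co-occurrence rule: a single sequencing read that
# matches BOTH a CRISPR component and the genome of a sexually reproducing
# organism is a strong sign of human engineering, because CRISPR systems do
# not occur naturally in sexually reproducing species.

def kmers(seq, k=8):
    """All overlapping k-mers of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def looks_engineered(read, crispr_ref, host_ref, k=8, threshold=3):
    """Flag a read that shares k-mers with BOTH reference sets."""
    rk = kmers(read, k)
    crispr_hits = len(rk & crispr_ref)
    host_hits = len(rk & host_ref)
    return crispr_hits >= threshold and host_hits >= threshold

# Hypothetical reference fragments (placeholders, not real sequences).
crispr_ref = kmers("GACGATAAAGGCTATCGCATTGACGAGCTG")
host_ref = kmers("TTCAGGATCCATTGAGCTACGGTAACCTGA")

# A chimeric read carries material from both; a natural read carries only one.
chimeric = "GACGATAAAGGCTATCGCATTGACGAGCTGTTCAGGATCCATTGAGCTACGGTAACCTGA"
natural = "TTCAGGATCCATTGAGCTACGGTAACCTGA"

print(looks_engineered(chimeric, crispr_ref, host_ref))  # True
print(looks_engineered(natural, crispr_ref, host_ref))   # False
```

The key point carried over from the interview is that the evidence lives on a single read: both signatures appearing in one contiguous molecule is what rules out coincidental co-occurrence in a mixed sample.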
Luisa Rodriguez: Cool. That’s really helpful. And to what extent are we currently doing this wastewater metagenomic sequencing?
Kevin Esvelt: It’s a research thing. So we are definitely doing wastewater monitoring for particular things. And we are even doing wastewater sequencing — for example, for variants of SARS-CoV-2 and influenza and a few other things, known pathogens. But the only people who just do deep metagenomic sequencing of everything that’s out there are pretty much academic researchers.
But what we’re also looking into is the reliable way — because I always want to be cautious when it comes to the possible end of civilisation, and my children dying, and me dying, and all the hopes and dreams of everyone being shattered.
Luisa Rodriguez: Yeah, yeah. Thanks for that.
Kevin Esvelt: Yeah, we don’t really want to take chances with that. And due to my efforts on DNA synthesis screening, I’ve been spending a lot of time with cryptographers. And this is an area of cultural conflict between biosecurity and cybersecurity. Cryptographers in particular make a number of assumptions going into their work. They say: Assume there is an adversary. Assume the adversary is smarter than you, better resourced than you, and is operating in the future with the benefit of technologies and advances that you don’t know and can’t imagine. And of course, they’ve had the opportunity to look at your defences after you construct them. So design accordingly.
Luisa Rodriguez: That’s a pretty different approach to my impression of the way biologists are thinking about this.
Kevin Esvelt: And even biosecurity people — which again, this is a nascent field, but come on — are still struggling with whether we should require DNA synthesis screening at all, never mind ensuring that it actually is up to date and verifiable. And what about questions of information hazards? Maybe we shouldn’t disclose everything that we’re screening for, because the adversary can both use it against us and evade it. And maybe you shouldn’t have your screening criteria on a device. Maybe, maybe, maybe.
These are all much more advanced questions where, perhaps understandably, most people in the field are just focused on, “But we haven’t even gotten screening at all!” And my point is: But if any teenage malcontent can get ahold of your software or one device off of eBay, and then endlessly interrogate your screening criteria and then write a quick algorithm that can convert anyone’s DNA sequence into something that will evade screening, they’ve just negated your entire effort like that. So what’s the point? You have to think at least a little bit about how to do it right. You have to think more than just the next step. And what’s more, you need technical advances in order to meet those other goals. So if you’re doing technical research on what you need, then you should think about those later steps and try to learn from those disciplines. Because I’ve often said that after working with the cryptographers and InfoSec folks for years, I now have the security mindset of about a three-year-old toddler compared to that — but even my three-year-old toddler self can say that you really don’t want to rely on our expectations and genetic engineering detection and similar algorithms to be reliable against a sophisticated adversary, if there is one.
Here’s the genius thing: Whatever the threat is, if it’s biological, it’s made of nucleic acids in its genome, and it needs to spread rapidly. Which means it needs to become more common in our samples and across the world in a pattern that should match that of novel variants of SARS-CoV-2 — or, on different timescales, other things. That is, we should see some pattern of exponential-like growth.
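The growth-signature check amounts to fitting log-abundance against sampling time and flagging steep, well-correlated positive slopes. A minimal sketch, with invented counts standing in for sequencing read counts (a real system would also normalise for sequencing depth and sample composition):

```python
# Minimal sketch of exponential-growth flagging: anything biological that
# threatens us must spread, so its abundance across time-ordered metagenomic
# samples should show exponential-like growth. Exponential growth is a
# straight line in log space, so we fit log-counts vs. time and flag taxa
# with a steep slope and a near-linear fit.
import math

def growth_flag(counts, min_slope=0.5, min_r=0.9):
    """Flag a taxon whose log-counts grow steadily across sampling days."""
    xs = list(range(len(counts)))
    ys = [math.log(c + 1) for c in counts]  # +1 avoids log(0)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx  # growth rate per sampling interval, in log space
    r = sxy / math.sqrt(sxx * syy) if syy > 0 else 0.0  # linearity of fit
    return slope >= min_slope and r >= min_r

print(growth_flag([2, 5, 11, 27, 60]))    # exponential-like rise: True
print(growth_flag([40, 38, 42, 39, 41]))  # flat background: False
```

The thresholds here are arbitrary; in practice they would be tuned so that known pathogens spreading at pandemic-relevant rates, like new SARS-CoV-2 variants, reliably trip the flag.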
Luisa Rodriguez: Oh, I see. So basically, if you’re doing this kind of metagenomic sequencing, even if you don’t know what you’re looking for, you’ll notice that something is increasing in frequency at some increasingly increasing rate. And you’ll be like, “That’s weird. What is that?”
Kevin Esvelt: Exactly. And if you build a system and look for those signatures, you should see every new variant of every existing human pathogen. For example, you’ll also see some weird spikes when the airline changes its food sourcing, and you start seeing plant viruses from whatever lettuce they’re serving now. So you’ll see some weird spikes and there will be some background; it won’t just be human. But the nice thing about the aeroplane lavatory is that it is almost all human samples, plus whatever the airline just fed people.
And the point is, you should see everything spreading through humans: every human pathogen, every new variant, every new mutation that they’re accumulating that is starting to spread and eventually to everywhere in the world. We should see them all. And there’s just not that many of them. There’s only 200-something viruses that are known to infect humans, period. So once you’re monitoring all of them, that’s not so many that you can’t have a human look at them.
And so even if it’s designed to evade your engineering detection algorithms, you can still have an expert human look at anything that is new. Anything, anything, anything that is new, it is worth having an expert — who is paranoid and suspicious and very good at engineering biology — look at it and say, “Do I think there is anything at all concerning about this? Is it a baseline pathogen, or even a commensal or mutualist, that’s spreading rapidly? What do we think the fitness advantage is here that’s causing it to spread rapidly? Is it doing anything unusual? Is it expected to interact with any biological system? Are there any signs of genes that would not normally be there, based on all of our other samples of things like this?” Maybe they look natural, but it’s really statistically unusual to see a gene there for this viral family.
Luisa Rodriguez: Right. It’s so odd that it’s there, that maybe they’re just evading our genetic engineering detection. Yeah, that’s amazing. That’s really amazing. Will you just have a better sense of all of the changes in different frequencies of different illnesses going around the world at any given time?
Kevin Esvelt: I’m not an epidemiologist, but I’m an evolutionary biologist. And yeah, that’s a treasure trove of amazing data on why these particular things are spreading. Now, it’s not the sort of data that I necessarily want to share with the entire scientific community. But that said, they’ll know anyway, because everyone’s already tracking all of those things, and publishing all of the sequences, and training machine learning models to predict which ones are likely to take over next, thereby allowing you to design them and so forth. Which is back to why I expect that in the long run, we are going to learn to do this sort of thing deliberately, so it doesn’t really matter. But yeah, it’s going to be an amazing treasure trove of data and understanding. And that part of me that’s just pure scientist, I really want to know, know, know, know.
Luisa Rodriguez: But your team is interested in using this to look for things that are just weird, that are unusual, that are unexpected, that are unfamiliar, that might have little bits of proteins in a place you don’t expect. And make predictions about how those things might make something more transmissible or more lethal, and then notice in advance that this seems like it might be a pathogen that could be doing this latency and then terrible symptoms thing.
Kevin Esvelt: Yep, throw everything weird looking into AlphaFold and see what it folds into. Maybe if I’m being evil, then I would probably make it look like some protein with another function, but it’s actually a bifunctional protein that I used protein design ML tools to create — and so it’s doing the thing that you would expect it to be doing, but it actually has this secondary function, which you might be able to predict, but you would need to do so using multiple fold and function prediction software tools. Not just one, because they know whichever one you’re using, so you need to use multiple ones — and you need to not disclose which ones you’re going to be using to assist your investigations, and need to be somewhat stochastic in what you’re looking for.
But suffice to say, if you’re sufficiently paranoid and you have a finite list whereby you typically get no more than one or two novel things a week, or even every day, to look at with a small team of sufficiently paranoid and skilled people, I’m pretty darn confident that, unless we’re talking superintelligent design capability, short of that, we will detect it.
And we think it’s going to be expensive. It’s not philanthropic level; it’s probably hundreds of millions a year. It depends on how sensitive you need it to be, of course: if you need to detect one in 1,000 air travellers, that’s probably doable for less than a billion dollars a year — which, on the scale of defence budgets, is pretty trivial, considering that this is one of the ways that we could all lose. So I’m pretty confident that we will get that.
And then, once it’s known that we are doing it, we can advertise it. And then if you’re the adversary, why would you even try a stealth attack? Beyond the fact that it would be hard to convince everybody that it was real — and that’s the other bit that we need to get on.
Luisa Rodriguez: Yeah, so that’s my next question. So even if you have this, it’s a pretty weird claim to be like, “There’s a new thing, we’ve looked at it and we found some things that might do this benign protein function, but might do this sinister protein function.” How hard is it then to convince whoever you need to convince — politicians, other academics, society — to actually take action? Given that no one’s going to be having any symptoms, everyone’s going to feel fine. And you’re going to be like, “But we think there’s a weird thing, and we think everyone should invest loads of money and effort to try to figure out what’s going on with this weird thing.” Is that going to happen?
Kevin Esvelt: Well, I guess one question is: Who would need to support us for you to believe it? For the most sceptical member of your family, the most conspiracy-theory-minded person, to believe it? And those are two very different levels of burden of proof.
Luisa Rodriguez: They are.
Kevin Esvelt: But the horrible thing is we don’t need everyone to believe us; we need enough essential workers to believe us such that we can protect enough people to keep civilisation running, is the horrible answer. Obviously we want to save as many people as we can. We need to provide tools that will allow them to protect themselves if they believe — even if many other people living around them don’t believe, even in the same family. That’s hard, but it’s not impossible.
And as to who do you need to believe? Well, you probably need at least a plurality of scientists, and ideally you would get near-unanimity in the scientific community. Now, scientists are like any other group of people: it’s very difficult to get 90% of scientists to agree on anything at all. We can always argue over something.
But if you can get about 90% of life scientists to look at the genome and say, “Oh my god,” that’s probably good enough. And maybe you need to run experimental tests that it’s behaving the way you expect in the cell types that you expect. Maybe you would need to track down people who are infected and get their permission to run tests on them, verify that it’s doing the things that you would predict based on all of your analyses. You certainly, at a minimum, need to convince your defence establishment.
Luisa Rodriguez: Yeah. And that seems hard. Maybe they’re a bit more paranoid, and that seems good. But is the method going to be easy enough to communicate and demonstrate to a community without a science background, that they’ll actually take a bunch of actions to get people to stay at home, quarantine en masse? Which we’ve already seen is really hard.
Kevin Esvelt: I would put most of my probability mass on the case where, looking at the genome, you can just tell.
Luisa Rodriguez: OK, that’s reassuring.
Kevin Esvelt: But that only gets you so far, right? So the bulk of the scientific community, you show this, and they’re like, “Oh my god.” And they tell all their families, “We’re not interacting with other people anymore. Sorry, you’re just going to have to trust me on this.” That’s pretty credible in and of itself, but is that enough to get a democratically elected government to actually take serious action? I don’t know. Most of the people I’ve spoken with at the policy level seem to suggest that the answer is probably no. Maybe if you do a lot of advance preparation and briefings on, “We think this is a thing. Here’s our monitoring system. You have been funding it. The whole point is that this is an attack. This is obviously adversarial. We need to find who did it. There needs to be an investigation.”
Even if you’re not going to order lockdowns — because lockdowns are almost certainly not the right thing to do anyway, just because there’s backlash and it’s not everywhere, so you don’t need to lock down everywhere, and they’re just going to make people resentful — you need to empower individuals to make their own decisions as much as possible once they’re persuaded, and ideally keep the other side that doesn’t believe from treating it as something that we’re going to use to shut down their freedoms.
It’s going to be hardest in the States, but maybe it’d also be the easiest in the States. At least in the United States, 30% of the population will believe you no matter what, because 30% will not believe you no matter what. And maybe in many less polarised countries, you might struggle to even reach 10%, but in the US, you’re basically guaranteed 30%.
Anyway, long story short, you almost certainly need your defence establishment, because otherwise you don’t have resources. But even if the civilian government won’t do anything — because imagine how costly it is if you’re a politician: your career is over if you’re wrong.
Luisa Rodriguez: Right. If you raise these alarm bells, and then…
Kevin Esvelt: And then, even if you’re right, no one’s going to be happy with you. Maybe you did the right thing, but you’re not going to win any points for having done the right thing. Everyone’s going to hate you by the time it’s shown that you were actually right. And so history might vindicate you, but you’re going to get a lot of abuse and you’re not going to be able to do anything else. It’s going to sink every other policy priority. It will be a nightmare.
But a lot of people are going to believe, and you can at least make the protective equipment available to enough people so that they can protect themselves — and you make it comfortable, easy to use, minimally impeding in your daily life. And you go out of your way to just block generic transmission in the environment. Why don’t all of our buildings have germicidal lights in all of the fixtures, together with whatever level of ventilation we can manage to put in?
Luisa Rodriguez: Right. So in the world where this goes well, we’re not forcing lockdowns, but we are empowering people with information to make their own decisions. And some people will believe and avoid people if that seems like the sensible thing, or hopefully have access to the kinds of equipment that are going to make it safe for them to do their jobs, if that seems really important. So one thing is just empowering people with information.
UV lights and PPE [01:37:42]
Luisa Rodriguez: Another thing is it sounds like using technology to make the environment safer, to make it less likely that this pathogen is able to transmit between humans super easily. So let’s talk about some of those, because I think some of those are really cool. One of them is UVC light. Can you explain the case for that?
Kevin Esvelt: Yes, there’s two forms of UVC light, really. There’s the sort of classical UVC, which was traditionally mercury-vapour lamps. So back in the day, public toilets had UVC lights — really just a germicidal lamp — above them. Once you left the stall and closed the door, that would trigger the light, and it would be on for a little while and it would sterilise everything there. And this was back in a world where liability had not taken over quite so much, and if someone was dumb enough to be in there when the light was on, when obviously there was sufficient warning, then that was their own bloody fault. That was considered acceptable.
Now, we do not live in that world anymore. But they did a lot of studies. Not just on that sort of thing, but also, you can imagine if there’s a room with reasonably high ceilings, you set up the lamps pointed basically upwards so you get just a sheet of sterilising light well above everyone’s head, so even if you reach your hand up, you’re not going to get exposed. Turns out this is pretty effective at blocking the spread of anything that relies on aerosols.
Luisa Rodriguez: Yeah, that’s amazing.
Kevin Esvelt: It just shuts down tuberculosis. It can definitely slow the spread of even something as contagious as measles, as chickenpox, things like that. But it’s not enough on its own, and that’s probably just because it’s only up there. So you need some circulation of air going up and down.
And if you’re in conversation with people, like across a table, your breath plumes basically go at each other. And this is where the other form of light comes in. So you really don’t want to stick your hand into a traditional germicidal UV. It’s not that bad; it’s actually not very penetrating, but it’s still bad — you don’t want to do it. The really dangerous light that gives you sunburns and skin cancer is actually UVB, not UVC. Even so, traditional UVC is not good for you.
But if you go lower in wavelength — below 235, 230 nanometres — you start getting strong absorption by proteins, by the peptide bond itself. There are more studies on this, and we have a paper coming out that details exactly what the research agenda is so that we can be absolutely confident in the safety: What is every possible thing that could go wrong? We need to look at all of it to figure out how high we can go. We have pretty good data for now, and the levels that are approved in the States are actually enough to eliminate 90% of aerosolised pathogens every minute.
Luisa Rodriguez: That’s incredible.
Kevin Esvelt: That’s about 10 times as good as the crazy high ventilation rates on aircraft.
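As a rough sanity check on that “about 10 times as good” comparison, a 90%-per-minute removal rate can be converted into equivalent air changes per hour and set against a typical aircraft cabin. The aircraft figure of roughly 20 air changes per hour is an assumed, commonly cited number, not one given in the conversation.

```python
import math

# "90% of aerosolised pathogens eliminated every minute" implies a
# first-order decay rate of ln(10) per minute (10% surviving each minute).
decay_per_minute = math.log(10)            # ~2.30 per minute
equivalent_ach = decay_per_minute * 60     # ~138 equivalent air changes/hour

# Aircraft cabins are often said to exchange air every 2-3 minutes,
# i.e. roughly 20-30 ACH; 20 ACH is an assumed figure here.
aircraft_ach = 20

print(round(equivalent_ach))                 # 138
print(round(equivalent_ach / aircraft_ach))  # 7
```

So under these assumptions the UV light delivers on the order of seven to ten times the pathogen-removal rate of aircraft-grade ventilation, which matches the order of magnitude claimed.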
Luisa Rodriguez: Which already seemed to be solid, during COVID at least.
Kevin Esvelt: Absolutely. So it’s possible that if you just went up to current levels, that might be enough to suppress transmission of something even as contagious as omicron. So you combine the two, you presumably get the best results: the really intense stuff up high, and then the 222 [nanometre wavelength] when necessary.
Luisa Rodriguez: The safer stuff.
Kevin Esvelt: Right now, it’s too expensive to install everywhere, but this is a solvable technological problem. Solid-state emission of 222 or the like seems to be something that we can do. There are some folks that say, “No, it generates ozone.” Well, in most rooms, when you open the window, you raise the ozone level dramatically. So I’m not super concerned about that, but that’s what better ventilation is for. At worst, you sense the number of people in the room and you turn it on when there’s multiple people. And you have microphones, and you turn it on higher when they’re talking to each other or they’re shouting or they’re singing. We know all this stuff. We know when people are highest risk: when there’s crowds, and people are shouting or talking or singing.
Luisa Rodriguez: Yeah, we have learned this the hard way.
Kevin Esvelt: We learned this the hard way. We know all of this. So I am a huge believer — as someone whose lab partly spends its time working with communities on the idea of engineering wild animals in their environment to block disease transmission — that this is the sort of thing where you need to invite anyone and everyone to express their concerns. And you listen carefully, because there may well be something in there that you weren’t thinking about. And in the meantime, you’re developing cheaper and better energy sources for generating this kind of light. And then you start saving employers a lot of money.
Luisa Rodriguez: Yeah, there was an amazing statistic I read in one of your papers that American employers suffer an estimated $300 billion in productivity losses to infectious disease each year. So businesses already have a strong incentive to install these kinds of protective lights once they’re demonstrated to be safe. Do you think that will happen kind of automatically? Is the incentive that strong once safety is demonstrated? Or do you think there will have to be a big push?
Kevin Esvelt: Well, it sure is for, say, cruise ships. There are certainly select environments where it’s definitely going to be viewed as a major perk. And to some extent for employers. It depends on what kind of employers. I mean, say you’re the type of Big Tech employer who tries to keep your employees onsite all the time and offer them perks, like free daycare and all that jazz, and you pay them on average half a million a year each. You really don’t want them to get sick. So you’re probably more incentivised to install them, and install them in the daycare for their kids, than an employer who runs a coffee shop: that is not the kind of environment that is necessarily going to be incentivised to install them sooner, until it’s reasonably well established as a prestigious thing that you can do to make your establishment safer.
And once that’s well established, then you get something of an arms race: beating out your competitors by offering this as a general perk. But we shouldn’t be that surprised at the cost, nor think of it as unbearable, because I believe total losses to fire — most of which is not actual losses to fire; it’s spending on fire safety and prevention — also sum to about $300 billion a year in the United States.
Luisa Rodriguez: Wow.
Kevin Esvelt: So since most of that is prevention, if we’re willing to invest that for fire safety, shouldn’t we do this for infectious disease? I mean, come on. I hate being sick.
Luisa Rodriguez: Yeah, no kidding.
Kevin Esvelt: I am so miserable, and I feel really bad for my partner because I am just the worst person to be around when sick. And so when one of our kids gets sick, they say, “You know what? Just stay away. Having to deal with you being sick is just so much worse than them being sick. Just stay away.” I hate being sick. Yes, she’s a saint.
But yeah, if we can get rid of infectious disease, why would we not? If you can save money and make people happier because they’re not sick all the time? We’re not going to do it because we’re afraid of future pandemics, right? We’re just not. I can talk as much as I want. I may have scared you, but in general, I’m not going to be scary enough to scare everyone into spending hundreds of billions a year on infection-prevention devices. But if it can save people money, if there is a market dynamic encouraging adoption, that’s another story.
So what we need is very rigorous safety data, so everyone’s on board. We need to know this is safe at the highest level that we think we can achieve. We need epidemiology studies showing that, yes, indeed, it blocks the transmission not just of the things we’re most frightened of from a pandemic perspective, but everything — at least everything airborne, and ideally surfaceborne as well: it’s pretty good at sterilising surfaces and so forth. And then we need the next generation of devices to make it cheaper — although, again, the market will take that over of its own accord once you show that there’s demand.
So I’m pretty hopeful of this. It’s not a near-term thing. Given a decade out — certainly two decades out, if we have that much time — you can definitely get it there: in high- and middle-income countries immediately; low-income countries are going to take a while longer, but eventually. Look at the cost curves for LEDs: we can get there. And to everyone who’s doing heroic work trying to control tuberculosis: Wouldn’t it be easier if every lightbulb sold in all the countries suffering from it had these kinds of germicidal lights?
And this is why, in the long run, I’m optimistic. Even though we will learn to program biology such that eventually you get infected, you’re toast — we just have to assume that is true: we will get that good at programming biology — we can probably use physics and engineering to just ensure that we never get infected.
And computer science as well. If there’s anyone listening who thought that we could have done better than exposure notification, one of my friends, Po-Shen Loh at Carnegie Mellon, came up with this idea. Contact tracing is the wrong way to go. You don’t want people to learn when they’re infected so you can figure out who else they might have infected so that they don’t infect more people — you want to motivate people to change their own behaviour and take fewer risks when they’re at risk.
So what you want is an app that you can open that has been keeping track of who you’ve been interacting with — so it knows, for all of those people, how many of them have reported that they’ve been infected in the last, say, week. So how many one-degree connections — whether you know their names or not — your phone knows that you’ve been around them, and it knows who they’ve been around. So it also reports how many people two degrees from you have been infected, and three degrees, and four degrees, and five degrees. Because two people in the same city can have wildly different risk levels, but the app could tell them, and then they could take whatever level of precautions they’re comfortable with — they could make that decision on their own. And with the right kind of cryptography, you can make this work in a privacy-preserving manner.
So what I’m hoping is that we can build this thing — it’s getting easier as our phones get better and better, and batteries get better, and proximity sensing gets better — and get it to the point where the Electronic Frontier Foundation, which is a leading privacy advocate in this space, is willing to say, “Yes, this is a good idea, and we are willing to endorse Google and Apple rolling it out as an operating system update come the next pandemic.” And it would be an opt-out thing: anyone could opt out if they wanted, but otherwise it would be in. The required threshold for it to start becoming effective is way, way, way lower than for contact tracing or exposure notification. And it’s really just empowering people to take the level of risk that they’re comfortable with.
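The degree-counting idea can be sketched as a small graph computation. Everything here is illustrative: the function name, the toy network, and the plain-data representation are mine, and the real proposal would compute these counts under privacy-preserving cryptography rather than over a raw contact graph on anyone’s server.

```python
from collections import deque

def infected_by_degree(adjacency, me, infected, max_degree=5):
    """Count recently infected people at each network distance from `me`.

    `adjacency` maps each person to the set of people their phone has
    been near; `infected` is the set who reported infection. A plain
    breadth-first search over the contact graph, for illustration only.
    """
    counts = {d: 0 for d in range(1, max_degree + 1)}
    seen = {me}
    frontier = deque([(me, 0)])
    while frontier:
        person, dist = frontier.popleft()
        if dist == max_degree:
            continue
        for contact in adjacency.get(person, ()):
            if contact not in seen:
                seen.add(contact)
                if contact in infected:
                    counts[dist + 1] += 1
                frontier.append((contact, dist + 1))
    return counts

# Toy network: A-B-C-D, with C infected, so C is two degrees from A.
graph = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
print(infected_by_degree(graph, "A", {"C"}))
# {1: 0, 2: 1, 3: 0, 4: 0, 5: 0}
```

Two people in the same city would see very different count profiles from this, which is exactly the personalised risk signal being described.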
Luisa Rodriguez: Right, yeah. I would have loved to have had that.
What we could do after discovering a stealth pandemic [01:48:35]
Luisa Rodriguez: Going back to this stealth pandemic scenario: We’ve got the desired plan of action from scientists and the people that we need to convince to take it seriously. We’ve got some of the kinds of technologies that you want to make the environment a bit less dangerous for people. What else does it look like for this scenario to go well?
Kevin Esvelt: I think we need a network of folks we’re calling now “expert responders.” So these are, yes, scientists, but also physicians and trusted community leaders who may have a bit of a technical background and therefore feel confident that they could evaluate the thing once we discover something that we’re very concerned about. You could have a couple of tiers. You bring it to the first tier: Do they agree? Bring it to the next tier.
But you would initially issue a preliminary warning saying, “We discovered something concerning. It looks like it’s engineered. It looks like it might be harmful. We don’t know any more.” We just decided you’ve probably got to be transparent with everyone: “Yes, we spotted something of concern, and you might want to be careful. It’s very uncommon still, but even so, it’s out there. And we’ll let you know when we know more.” And then follow up with the investigations as we gain more data, identify people who are infected, study them and so forth, and then communicate the findings to essentially ratchet up the warning level the more and more confident we are that this is in fact a stealth event.
And these people can also help convey the possibility to their own networks in advance. This is the thing. How are you going to convince your country’s defence establishment that they need to do something once they detect it? Well, it’s a heck of a lot easier if it’s been in their briefings for the last several years as, “This is a thing. And by the way, you’re helping fund the detection system. But note that it’s going to be a problem because many people are not going to believe. This is going to be a conspiracy theory, misinformation-fed mess. Maybe you have to deal with someone deliberately trying to cry a false alarm in advance because people are trolls that way.” And again, assume there is an adversary. Assume the adversary.
Luisa Rodriguez: Wow. Yeah.
Kevin Esvelt: So you have to do that, but you have to build the network of trusted folks with relevant expertise from different disciplines and connections and trust in different communities in advance.
Luisa Rodriguez: It’s interesting that so much of the problem is the sociological side. It sounds like we’re making good progress on the science side, but then how we get the people to do the things sounds extremely challenging. But it also sounds totally right that if you’ve been in conversation with the defence community for five years, you’ve shown them what your system is like and they’re bought in. And you’re like, “At some point we’re going to come to you and we’re going to say, ‘There’s this pathogen that we found and it looks really bad,’ we’re going to need you to have a plan already” then they might have a plan. And that plan should make things go much, much better than if they were just, one, in some disbelief, and two, just totally surprised.
Even in that case, which is like the best-case scenario for this world, I’m still trying to wrap my head around how absolutely horrible it would be for anyone who is infected to know that you might well have a lethal virus, or a virus that’s going to be horribly crippling in some way, even if it’s not lethal. To not know when you’re going to start presenting symptoms, to not be able to see any of your loved ones for fear of transmitting this terrible pathogen, potentially killing them or crippling them in the same way. It’s just awful. It’s unimaginable. So I can’t even really begin to empathise with what that’s going to be like for people.
Kevin Esvelt: Yeah, that’s precisely why we want to show that no matter what they do, they will not succeed in bringing down civilisation. They can kill a bunch of the poor people who don’t believe — and some of those who just get unlucky, even if they take it seriously — but they will not succeed at the kind of harm that anyone trying this sort of thing would presumably be after.
Luisa Rodriguez: Right. Being so prepared that we’ve deterred them.
Kevin Esvelt: That we can deter them. Because we can’t protect everyone, right? I’m ultimately optimistic that this is all the sort of thing where, when you consider technology as a whole, it may well be defence-dominant relative to biological offence. Within biology, I think it’s unfortunately offence-dominant. But if you bring in the physics, the engineering, the computer science, and all these other tools, I think we can actually get a handle on it all. But never perfectly, especially in the stealth scenario.
Although it’s worth noting that if we fail, and we get very unlucky and there is a stealth scenario, it would never work again: everyone would believe it the second time. But that first time, as you said, for the folks who are infected and believe — and again, you want them to believe so that they take precautions and don’t go and infect other people — you have to give them hope.
Luisa Rodriguez: So is there hope? What hope do they have?
Kevin Esvelt: Sure. I mean, HIV is not a death sentence anymore, right? Sure as heck was back in the day, but it’s not anymore. And this is one where it’ll be the easiest to convince the bulk of the scientific community. It’s been inspiring how many people from every discipline just dropped everything to fight COVID, so I think it’s very possible that you could just have the entire research enterprise pivot to figuring out how to defuse the consequences of infection, whatever they are. And so there’d be as good a chance as you could imagine that we would come up with something.
Luisa Rodriguez: OK, so the whole scientific community would be mobilised to, if possible, understand the thing and find a cure — and hope that, one, it’s curable, and two, that they find it in time. I guess they don’t even know how much time they have, but in the very-best-case scenario, maybe there is a cure and maybe the number of people seriously harmed isn’t even that high.
Kevin Esvelt: But that does bring us to the last point about the stealth scenario: The scientific community is not necessarily going to be willing to just go into their labs like normal, even given this motivation — because they believe that it’s out there, and they would be at risk and putting their families at risk. So this underscores the importance of ensuring that there is good enough protective equipment and healthy buildings initiatives to block transmission in buildings, perhaps starting in research labs.
One of the things my lab did during COVID is, very early on, I figured this is obviously airborne. And we knew pretty early on that there weren’t as many superspreading events in aeroplanes as you would expect, and aeroplanes manage one complete air exchange roughly every three minutes; therefore, one complete air exchange every three minutes is your target. What would it take to get our laboratory’s rate up to that? And we just ordered 20 consumer-grade HEPA air purifiers and installed them in the lab and ran them full blast and had it as safe as planes, and then wore masks. And sure enough, we had zero infection events.
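The purifier sizing behind that anecdote is easy to reproduce. The room volume and per-unit clean air delivery rate (CADR) below are assumed round numbers chosen to illustrate the calculation, not the lab’s actual specs.

```python
import math

# Target: one complete air exchange every 3 minutes, i.e. 20 ACH.
room_volume_m3 = 500            # assumed lab volume
target_exchange_min = 3         # minutes per complete air exchange

required_flow_m3_per_hr = room_volume_m3 * (60 / target_exchange_min)

# A typical consumer-grade HEPA purifier on full blast; assumed figure.
purifier_cadr_m3_per_hr = 500

units_needed = math.ceil(required_flow_m3_per_hr / purifier_cadr_m3_per_hr)
print(units_needed)  # 20 under these assumed numbers
```

The same two-line calculation works for any room: divide the required airflow (volume times exchanges per hour) by the CADR printed on the purifier box.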
Luisa Rodriguez: Wow.
Kevin Esvelt: But you’re going to need to do something like that, ideally in advance. And again, that was with COVID which wasn’t all that serious to young and healthy people — although we were pretty scared at the beginning, because everyone was and we didn’t know for sure. But if it’s something that is much worse than that, you’re going to need enough reliable protective equipment for people to go out there.
And this is also true: The better you are at persuading people of the risk, if they’re essential workers, then you need protective equipment to persuade them that they can still go out there and keep everyone alive. Because I feel like that is one area where COVID taught us that some people are more essential than others. But we defined “essential” at a level that basically let society continue on more or less as it had — with some restrictions on the sides, a few inefficiencies, but otherwise basically that.
We need to be a little bit more serious about it. We need enough reliable, no-fit-testing-required protective equipment like current PAPRs, but better and cheaper. Ideally we’d have enough for everyone at the outset — that’s the way any nation can just say, “Yes, we are totally ready for whatever comes. If it’s stealth, we may not be able to persuade everyone to wear it, but it will be available for everyone.” But if you aren’t willing to put in that kind of investment, you really need to know who needs it. And this is more important for the other scenario.
Wildfire pandemic scenario [01:57:21]
Luisa Rodriguez: OK, let’s move on to the wildfire pandemic scenario. Can you describe what happens there?
Kevin Esvelt: Wildfire is fairly simple. There is a pandemic so contagious that we can’t stop it. And although COVID showed us that most of society can in fact stay home and avoid getting infected, in extremis, there’s quite a lot of people who can’t. The people who need to ensure the continued distribution of food, water, power, and law enforcement: those folks still need to be out there. Some of them need to interact with other people. Any pandemic agent that is contagious enough to spread through those people and take them out will disrupt essential services, and society will collapse.
Luisa Rodriguez: Yeah, that is terrifying. I guess it wasn’t the case that COVID was able to do that. Is it basically because it wasn’t transmissible or lethal enough? And if so, how transmissible and lethal would the thing have to be?
Kevin Esvelt: That’s a great point, but I think it’s pretty clear that COVID was in fact transmissible enough — because it did ultimately infect everyone. That’s in part because many people were not taking it seriously. If there was something that was, say, 50% lethal, I think people would take it much more seriously, so you can argue then that we would perhaps adopt behaviours that would prevent infection among those essential workers. But we should also keep in mind that contagiousness levels go much higher than the omicron variant of COVID.
Again, our estimate for omicron is that it’s probably somewhere between R0 of 4 and 5.5, with an upper bound of 6.8, and measles has been estimated to go as high as 18. So some models have suggested that even if everyone wears an N95 mask all the time, perfectly, omicron would still infect people, let alone measles. So that suggests that there are viruses and bacteria that would end up infecting essential workers — and certainly essential workers in places like meatpacking plants that did not have adequate infection control measures or anything even close.
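One very simple way to frame the “even perfect N95 wearing might not stop omicron” point is to scale R0 by the fraction of aerosols escaping the infected person’s mask and the fraction penetrating the susceptible person’s mask. This is a strong simplification, and the 60% per-mask real-world efficacy is purely an assumed number for illustration, not a figure from the episode or from the models mentioned.

```python
def r_effective(r0, outward_efficacy, inward_efficacy):
    """Crude model: transmission scales with the fraction of aerosols
    escaping the source's mask times the fraction penetrating the
    recipient's mask. Ignores distance, duration, and ventilation."""
    return r0 * (1 - outward_efficacy) * (1 - inward_efficacy)

# With an assumed 60% real-world efficacy per mask, omicron at the
# upper-bound R0 of 6.8 stays above the epidemic threshold of 1:
print(round(r_effective(6.8, 0.6, 0.6), 2))  # 1.09
```

Since measles has been estimated at R0 up to 18, the same calculation leaves it far above 1, which is the point about masking alone being insufficient for the most contagious agents.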
Many of these essential workers are some of the most vulnerable members of society, and yet they’re the ones who literally keep everything running. They are the ones who keep the lights on, the food on the table, the water in the taps, and order in the streets. You can imagine that you don’t necessarily need the police per se (if you’re willing to declare martial law and have the military do the same thing), but you do need someone to be handling that.
So the defence against wildfire is very straightforward: You need enough units of pandemic-proof personal protective equipment that don’t require fit testing to be sent to everyone who is going to need it — in the sense of the people who really do need to go out there and do their jobs or everything falls apart. All those people need protective equipment. If they have it, and all of the people who make the protective equipment have it, then we can weather the initial surge while everyone else locks down.
Then we need to have enough protective equipment for the next group of essential workers. The primary essential workers are the ones who directly deliver those key services. Secondary essential workers are those who repair the equipment that the primary ones rely on, or produce the kinds of supplies that the primary workers need. That is, the secondaries aren’t needed immediately when the pandemic hits and everyone is terrified — because, again, this is very different from COVID: if you get it, you are very likely to die. Just a wildly different setting; people would take it way more seriously. And the risk, in fact, is that too many people are no longer willing to go out at all, under any circumstances — because they quite reasonably believe that if they get infected, they will likely bring it home to their families, and then their families will die too.
So we just need enough for all the primaries, and then we need to ensure that we produce it fast enough to deliver it to all the secondaries in time for when their services are required.
And then the complicating bit is what do we do about the group that we call “lifesaving workers”? Because normally, historically, we have considered “essential workers” to be those required for the economy to run basically as normal. That’s how we treated it in COVID. So this is why I would venture so far as to say that nations’ lists of essential workers that they have today are utterly useless. I mean, I would be delighted if they used them and said, “We just need that many pandemic-proof PPE units for all of them.” That’d be great. But assuming that nations are not willing to look that far ahead, you really need to know who are the primaries, who are the secondaries.
And the really harsh truth is, if you’re trying to ensure that civilisation survives, you don’t necessarily need medical workers. You don’t need doctors and nurses and physicians’ assistants and all of the support that they require. Nor do you need elder care, nor do you need social workers, nor do you need any of those things. Because if those people aren’t there, lots of people are going to die. And that would be horrific and tragic, but from a very cold-eyed, cold-hearted perspective, that’s better than almost everyone dying — which is what happens if you lose the truly essential services.
So a sane government will invest in enough units for primary and secondary essential workers, and all the lifesaving workers — but if they don’t, we at least need to know who those primaries and secondaries are, and at an absolute minimum, ensure that we can get enough units to them quickly. Now, the United States has this Strategic National Stockpile for essential medical goods and medical countermeasures and all kinds of disaster-preparedness type stuff. Bluntly, I don’t want the Strategic National Stockpile to have the pandemic-proof PPE. I mean, I would love for it to have it stockpiled, but they’re not very good at getting it out of the stockpile and into people’s hands — whereas we know for a fact that the private sector can do that reliably, probably on a next-day basis, but certainly within three days.
Luisa Rodriguez: Amazon Prime.
Kevin Esvelt: Amazon Prime. So we know we can deliver whatever to whoever very, very quickly using some services in society, and it is not obvious to me that the Strategic National Stockpile can do that. Now, perhaps they should just talk to Amazon and say, “Here is where it’s going to be stored. You need to get it into your distribution network for delivery within five days, absolute maximum.” Fine.
But the point is, we need to know who are the essential workers. We need enough “P4E” units — pandemic-proof personal protective equipment, P4E for short — for all of those at a minimum, and preferably for all lifesaving workers, and have lists of their addresses. And of course, “essential workers” includes all the folks who are going to be doing the deliveries, because everyone is going to need food. Are people going to need to go and pick up their food, or can we do online delivery?
And this is another important aspect: Not everyone who works in a sector that is primary essential necessarily needs P4E — because few enough people work, for example, on ensuring that the water keeps flowing, that they don’t necessarily need to interact with other humans. And if you can arrange for distribution to be done in a way that the drivers don’t need to interact with other people, that similarly cuts down on the number of P4E units that are needed. That said, people could very reasonably say, “Look, I know you say I don’t need to interact with anyone else, but I am not leaving my house without protective equipment.” And if we think that’s how people are going to respond, we need to give them protective equipment — because honestly, we should be doing it anyway.
But if we have that — if we have the lists of who needs it, and we have the equipment, and we have stockpiled enough materials so that we can last until we have equipment for the additional groups of workers who are needed to repair the essential equipment and provide the supplies and stockpiles of the new stuff where we don’t have enough in reserve — then we’ll be fine. Wildfire is an obviously solvable problem using our current capabilities and current technologies.
How much does it cost? Well, it depends on how low you can get the price of P4E. I’m pretty confident we can get it down below $250. But the trick is we also would ideally want it to be comfortable. And of course, it needs to be reliable and it needs to be convincing. People need to believe that it will work. Because if people don’t believe it works, it doesn’t matter whether or not it actually works: they’re not going to go out there and keep everybody alive. Or probably not. We can’t assume that they would.
Luisa Rodriguez: Yeah. I do have this intuition, that I think just comes from the fact that people do life-threatening jobs every day, that there will be some subsets of people who either won’t believe it’s as bad as it seems, or who will be willing at a price to do their jobs. But then I’m very sympathetic that we should prepare for them not to be, so that we don’t end up horribly surprised and unprepared.
Kevin Esvelt: And it’s a bit ironic, because those people are potentially saviours in the wildfire scenario, and they’re the people who are contributing to the problem in the stealth scenario.
The other thing is that we do have some historical data to rely on here: we can look at the SARS-1 outbreak, about 10% lethality. And there was a lot of pressure on nurses and doctors, especially the ones who had families and young children, to not go to work, and you definitely saw a bias towards the young and childless as the ones in the wards. So at 10%, that’s what you see. How high does it need to be to be a wildfire? Well, I don’t know, but it would need to debilitate enough of the essential workers that services would collapse. That’s the only way that you actually lose civilisation.
Luisa Rodriguez: How hard is it going to be to convince people not to give lifesaving healthcare workers P4E so that they can go on doing lifesaving work? That sounds like the kind of thing people are going to find really, really objectionable.
Kevin Esvelt: I totally agree. It’s just that people are bad at making tradeoffs. And in any scenario where you can afford to, you absolutely should give them P4E. There’s just no question, so I am not for a moment going to argue it.
I’m just going to point out that suppose that we fail to bring down the cost of P4E, and you just need to buy it now. Call it $1,000 a unit: How many units are we going to buy? The United States, in terms of primary essential workers and then the very near-term secondaries, probably can get away with 20 million or so. So there’s $20 billion. Last year, for context, Congress gave the Department of Defense $30 billion more than they asked for — so we could have just used that bonus that Congress handed the DoD to completely immunise the United States against wildfire pandemics.
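To make the back-of-the-envelope arithmetic here concrete — an illustrative sketch only, using the rough unit price and worker counts from the conversation:

```python
# Back-of-the-envelope cost of stockpiling P4E for essential workers.
# The $1,000 unit price and the worker counts are Kevin's rough figures.

def stockpile_cost(workers: int, unit_price: int) -> int:
    """Total cost in dollars of buying one P4E unit per worker."""
    return workers * unit_price

primary = stockpile_cost(20_000_000, 1_000)       # primary + near-term secondary workers
with_medical = stockpile_cost(50_000_000, 1_000)  # adding the full medical workforce

print(f"${primary / 1e9:.0f}B for primary essential workers")         # $20B
print(f"${with_medical / 1e9:.0f}B including the medical workforce")  # $50B
```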
That’s why I mean this is a totally solvable problem. And then figuring out who needs it is not super challenging. We did it on the fly for COVID, just with much reduced stringency in terms of who counted as essential. We can figure all this out. It’s not that much effort. It’s not that expensive. But if you wanted to get all the healthcare workers now, that’s like 16% of the workforce on top. So there you’re looking at probably more like 50 million [workers] instead of 20 million, if you want to get all of the medical workforce and the folks who support them. That’s just a lot more people. And that’s if you include social workers, elder care, all of that jazz.
So should we do that? Yes, absolutely. It’s just that’s an extra $30 billion. Should we spend that? Yes, yes, we should. But if we’re not, then it’s really important that when the time comes, we recognise the fact that you ship the units to the most essential people — and those may not be medical workers.
Now, you can always make an argument in the other direction, and say that if people believe that the hospitals are there to treat them, then they’re more likely to go to work. That’s fine. Maybe that’s true. I don’t understand human psychology. But then I tend to assume that against a competent adversary, the healthcare system is not going to save you — and that’s, again, somewhat different from the folks who think about more traditional, particularly natural-like threats, not enhanced. They tend to assume that the medical system will be able to do something.
Although it’s worth noting that we really struggled to figure it out in the early days of COVID, and COVID was very mild, but eventually we got things running and could help. But against something more serious — where we don’t have medical tools, specific medical countermeasures available yet — I would tend to assume you get infected and that’s potentially it. Because if that’s not true, you’re probably not talking about a wildfire pathogen to begin with.
Luisa Rodriguez: It sounds like you don’t have some exact percent lethality that you need to have to definitely be in a wildfire scenario — where people are unwilling to go to work, causing something like civilisational collapse. I’ve heard something like the reason we haven’t seen pandemics that have both high transmissibility and high lethality before in a way that causes this kind of particularly horrible situation is because those things come with evolutionary tradeoffs. Is that right?
Kevin Esvelt: That’s probably right for some pathogens and not for others. Certainly the Black Death had both. Certainly smallpox has both — or at least the variola major strain is 30% lethal and R0 between 3.5 and 6. So is that in wildfire territory? That’s the only one, though, that we label as being probably transmissible enough — because the Black Death, even if it weren’t susceptible to antibiotics, was just not transmissible enough in the modern world.
So the only one we know about is smallpox that we think would possibly be wildfire level today, at 30% lethality. And it’s worth noting that the Soviets almost certainly enhanced it, to the point where when there was an accidental outbreak in the Aralsk region, out of 10 known victims, the three who were unvaccinated all died and it was transmitted efficiently by vaccinated people, which wild-type smallpox does not do. So clearly it is possible. And again, this caused an outbreak. They managed to contain it: they shut down all the trains, they got it under control.
But we have to assume that you can go from an existing natural thing to something higher. And again, that is still sort of playing by nature’s rules, using natural-like things. And once we get good enough at programming biology, such that these other capabilities can apply, we just don’t know what is going to become possible. I would not assume that whatever natural tradeoff exists between contagiousness and virulence is necessarily going to always apply. That is one of the things where, if you’re governed primarily by natural selection, then for many classes of pathogens that appears to be a thing — though probably not for all of them, as best we can tell; it’s not a hard-and-fast rule.
And what’s more, even if there is a pathogen that is not evolutionarily stable — that is, mutants will accumulate that will, say, reduce the lethality over time — that doesn’t mean it can’t crash civilisation first. Because it doesn’t take that many transmission events for a sufficiently high R0 virus to go from release across a bunch of airports to infecting enough essential workers to bring down civilisation. That’s just because if you have a high enough multiplier of every person infects six or eight additional people, you don’t require that many transmission events in that chain until you get to very large numbers.
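The point about short transmission chains can be sketched numerically — an illustrative calculation, with seed and target numbers that are assumptions rather than figures from the conversation:

```python
import math

# With every case infecting R0 new people, cases grow as seeds * R0**n,
# so very few generations of transmission are needed to reach huge numbers.

def generations_to_reach(target: float, r0: float, seeds: float = 100) -> int:
    """Smallest n such that seeds * r0**n >= target."""
    return math.ceil(math.log(target / seeds) / math.log(r0))

for r0 in (6, 8):
    n = generations_to_reach(100_000_000, r0)
    print(f"R0 = {r0}: about {n} generations to reach 100 million infections")
```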
Luisa Rodriguez: Right. Just to make sure I understand the evolutionary tradeoff, is it basically at some high enough level of lethality, it can’t actually spread very far, because it’s killing people before it spreads?
Kevin Esvelt: Yes, it seems to be linked to whether or not it kills you before you have a chance to transmit it. If the transmission window ends and only then does it kill you — stealth is an extreme version of that — then the tradeoff imposes no limit. Some known pathogens often kill you after the transmission window is mostly closed, so those ones don’t seem to be particularly subject to the tradeoff.
And you can certainly imagine selecting viruses for that particular trait. Or bacteria, for that matter: we shouldn’t just assume that the threat is only viruses, although that’s certainly the near-term and accessible threat. You could have a bacterium responsible for a stealth or a wildfire scenario; it’s just that most bacteria tend to be susceptible to antibiotics, so we tend to have better medical defences. But that doesn’t help you against a stealth pandemic, because you don’t know to use them unless you’ve detected it. But against a wildfire, it would make a difference. That’s why we’re not particularly concerned about, say, a souped-up Black Death.
Luisa Rodriguez: Got it. That’s helpful. It sounds like it is a bit of a tradeoff, though not a rule. And to what extent does engineering change that? You’ve made it sound like it might change the extent to which the tradeoff keeps existing.
Kevin Esvelt: Here is where I have to acknowledge there’s a difference between my view and that of traditional biosecurity researchers. As an actual biotech practitioner, my view is that if nature usually respects a constraint but not always, then that constraint definitely doesn’t apply to engineering. A lot of people would disagree with that, but from the engineering perspective, that’s just how it is.
If nature has some way around an apparent restriction, we can absolutely leverage that, and probably come up with more ways around it. If nature flat-out never does something, that does not mean that we can’t do it: that just means it’s not necessarily going to be trivial. It might be challenging, it might be impossible, but I would not assume that we can’t do it — because nature is subject to fundamental limitations on how it discovers things, how it samples mutations, and the number of possible ways it can combine them in its discovery strategy that just do not limit us.
Luisa Rodriguez: And in the case of smallpox, do we know why it is less limited by this tradeoff? Like what secret evolution has found there to make it both very transmissible and very lethal?
Kevin Esvelt: That is a great question. I am not a deep scholar of smallpox, so I don’t know the answer.
Luisa Rodriguez: Fair enough.
Kevin Esvelt: My guess is that most of the lethality is somewhat delayed until after the transmission window. But it also doesn’t seem to be one of those ones that transmits before any symptoms; it just does most of its transmission before the kind of symptoms that get away from you.
Luisa Rodriguez: Right, OK. That makes sense. So that’s how this kind of thing might be possible, and probably made much easier if you’re aiming right at it on purpose.
Kevin Esvelt: And there are some additional complications, such as in the example of the Aralsk incident, where it was transmitted by vaccinated folks, which wild-type smallpox does not do: smallpox is one of those where normally you expect sterilising immunity.
But the point is, whatever they did to it enhanced it to the extent that it could spread — it could still replicate and transmit and cause symptoms — in vaccinated people. And presumably because of those enhancements, against someone who was not vaccinated, it just outright killed them reliably.
That suggests you could have a scenario where an adversary might release agents of multiple levels of stringency: that enhanced strain might not have spread very efficiently in an unvaccinated population, because it would have killed people too quickly, but it would spread efficiently in a vaccinated population. So you could imagine releasing strains of differing levels of severity under the assumption that we will vaccinate as many people as we can, and then there would be a more virulent version that could still hit the vaccinated folks — and if they happen to pass it to someone who is not vaccinated, boom: reliable death. And then there could also be a version that the vaccine might stop, or at least slow and be effective against.
And that, I guess, is the last point: Do not assume that there is only one agent. Because against deliberate actors, why would they stop with one agent?
Luisa Rodriguez: Fair enough. Yeah, that’s horrifying.
Are there approaches to defending against this wildfire pandemic that you haven’t mentioned yet that seem particularly important?
Kevin Esvelt: I think wildfire is pretty simple.
Luisa Rodriguez: Just not let anything reach people, and make sure that the really important primary workers working on food and energy and water get to keep doing their jobs.
Kevin Esvelt: Yeah, that’s exactly it. And the complication comes in when you’re designing P4E and investing in it: Are you optimising for wildfire, where people may not care much about how comfortable it is because they’re sufficiently terrified they’re going to wear it anyway? But in a stealth scenario, you need to make it as unobtrusive as possible: you need to make it comfortable, you need to make it stylish. I would love to see fashion shows where competing designs for P4E get teamed up with prominent fashion designers, and you have runway shows and all that jazz. That’d be great. That’d be great PR that the stuff exists.
And you need some events. Imagine, for a particularly hilarious one, get some celebrity singers, and then get a bunch of their fans and have them do a joint concert, where the fans get to sing with the celebrities, but the celebrities all get the P4E and only half the fans do. And you’re going to pump the room full of a common cold virus and then record the results on reality TV. Kind of extreme, but I think it would get the point across. Does the P4E really work? Well, nobody wearing it should come down with that common cold and everybody else should, and that would make it pretty clear that you’re OK.
People have just got to believe that the suit that arrived on their doorstep will work, and it needs to reliably work for them.
Luisa Rodriguez: I guess for both wildfire and stealth, I feel like you’ve really driven home the point for me that we’ve got a lot of scientific solutions, but there’s this huge challenge on the social side that still sounds like it requires a bunch of thinking. Or maybe the thinking is also kind of done, and we just need to convince many, many people who will be incentivised not to be convinced. But hopefully we can chip away at that.
Kevin Esvelt: I’m not even sure it’s all that negative. Because, again, start with the defence, right? They can’t afford to let active military personnel get infected with a debilitating virus if there’s a possibility that an adversary was behind it. So I would argue that they need P4E for all military personnel. There’s just no getting around that.
The standard respirators that they have for Chemical and Biological Defense — Google says the Department of Defense has 770,000 of those — it’s not nearly enough for all enlisted personnel. And you really don’t want to wear those for more than eight hours at a time. In fact, you don’t want to wear them at all; they’re really uncomfortable. So for extended tours of duty, with multiple eight-hour stretches of use, you do not want to have to rely on those. And if you have to be deployed in a unit, you don’t get to take it off after eight hours, right?
This is just not a solution for a wildfire scenario. So if it’s possible that an adversary could engineer something that transmissible, you just need to protect your enlisted personnel. So all militaries just need to invest in enough P4E units, I think. You just can’t assume that bio is not going to be a potential vector of attack anymore. We’re getting good enough at programming it, and despite the norms set by the Biological Weapons Convention, some countries are almost certainly violating it. So you’ve got to invest. And relative to a standard military budget, it’s just not that much money.
Once you do that, you have law enforcement covered, because you can always declare martial law and have the military cover law enforcement. And then you’ve established a market for P4E and some pressure for optimising it, and then you’re a good chunk of the way there.
Luisa Rodriguez: OK, that’s given me a bit more hope, and I do feel inspired. And if there’s anyone out there who wants to help…
Kevin Esvelt: If anyone wants to help develop better P4E, absolutely. This is my one big takeaway, that is very counterintuitive, which is: If you want to help fight biorisk, you probably don’t want to go into biology — because biology creates the problem, but it cannot create reliable solutions to that problem. You’ll note that wildfire is very solvable, but you need P4E to solve it, which is not a biological technology. Stealth is solvable, but what you need to solve that one is sequencing, informatics analysis, and P4E, and things like germicidal lights and better ventilation, and possibly computer science and cryptography for apps that tell people their individualised risk level. None of these tools is particularly biological, except for the sequencing, and even that is touch and go.
CRISPR-based gene drive [02:23:18]
Luisa Rodriguez: Moving to a very different and kind of more positive topic: You’ve worked on what feels like dozens of incredibly interesting and important biology issues, and we’ve barely scratched the surface. But I did want to squeeze in a few questions about some of the science you’ve worked on besides biosecurity, that really struck me as particularly relevant to some really pressing global problems.
A big one is CRISPR, which you worked on alongside George Church — which I’ve heard described as being like find-and-replace in a Word document, but for DNA sequences in a living organism. Scientists can apparently find a gene they want to modify and just replace it with an edited gene, which is incredible. Can you talk about some of the real-world applications you’re most excited about using gene drives for at the moment?
Kevin Esvelt: Just to be clear, I played only a very minor role in developing CRISPR: we were one of the groups that first managed to publish on how it can be used to edit the genomes of mammalian cells. Obviously, Jennifer [Doudna] and Emmanuelle [Charpentier] deservedly won the Nobel for it. My main contribution was in noticing that you can encode CRISPR into the genome of an organism, and then any organisms that inherit CRISPR will do genome editing on their own.
So if you imagine that you can ensure that the “replace” function works, you can imagine encoding an alteration in the genome that you want to see in a species as a whole, and you encode CRISPR next to it — the same CRISPR that you used to introduce that initial find-and-replace sequence in the DNA text. When that organism mates with a wild organism, the offspring will inherit one engineered version plus CRISPR, and one wild version. CRISPR will turn on, it will find the site in the wild version, it will cut it, and it will replace it with the engineered version and itself. So this is one form of CRISPR-based gene drive.
Luisa Rodriguez: Cool, and remind me why it’s called a drive?
Kevin Esvelt: Because it’s doing find-and-replace at the population level. And this is something that nature does all the time; there are a lot of natural gene drives. What’s different about CRISPR is it’s programmable. So if you can use CRISPR to do the find-and-replace on the genome in the first place, and the organism is amenable to doing the replace function efficiently — and that depends on the organism and the cell type when CRISPR turns on, and whether you express any other things that will direct it to do replace, rather than say, “You made a deletion at this point in the text; I’m going to jam the letters together and there will be something nonsensical there,” which is the other alternative — if you do get the replace function, then you can get efficient spread through the population.
And because CRISPR can be targeted to any sequence you want, this in principle means you can drive any kind of change you want. There are some limits, but within sexually reproducing organisms, that will do replace.
Now, the exciting bit is that the species that is absolutely best at doing replace is, very conveniently, the number one malaria vector, the Anopheles gambiae mosquito complex: they are up to 99% efficient at doing replace. So CRISPR does the cut, it reliably copies over the gene drive system. So how does this help us? Well, any one of three ways.
We can turn them all into males, and the population crashes to a level that is low enough that they can’t transmit malaria anymore.
We can take out a female fertility or viability gene that needs two copies to function: the drive spreads rapidly when it’s rare, because females that have one copy are fine, but as soon as you get a female that inherits two copies, no more reproduction — so the population again crashes to a level that’s too low to transmit malaria. But notably, the mosquito is not going to go extinct, or at least not unless we release a lot of these things very deliberately, trying to drive it extinct. Left to itself, it’s just not going to go extinct; the drive will just decrease the population to levels that, again, can no longer transmit malaria.
And then the third way is we could try to put some molecular blockers into the mosquito that prevent it from getting infected with malaria itself. I’m personally somewhat less enthused about that, just because then you have the full evolutionary power of the malaria parasite trying to evolve ways around your blocks, and we are much better at programming CRISPR to hit a new sequence — and therefore finding a new way of crashing the population — than we are at coming up with a new molecular block against the parasite.
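A minimal deterministic sketch of why a ~99% homing efficiency spreads so fast — an editor’s illustration assuming random mating and no fitness cost, not a model from the episode:

```python
# With homing efficiency c, a heterozygote transmits the drive allele with
# probability (1 + c) / 2 instead of the Mendelian 1/2. Under random mating
# and no fitness cost, the drive-allele frequency p obeys a simple recursion.

def next_freq(p: float, c: float) -> float:
    """Drive-allele frequency in the next generation's gamete pool."""
    return p * p + 2 * p * (1 - p) * (1 + c) / 2

def generations_to_fixation(p0: float, c: float, threshold: float = 0.99) -> int:
    """Generations until the drive allele reaches the threshold frequency."""
    p, gens = p0, 0
    while p < threshold:
        p = next_freq(p, c)
        gens += 1
    return gens

# From a 0.1% release, at the ~99% homing efficiency reported for Anopheles
# gambiae, the drive allele nearly doubles every generation while it is rare.
print(generations_to_fixation(0.001, 0.99))  # 13 generations in this toy model
```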
And with malaria, if you’ve been exposed to it a lot recently, you develop what’s called “clinical immunity.” This is why it primarily kills children, because adults have had it before. But if you’re an adult and you go for long enough without being exposed, you lose your clinical immunity. This happened notably in Madagascar, but in other places in the world where we pushed back against malaria and eradicated it from whole areas and populations, and then it came back, the lethality rate was much higher. So you really want to get it right the first time. And on an evolutionary level, I’m just much more convinced of our ability to keep ahead of natural selection when we are using CRISPR, which is trivial to reprogram to hit new sites, than if we are trying to encode fancy antibodies to block this famously elusive malaria parasite.
So the question is, of course: Who gets to decide which version we use, if any? Who develops it? How do you test it? Because note that you’re not testing the full-power version, because there is no such thing as a field trial of a self-propagating gene drive. It is the equivalent of a highly invasive gene in the sense that it will spread to all populations connected by gene flow.
Which means, by the way, that you have to assume that human trolls will move it. That is, if you think that geographic barriers will prevent it from spreading, if there is any human in the world that thinks it would be funny or that they could make a profit by moving some, then you should assume that that will happen. For example, cane toads in Australia: horrific invasive species. Could we build a gene drive to take out the cane toads in Australia? Yes. What is the risk though that someone is going to take samples of cane toad eggs or baby cane toads or whatever and move them back to South America just for some giggles? That’s humanity, right? There’s a decent chance that someone will do that.
So maybe you don’t want to do a full-power version. That broadly means you can separate gene drives into two use cases: cases where we want to edit the whole species — and then again, that raises the question of who decides? Is it everyone who lives in the area? That’s generally our watchword: only tackle problems first that everyone agrees are a really obvious problem that may warrant engineering a species, or at least a population of a wild species. And then you start very small and see what happens with your intervention in each relevant ecosystem. If it looks like everything’s OK, then you can scale up.
But that means you do not use the full-power version out of the gate, because then you don’t get to see what happens. We freely admit it: We don’t understand ecosystems very well. We do know that we haven’t found any instances of any predator that depends on Anopheles gambiae mosquitoes for more than 5% of its diet. If you’re a spider or a bird or a bat or whatever and you eat a mosquito, you don’t care what species it was. There’s 1,000 species of mosquitoes in Africa, literally: nothing depends on just that one for the bulk of its diet. Or the males sometimes pollinate flowers: nobody depends on just that one species, as far as we can tell. But what do we know about ecosystems? Some, but there’s a lot that we don’t.
So again, you want to try it on a small scale, see what happens, and then scale up.
Luisa Rodriguez: Yeah, can you explain how you do that?
Kevin Esvelt: Well, we came up with what we call a daisy drive. Basically you split up the CRISPR components across different chromosomes, and you think of it as a daisy chain that has a directionality and drives in that direction. Each link in the chain that has an element behind it gets copied via the find-and-replace function, but the one on the end doesn’t: it’s a normal gene, so it can be lost through normal inheritance mechanisms. And in offspring that don’t inherit it, the next link in the chain is now the end and it doesn’t get an advantage anymore. So it’s like a genetic fuel: you just lose links in the chain until the drive stops.
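A toy Monte Carlo of the “genetic fuel” intuition — an editor’s sketch assuming the terminal link is inherited Mendelian-style (probability 1/2) while driven links always copy over; real daisy-drive designs are more involved:

```python
import random

# Follow one lineage of a daisy-chain drive: each generation there is a
# 1-in-2 chance the current terminal link is not inherited, at which point
# the next link becomes terminal. The drive stalls when all links are gone.

def generations_until_stall(links: int, rng: random.Random) -> int:
    gens = 0
    while links > 0:
        gens += 1
        if rng.random() < 0.5:  # terminal link lost this generation
            links -= 1
    return gens

rng = random.Random(0)
trials = [generations_until_stall(3, rng) for _ in range(10_000)]
print(sum(trials) / len(trials))  # averages near 2 * links = 6 generations
```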
So this daisy drive is inherently localised, but it does the same thing: that is, the edit at the end of the chain, which is whatever you’re using to cause your effect on the species, that is identical to the full-power version. So it’s a great way of just testing how does this work in this ecosystem?
So that’s the way forwards.
Luisa Rodriguez: Amazing. What’s an example of a case where we might use gene drives?
Kevin Esvelt: Well, malaria is an obvious one. And if you don’t get malaria right, then you probably don’t get a chance to use it against anything else, with possibly one or two exceptions.
There are probably only four cases where you’d really, definitely want to use it anyway, at least that I’ve come up with.
So those are malaria; then there is schistosomiasis, which is a horrific intestinal, kidney, and bladder worm that causes growth and cognitive stunting, and currently infects 200-million-plus people. And schistosomiasis is particularly horrible, because we have a reliable treatment, praziquantel, that costs cents per dose. It’s just that it’s so rife in the waters in much of the world that people who go in the water just get reinfected immediately. So it’s a case where medical countermeasures alone, again, are just not enough — there’s just too much of it. You just have to redose people over and over and over again. Treating it is behind some of the most effective charities we have, of course, as many listeners probably know, but it sure would be nice to just take out the source — and that’s what a gene drive could do.
Then there’s the desert locust. So this is the solitary desert grasshopper that when it rains and the desert blooms, they eat all the new vegetation. And then once the population grows and they come in close proximity, they actually undergo a stable inheritable epigenetic switch: rather than being solitary, they become gregarious, they form swarms, and they fly out of the desert and they eat everything in sight. This is what causes mass area crop failures and famines, and it has since ancient times. Which is why this is God’s eighth biblical plague.
Well, at risk of the “playing God” criticism that we discussed, this is one where we can probably tame God’s eighth biblical plague — because that switch is genetic, and we know that the solitary phase is stable. That is, for more than 1,000 years there have been populations living in the desert that have never swarmed and left and come back. So if we just use a gene drive to switch off the swarming behaviour, then they will stay in the desert as solitary desert grasshoppers and not cause horrific famines out in the world outside. And then, once we’re no longer dependent on agriculture in those regions, if we want we can switch it back. So that would be a particularly elegant one.
But in both of those cases, we’re talking more than 60 countries that would be affected in each of those. Maybe not quite 60 in the case of malaria, but certainly for schistosomiasis and the desert locust. That’s just a lot of countries affected.
Luisa Rodriguez: Interesting. And what’s the fourth?
Kevin Esvelt: So the fourth one might actually be the easiest to get going: the New World screwworm, which has the amazing scientific name of Cochliomyia hominivorax: “the man devourer.” But it doesn’t primarily eat humans; it feeds indiscriminately on warm-blooded things, so mammals and birds. It’s a blow fly that lays its eggs in open wounds, anything as small as a tick bite. And it’s called the screwworm because the larvae are screw-shaped and they drill their way into living flesh, devouring it. And as they do, they cultivate bacteria that attract new gravid females that lay more eggs and continue the cycle.
So you have this macabre dance of parasitisation that results in the animal being devoured alive by flesh-eating maggots. And we know that it’s horrendously painful, because people get affected by this, and the standard of treatment is you give them morphine immediately so that surgeons can cut the things out — because it’s just that painful; it’s unbelievably agonising. And by my back-of-the-envelope calculations, there’s about a billion hosts of this every year — so a billion animals are devoured alive by flesh-eating maggots every single year.
We even know that we can eradicate this species from at least many ecosystems and not see any effects, because it used to be present in North America too, and we wiped it out using nuclear technology, oddly enough. Some clever folks noticed if you irradiate the larvae, then they grow up sterile. And if you release enough of them, then the wild ones will mate with a sterile one, and they only mate once, so you can suppress the population to the point of not being there anymore.
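The suppression logic of the sterile insect technique can be sketched with a simple Knipling-style model — an illustration with made-up numbers, resting on the fact that females mate once and sterile matings produce no offspring:

```python
# Each generation, releasing S sterile males means only a fraction
# N / (N + S) of matings are fertile, so the wild population obeys
# N' = N * growth * N / (N + S). All numbers here are illustrative.

def next_generation(wild: float, sterile_release: float, growth: float) -> float:
    fertile_fraction = wild / (wild + sterile_release)
    return wild * growth * fertile_fraction

n = 1_000_000.0  # wild population
generations = 0
while n >= 1:    # overwhelm with sterile males until effective eradication
    n = next_generation(n, sterile_release=9_000_000, growth=5.0)
    generations += 1
print(generations)  # the population collapses within a handful of generations
```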
So we did this first up through Florida and then across the West, and then down through Texas to the Mexican border. The US Department of Agriculture then inked a deal with the Mexican government to eradicate them from Mexico because the southern border was shorter and therefore cheaper. And then they just went country by country down Central America to Panama. The southern border of Panama is the shortest, so American taxpayer dollars today contribute to the creation and maintenance of a living wall of sterile screwworm flies released in southern Panama that prevents the South American screwworm from reinvading North America — 10 million released every week.
Luisa Rodriguez: Wow.
Kevin Esvelt: But there’s too many of them in South America to wipe out by that means. And so the way forward is obviously gene drive. If the Mercosur countries agree that they want to get rid of the New World screwworm, they can start with something like a daisy drive locally — and Uruguay is working on this — then they can wipe it out from their country. Uruguay loses about 0.1% of their total country’s GDP to the screwworm because they’re so dependent on animal exports. I mean, Uruguay and beef is… To those listeners who eat beef, I’m going to start fights here, but it’s better than beef from Argentina, even. But anyway, they’re all very concerned about their beef, and screwworm is horrific.
It also, of course, preferentially hurts poor farmers who struggle to afford the veterinary treatments for their animals. And of course, they hate to see it, because here you’re watching these animals that you’re caring for literally get devoured by flesh-eating maggots, and it’s agonisingly painful.
But from an animal wellbeing perspective, in addition to the human development angle, the typical lifetime of an insect species is several million years. So 10^6 years times 10^9 hosts per year means an expected 10^15 mammals and birds devoured alive by flesh-eating maggots. For comparison, if we continue factory farming for another 100 years, that would be 10^13 broiler hens and pigs. So unless it’s 100 times worse to be a factory-farmed broiler hen than it is to be devoured alive by flesh-eating maggots, then when you integrate over the future, it is more important for animal wellbeing that we eradicate the New World screwworm from the wild than it is that we end factory farming tomorrow.
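Spelling out the orders of magnitude in that comparison — the figures are Kevin’s stated estimates:

```python
# Kevin's back-of-the-envelope comparison, with the exponents explicit.

species_lifetime_years = 10**6   # typical insect species lifetime
hosts_per_year = 10**9           # animals parasitised by screwworm each year
screwworm_total = species_lifetime_years * hosts_per_year  # 10**15 hosts

factory_years = 100
broilers_and_pigs_per_year = 10**11  # implied by the 10**13 figure quoted
factory_total = factory_years * broilers_and_pigs_per_year  # 10**13 animals

print(screwworm_total // factory_total)  # 100 — the "100 times worse" ratio
```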
The ethics of CRISPR [02:38:34]
Luisa Rodriguez: What a take. I also love the application of gene drives for animal suffering in particular. I’d heard of many applications for human benefit, but the idea that we could make a dent on some wild animal suffering was just really moving to me. I feel like there are loads of concerns about kind of messing with an ecosystem. In this case, it’s already happening, just through a different method that can’t be scaled up — so it just seems like a really great case of how we’ve got this way to scale it up much bigger, eradicate this horrible insect in more places.
Kevin Esvelt: It might matter to some listeners, they might be concerned about the moral implications of actually driving a species to extinction. Which, of course, is also what we’re proposing for the malaria parasite (but not the mosquitoes) and also for the schistosoma. But for something that is not a major human disease, that’s [not] a microbe, here we’d be proposing eradicating the screwworm itself — the fly, the macroscopic thing from the ecosystem everywhere in the world.
But it’s worth noting that this is actually reversible, because screwworm is one of those comparatively few insects whereby you can freeze the larvae and unfreeze them decades later and they’re perfectly viable. So we don’t have to drive them extinct, we just need to remove them from the wild and then we can keep them on ice. So if for some reason we decide we need them again later, we can reintroduce them. It’s just we’ve got to ensure, if you want the animal welfare benefit…
One of the things that really I find attractive is, when you think about how much suffering humans have inflicted on animals in the course of our species, it almost certainly does not outweigh 10^15 mammals and birds devoured alive by flesh-eating maggots. So to the extent that we’re now net negative on the scale, all we have to do is, before civilisation collapses, or we disassemble the Earth or whatever futurists think we’re going to be doing — or even if we lose, even if we fail and civilisation collapses, or even we go extinct — as long as we remove the New World screwworm first, we will be in morally net positive territory when it comes to our impacts on other species’ wellbeing. That’s tremendously inspiring.
Luisa Rodriguez: Yeah, I completely agree.
Kevin Esvelt: But it’s none of my business, because I don’t live in South America. It’s their environment; it’s their call. And so I would urge folks, if you want to reach out and know who to support in South America to fund that project, I’d be happy to connect folks — but moralising about how they have this moral duty to do this for the benefit of all humanity, probably not very helpful. If they decide to do it, it’s going to be for their own reasons, and us hectoring them is not going to be useful to the cause if you care about seeing it happen.
Luisa Rodriguez: Yeah, it sounds like in this case it’s a win-win. Doesn’t sound like anyone in South America is enjoying their livestock being eaten alive by these worms. But yes, that sounds totally right. It does sound like we, on our podcast, should not spend too much time moralising when it is their land.
Kevin Esvelt: And in the long run, you can imagine using gene drive for a series of more elegant tweaks. So got problems with pests eating your crops? Program them to not like the taste and otherwise go about their normal ecological business. Got a problem with predators devouring their prey in ways that cause suffering? Program them to secrete anaesthetic from their fangs.
Luisa Rodriguez: Sounds amazing to me.
Kevin Esvelt: There’s a lot of things that we could do with that, but we’re a long ways away from understanding enough biology to do that. But eventually we could get there. Natural selection is always going to fight back, of course, because these are useless traits from its perspective. But gene drive is an example of how there’s different levels of natural selection, and by choosing the one that we’re interacting with, we can sometimes overcome one from another area.
And this also, I have to confess, is my bias, right? A lot of people just look at it and say, “Nope, you just can’t win fighting nature. Ever.” Well, my job is harnessing and controlling evolution, and CRISPR-based gene drive works. We haven’t used it in the wild yet, thankfully, because there’s no country that has decided, “Yes, we definitely want to do this and the species doesn’t exist outside our borders, so go for it.” But it certainly works in every laboratory setting we’ve seen, with very high success rates — certainly in the Anopheles gambiae mosquitoes, and lower replacement rates in other species, but it still works.
So it sure looks like we do, in fact, have a way of overcoming natural selection even in the wild, outside of our control — which is tremendously inspiring with respect to what we could do that’s positive, but tremendously worrying with respect to what’s negative.
The day after I realised that CRISPR-based gene drive was possible, I woke up in a cold sweat because I was thinking, “This is wildly different. Could this be misused? Could this be weaponised?” So I didn’t tell anyone else. I didn’t tell George Church — who was my advisor at the time, who I was working with — I didn’t tell him until I was confident that this technology favoured defence.
And I concluded that based on assessing three properties. The first is that it’s slow: it only spreads from parents to offspring, roughly a factor of 1.5 per generation.
And if you sequence anything and there’s a gene drive in there, you will see it, because it has CRISPR — which is never found in sexually reproducing organisms — along with genes from a sexually reproducing organism in the same sequence read. So you cannot possibly miss it, and you cannot possibly disguise it. It is obvious.
And then the third, and perhaps most important, is it can be easily countered. If you have a gene drive that works, that you see that is doing something you don’t like, you can always take that working, demonstrated version, remove whatever problem it’s causing, and add extra instructions for CRISPR telling it to cut-and-replace the original. Then yours will be just as good at spreading through the wild, because it has the same instructions for CRISPR to do that. So it will immunise the wild population against the bad one, and whenever it encounters the bad one, it will replace it with itself. We call this an “immunising reversal drive.”
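The “slow” property falls out of simple Mendelian bookkeeping: a drive allele biases its own inheritance in heterozygotes, so a rare drive grows by roughly a constant factor per generation. Here is a minimal deterministic sketch of that dynamic; the random-mating model and the conversion-efficiency parameter are illustrative textbook assumptions, not figures Kevin gives:

```python
def next_drive_frequency(p: float, conversion: float) -> float:
    """One generation of random mating with a CRISPR gene drive.

    p          -- current frequency of the drive allele in the population
    conversion -- fraction of heterozygotes whose wild-type allele is
                  converted to the drive allele in the germline
    """
    drive_homozygotes = p * p             # DD offspring always transmit the drive
    heterozygotes = 2 * p * (1 - p)       # Dd offspring transmit it with
    #                                       probability 0.5 + 0.5 * conversion
    return drive_homozygotes + heterozygotes * (0.5 + 0.5 * conversion)

# With 50% conversion, a rare drive grows ~1.5x per generation.
# Even with perfect conversion, fixation takes many generations:
p = 0.01
generations = 0
while p < 0.99:
    p = next_drive_frequency(p, conversion=1.0)
    generations += 1
print(generations)  # prints 9
```

With perfect conversion the wild-type fraction squares each generation, so even an ideal drive released at 1% frequency takes about nine generations to sweep — slow enough, in Kevin’s framing, to sequence, spot, and counter with an immunising reversal drive.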
We only got to the point of actually testing it in the lab after we had disclosed it to the public, on the grounds that this technology is defence-dominant and is about changing the shared environment. That’s different from developing a drug: people don’t get to opt out of changes to the environment, so you need to be unusually transparent from the beginning and invite concerns and criticism, so that everyone has a chance to have a voice in the development. But when we did test it, we built one gene drive in yeast, which was the safest organism we thought we could reasonably test it in. We built that one to knock out a gene, and we built another one to overwrite the first one and replace it. And they both worked perfectly.
So slow, obvious, and easily countered: that, to me, says defence-dominant. It’s hard to imagine any attack that is slow, obvious, and easily blocked that can cause serious harm. Most biotech is not like that. Pandemics are obviously not like that: pandemics can be fast, pandemics can be stealthy, and pandemics can be unblockable. The wildfire scenario is some combination of those: either so fast you can’t counter it, or outright unblockable, a version that was slow but for which we could never come up with a countermeasure, yet was so contagious, even with a lengthy incubation period, that it would infect everyone. There’s nothing we could do. And then the stealth scenario really is literally something so stealthy that we just won’t know it’s there until it’s too late.
So this really shaped my thinking on this. And it was terrifying, because there’s no guidance to an inventor — like, “You just invented a technology that can exponentially spread on its own without human control — what do you do?” Had to figure it out. There was really no guidance.
So yeah, it would be great, as we get more and more powerful technology, if we provide guidance to people as to how to think about these things. What is offence-dominant versus defence-dominant? What is safe to tell the world? And what do you do if it is offence-dominant, given that someone else is going to invent it eventually? So maybe you shouldn’t tell anyone, but maybe you should. And how do you decide?
This is the sort of meta-level challenge of increasingly powerful technology development. Because again, in the end, as Feynman said, “What I cannot create, I do not understand.” We seek to understand things well enough to obviate material scarcity and grant all our wishes, more or less. But that level of power, embodied in an offence-dominant technology to which we don’t yet have the defence, constitutes a big red button. And if you disseminate enough big red buttons to enough people, someone’s going to push one. So this to me says: You want progress in technology? Defence first.
So that means we need to identify which technologies are defence-dominant and preferentially push those. And possibly slow down the offence-dominant stuff, but that’s much more controversial. At the very least, we can agree to accelerate the defensive stuff. And that’s, I think, the high-level point. And this applies not just to bio — this applies to AI, this applies to nuclear, this applies to geoengineering interventions, this applies to brain-computer interfaces. This applies to everything: Invent the defence first.
Luisa Rodriguez: For any listeners who’ve heard you talk about gene drives now — and in particular some of the more subtle uses, like getting certain animals to disprefer the taste of our crops — I imagine some of them will just be overwhelmed by the unnaturalness of it, the tinkering with natural things in the world. What would you say to them?
Kevin Esvelt: I’ve spent a lot of time talking with people from very diverse cultural backgrounds, who have taught me that there is a tremendous amount that I don’t understand about what people value. And more often than not, there are underlying accurate heuristics to evolved cultural views on the appropriateness of doing things that are very useful, and we should ignore at our peril.
So I can point to working with the Māori of Aotearoa, New Zealand. The Mātauranga Māori is the traditional knowledge of ecosystems and interconnectedness, but it’s also an associated way of looking at the world: there’s the physical, there’s the mental, and there’s the spiritual. And as a wonderful Māori colleague put it to me, he says, “It’s not your fault, but your culture only taught you to deal with the physical.” I was like, “Really? We don’t even get the mental? Really? I get that I’m blind to spiritual — fine, granted, this is why I listen to other people on that — but we don’t get credit for the mental?” No. Not the way they think about it, no.
So there’s these two spheres that they view as two-thirds of what is important about human experience that I just don’t get — because that’s my cultural background, and my reductionist mindset about how things work. So there really is something of a division between that kind of knowledge and what some call mētis: a sort of evolved, cultural, holistic knowledge, which tends to be place-specific; it doesn’t generalise well outside the area where it evolved.
So this could be anything from understanding ecological interactions in Aotearoa that might not generalise outside, to, say, how you prepare a cassava plant to remove the poisons so that it’s actually safe for consumption. And that’s an elaborate enough process that it would be quite the scientific research project to work out all the optimal steps using primitively available technologies. But cultural evolution eventually figured it out. And if you were to simply apply the experimental method to the project of detoxing cassava, you would notice that you could skip a lot of those steps and feed it to your family, and everyone would be fine. And so you could then go off and use that time to help your family in other ways, right? And then 10 to 20 years later, you’d die of cyanide poisoning, and so would your family. Oops. Turns out being a rationalist in that environment was massively bad for your health and reproductive fitness, and therefore cultural fitness, right?
So we ignore local mētis, local culturally evolved knowledge, at our peril. And I would say that also includes ways of navigating the human experience. So the naturalness heuristic, the wariness we feel when we’re doing something that’s obviously not the way things have been done, I view as a sort of useful formulation of Chesterton’s Fence: the idea that if you don’t know why a fence is there, you should probably figure it out before you remove it. Because someone built it; it’s there for some reason — and until you know what that reason is, you should be careful and not remove it.
But nature is amoral. Look at the screwworm. The long project of civilisation has been freeing us from natural constraints that cause us suffering. That is what technology is. So if you think that it is good for people to die of smallpox, then I don’t really know what to say to you. But you are at least being faithful to the naturalness heuristic, because humans are meant to be dying of smallpox all the time. We are meant to be dying of famine. We are meant to be killing each other. That is what nature would have us do. So it’s a heuristic. We should be very careful of it. We should also understand that it’s not always right, and we should think about it very carefully.
And this is why I struggle with cases like malaria so much. Because if my kids lived in Africa and were vulnerable, I would say, heck, I don’t care about additional safe field trials and daisy drive and all that jazz. I don’t even really care about getting international agreement. And I wouldn’t be worrying about if we can’t use it for malaria, then we won’t be able to use it for schistosomiasis if something goes wrong; we’ve got to be careful. No, my kids are at risk. Screw it. Do it now. Kids are dying. My kids are at risk. My kids might die of this. Do it. How could you not do it? There’s just an overwhelming moral imperative to do it now.
Even if you step back, from a consequentialist perspective, you might say actually you’ve got to wait, because if you piss off one government enough, they don’t cooperate in the eradication campaign, it could evolve around your gene drive. And eventually, given a decade, if you haven’t gotten it to zero, the parasite could come screaming back — end up killing a lot more people, a lot more money is required than if you just waited an extra year to bring them on board, right? So there’s a lot of reasons to not do it now, but good gods, if my kids lived there, I would say do it now.
So you’ve got to be sensitive to that. And I think this is why my number one watchword that I do think applies to pretty much every possible case is humility. Humility is the most useful virtue, because it causes us to question our assumptions no matter what they are, even all the way up into moral systems. And I view this as vital, because we know that in mathematics you can’t have consistency and completeness at the same time. And if you can’t have that in mathematics, why on Earth do you think you can get it out of morality? Every moral system is wrong, every moral framework is wrong. It’s kind of like “All models are wrong, but some are useful”: same deal. So always question it. And that goes for the naturalness heuristic and it also goes for whatever we’re setting up against it.
So that was a very long-winded and theoretical philosophical discussion. But it comes down to, look, right now we’re using chemicals to kill the pests and a bunch of other insects that are not targeted, and this is very bad. It’d be better if we didn’t have to do that. So another thing that we do, that many people object to, is we program the crops to kill the pests or anything that eats them. You can argue that’s better ecologically, because now you’re not killing any insects in the environment that are not eating the crops. But then a lot of people object to that because they don’t want to eat engineered things — because, again, naturalness heuristic: if it wasn’t there in the organism before, maybe you should question whether it’s safe. And again, that’s a reasonable first-principle heuristic. Very reasonable. So then it comes down to, if you ask people, “Would you rather we engineer your food or would you rather we engineer the main pests that eat the food?” they will, without exception, say, “Please engineer the pests.”
And I know this from our efforts working with communities, guiding our development of technologies to prevent the spread of Lyme disease. We’ve been working mostly with folks on Nantucket and Martha’s Vineyard, which are sort of the extreme opposite scenario from the malaria project. It’s a case where, yes, everyone agrees Lyme disease is a huge problem, some of the highest rates in the world on these islands. But there are no issues with power dynamics, because these are some of the most politically influential communities in the world; they are some of the best-educated communities in the world, in that virtually everyone on the island knows someone with enough technical background to understand exactly what we’re doing. And they have a tradition of New England town hall democracy of getting together to discuss these issues and decide. So it’s sort of the easiest possible place to test out community-guided ecotechnology development, which is in large part why we chose it. Also, it’s really great to get to go off down to the Cape and then to Nantucket and the Vineyard as part of your work — this is a plus, and my lab appreciates it.
So we’ve run into a lot of people there who have, you know, crunchy, nuts-and-granola hippie-type views: “We don’t feed our kids GMOs if we can help it” — and pre-COVID there was often also an “…and I’m not so sure about vaccines either.” But pretty much all of these people hate Lyme disease. And when it comes to the question of “Could you engineer the mice so that they don’t infect the ticks?” they say yeah, of course you should do that — because it’s not going to go in their bodies. And they don’t necessarily like the idea of engineering the environment, but what’s the alternative? That we spray a bunch of pesticides and acaricides to kill the ticks? That’s going to kill a bunch of other stuff, and they know that. Or we need to shoot all the deer. Well, they don’t want to shoot all the deer… because Bambi.
So given the choices — live with Lyme disease, shoot all the deer, spray a lot of pesticides and still get Lyme disease, or add a gene taken ideally from a white-footed mouse into the germline of the white-footed mouse so that all of its descendants that inherit that gene are immune… There, you’re not adding anything new that wasn’t there in at least some mice already. Some mice do develop immunity over time, acquired immunity. All we’re doing is taking the gene responsible for that and encoding it in the mice so they can pass it on to their descendants. That’s not natural, but we’re not adding anything that wasn’t already there in a mouse — and that matters to people tremendously. They very strongly prefer that you not add something that wasn’t already there.
You have to keep in mind though that if you have the power to solve the problem, you are morally responsible for the consequences — whether or not you use it. And it’s the “not” that’s important. Everyone agrees that if you intervene and something goes wrong, it’s your fault, it’s your moral responsibility. Because I chose to tell the world about CRISPR-based gene drive, it’s on some level on me; it’s my moral responsibility. But if I had chosen not to, and then it took longer to use it for beneficial applications, that’s also on me. So if we choose to do nothing when we could have done something, we are morally responsible for the continuation of the status quo — be it Lyme disease, malaria, or any other problem that we could solve with technology.
So I suppose this is a very long-winded way of saying I can come across as, “Whoa there. Technology can be super dangerous, and we need to be ultra cautious, and let’s go slow and advance defence and not other stuff.” I’m actually very pro-technology. I’m a technophile. I think that we can ultimately build a world of incredible flourishing beyond the wildest dreams of the ancestors who gave us this world, which is so much better than theirs. I think we can pass that on to our descendants, multiplied manyfold — something that is beyond what we can currently imagine. But we can’t do it if we make a misstep and everything crashes and burns. And as the power goes up, the chance that we can make that kind of misstep necessarily grows. So: yes, progress — but defence first.
Luisa Rodriguez: That sounds like a perfect way to end.
What Kevin would do if he didn’t care about making the world a better place [02:58:38]
Luisa Rodriguez: Before we totally wrap up, I’d like to ask one more question. If you just had to completely change careers and somehow became totally indifferent to making the world a better place, what would be the most self-indulgent or enjoyable career you’d pursue instead?
Kevin Esvelt: I would love to be one of the active rich. Not one of the idle rich, because that’s boring. But I would love to be one of the active rich, who doesn’t need to worry about security, who doesn’t need to worry about whether my children will live a flourishing life, who doesn’t need to worry about people dying or suffering somewhere or whatever. I want to live in a world where we’ve obviated material scarcity, where no one has to suffer unless they want to, where no one has to wither away and cease to exist involuntarily.
Because then I can read books, I can write books, I can tell stories, I can listen to stories, I can explore, I can discover, I can wander. I can learn new ways of viewing the world. I can decide to become a different person. Maybe eventually I can be multiple people at once, and I can continue doing all of the amazing things in the adventure that we could have ahead of us if we get this right. I can have my family and not worry about them, not worry about what happens to them.
And some people say this is horrible. I mean, think about John Stuart Mill‘s crisis of confidence when he imagined the world if all of his reforms came to pass and everything was great, and he asked whether he wanted to live in such a world, and part of him screamed in horror — because then what would there be left to do? What purpose would there be? Well, that’s a problem, but then that’s a purpose: figure out what our purpose is. OK, fine. Deal accepted. Challenge accepted. We’ll deal with that. And if it turns out we have to run simulations with philosophical zombies who don’t actually morally matter, in which we’re giving ourselves the illusion that in fact we are saving everyone that actually does matter, and we never figured out otherwise, and we consciously choose that — fine, that’s great. Along with all the other things.
I’m not a very good mathematician. I would love to get better and see if I can stare straight in the face of truth. I am a terrible artist. I would love to understand much more than I currently do what it is that beauty means. I would love to explore that across senses that I don’t currently have. And when I think about what matters most to me today, I’m biologically programmed enough that that’s probably my family and my children, and it’s caring for them — because it is very true that you have children and all of a sudden a switch flips, and that’s what you care about. That’s what natural selection gave us. That is the gift of evolution. That is the positive side to all of the suffering and horror. That’s what makes everything worth it. And evolution tried hard at that, right? It had to. It had to provide enough rewards to keep us going to make that feel good.
But just like I believe that we can do better than natural selection when it comes to causing harm, I also believe that we can do better when it comes to engineering flourishing, whatever that means. I think that we can explore vistas that no human has ever before imagined. And I would love to see that day without having to worry about whether someone’s going to engineer a pandemic to wipe us all out.
Luisa Rodriguez: What a great answer. My guest today has been Kevin Esvelt. Thank you so much for coming on, Kevin.
Kevin Esvelt: Thank you, Luisa.
Luisa’s outro [03:02:07]
Luisa Rodriguez: If you enjoyed that, you might want to check out a few related episodes of this show, like:
And in case you didn’t know: we always list 8 related episodes in the blog post associated with any episode.
You could also check out the 80,000 Hours problem profile on preventing catastrophic pandemics. You can find a link to that in the blog post, or go to 80000hours.org/problem-profiles/preventing-catastrophic-pandemics.
And if you want to send us feedback about this episode, or the show more broadly, you can email [email protected].
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
The audio engineering team is led by Ben Cordell, with mastering and technical editing by Simon Monsour.
Additional content editing by me and Katy Moore, who also puts together full transcripts and an extensive collection of links to learn more — those are available on our site.
Thanks for joining, talk to you again soon.