Max Tegmark on how a ‘put-up-or-shut-up’ resolution led him to work on AI and algorithmic news selection
By Robert Wiblin and Keiran Harris · Published July 1st, 2022
On this page:
- Introduction
- 1 Highlights
- 2 Articles, books, and other media discussed in the show
- 3 Transcript
- 3.1 Rob's intro [00:00:00]
- 3.2 The interview begins [00:01:19]
- 3.3 How Max prioritises [00:12:33]
- 3.4 Intro to AI risk [00:15:47]
- 3.5 Superintelligence [00:35:56]
- 3.6 Imagining a wide range of possible futures [00:47:45]
- 3.7 Recent advances in capabilities and alignment [00:57:37]
- 3.8 How to give machines goals [01:13:13]
- 3.9 Regulatory capture [01:21:03]
- 3.10 How humanity fails to fulfil its potential [01:39:45]
- 3.11 Are we being hacked? [01:51:01]
- 3.12 Improving the news [02:05:31]
- 3.13 Do people actually just want their biases confirmed? [02:16:15]
- 3.14 Government-backed fact-checking [02:37:00]
- 3.15 Would a superintelligence seem like magic? [02:49:50]
- 3.16 Rob's outro [02:56:09]
- 4 Learn more
- 5 Related episodes
Frankly, this is to me the worst-case scenario we’re on right now — the one I had hoped wouldn’t happen. I had hoped that it was going to be harder to get here, so it would take longer. So we would have more time to do some AI safety.
I also hoped that the way we would ultimately get here would be a way where we had more insight into how the system actually worked, so that we could trust it more because we understood it. Instead, what we’re faced with is these humongous black boxes with 200 billion knobs on them and it magically does this stuff.
Max Tegmark
On January 1, 2015, physicist Max Tegmark gave up something most of us love to do: complain about things without ever trying to fix them.
That “put up or shut up” New Year’s resolution led to the first Puerto Rico conference and Open Letter on Artificial Intelligence — milestones for researchers taking the safe development of highly-capable AI systems seriously.
Max's primary work has been cosmology research at MIT, but his energetic and freewheeling nature has led him into so many other projects that you would be forgiven for forgetting it. In the 2010s he wrote two best-selling books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, and Life 3.0: Being Human in the Age of Artificial Intelligence, and in 2014 founded a non-profit, the Future of Life Institute, which works to reduce all sorts of threats to humanity's future, including those from nuclear war, synthetic biology, and AI.
Max has complained about many other things over the years, from killer robots to the impact of social media algorithms on the news we consume. True to his ‘put up or shut up’ resolution, he and his team went on to produce a video on so-called ‘Slaughterbots’ which attracted millions of views, and develop a podcast and website called ‘Improve The News’ to help readers separate facts from spin.
But given the stunning recent advances in capabilities — from OpenAI's DALL·E to DeepMind's Gato — AI itself remains top of his mind.
You can now give an AI system like GPT-3 the text: “I’m going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that that’s in?” And it gives the correct answer (Saint Paul, Minnesota) — something most AI researchers would have said was impossible without fundamental breakthroughs just seven years ago.
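For readers who'd like to try something similar themselves, here is a minimal sketch of posing that question to a large language model via OpenAI's Python client. The model name, settings, and prompt framing are illustrative assumptions, not the exact setup used in the PaLM paper (PaLM itself isn't publicly queryable this way):

```python
# Minimal sketch: asking a large language model a two-step geography question.
# The model name and settings here are assumptions for illustration only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "I'm going to go to this mountain with the faces on it. "
    "What is the capital of the state to the east of the state that that's in?"
)

response = openai.Completion.create(
    model="text-davinci-002",  # assumed model; any sufficiently capable model works
    prompt=prompt,
    max_tokens=50,
    temperature=0,  # make the answer deterministic
)

print(response.choices[0].text.strip())
# A capable model chains the steps: mountain with faces -> Mount Rushmore ->
# South Dakota -> the state to its east is Minnesota -> capital: Saint Paul.
```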
So back at MIT, he now leads a research group dedicated to what he calls “intelligible intelligence.” At the moment, AI systems are basically giant black boxes that magically do wildly impressive things. But for us to trust these systems, we need to understand them.
He says that training a black box that does something smart needs to just be stage one in a bigger process. Stage two is: “How do we get the knowledge out and put it in a safer system?”
His favourite MIT project so far involved taking a bunch of data from the 100 most complicated or famous physics equations, creating an Excel spreadsheet with each of the variables and the results, and saying to the computer, “OK, here’s the data. Can you figure out what the formula is?”
For general formulas, this is really hard. About 400 years ago, Johannes Kepler managed to get hold of the data that Tycho Brahe had gathered regarding how the planets move around the solar system. Kepler spent four years staring at the data until he figured out what the data meant: that planets orbit in an ellipse.
Max’s team’s code was able to discover that in just an hour.
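As a rough illustration of the underlying idea (though not the actual symbolic regression method or code Max's group used), here is a toy sketch that rediscovers a different result of Kepler's, the third law T^2 = a^3, by brute-force search over simple power laws fitted to planetary data:

```python
# Toy sketch: rediscovering a physical law from a small table of data.
# This is an illustration of the idea, not the AI Feynman code itself.
import numpy as np

# Semi-major axes (in AU) of the six planets known in Kepler's time
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])
T = a ** 1.5  # orbital periods in years, per Kepler's third law (T^2 = a^3)

# Search over candidate power laws T = a**p and keep the best fit
best_p, best_err = None, float("inf")
for p in np.arange(0.1, 3.0, 0.01):
    err = np.max(np.abs(a ** p - T) / T)  # worst-case relative error
    if err < best_err:
        best_p, best_err = p, err

print(f"Best-fitting law: T = a^{best_p:.2f} (max relative error {best_err:.1e})")
# Prints T = a^1.50, i.e. T^2 = a^3 -- recovered directly from the data.
```

A real system has to search a vastly larger space of candidate formulas, which is where the machine learning comes in, but the spirit is the same: hand the program a table of variables and outcomes, and let it hunt for the equation.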
Today’s conversation starts off giving a broad overview of the key questions about artificial intelligence: What’s the potential? What are the threats? How might this story play out? What should we be doing to prepare?
Rob and Max then move on to recent advances in capabilities and alignment, the mood we should have, and possible ways we might misunderstand the problem.
They then spend roughly the last third talking about Max’s current big passion: improving the news we consume — where Rob has a few reservations.
They also cover:
- Whether we would be able to understand what superintelligent systems were doing
- The value of encouraging people to think about the positive future they want
- How to give machines goals
- Whether ‘Big Tech’ is following the lead of ‘Big Tobacco’
- Whether we’re sleepwalking into disaster
- Whether people actually just want their biases confirmed
- Why Max is worried about government-backed fact-checking
- And much more
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
Highlights
What's actually going to happen with AI?
Max Tegmark: I think a very common misconception, especially among nonscientists, is that intelligence is something mysterious that can only exist inside of biological organisms like human beings. And if we’ve learned anything from physics, it’s that no, intelligence is about information processing. It really doesn’t matter whether the information is processed by carbon atoms in neurons, in brains, in people, or by silicon atoms in some GPU somewhere. It’s the information processing itself that matters. It’s this substrate-independent nature of information processing: that it doesn’t matter whether it’s a Mac or a PC you’re running it on, or a Linux box, or for that matter what the CPU manufacturer is — or even whether it’s biological or silicon-based that matters.
Max Tegmark: It’s just the information processing that matters. That’s really been the number one core idea, I would say, that’s caused the revolution in AI: that you can keep swapping out your hardware and using the same algorithms. Once you accept that — that you and I are blobs of quarks and electrons that happen to be arranged in a way such that they can process information well — it’s pretty obvious that, unless you have way more hubris than I do, we are not the most optimized quark blobs possible for information processing, of course not.
Max Tegmark: And then of course it’s possible, but the question then is how long will it take for us to figure out how to do it? A second fallacy that makes people underestimate the future progress is they think that before we can build machines that are smarter than us, we have to figure out how our intelligence works. And that’s just wrong. Just think about airplanes. When was the last time you visited the US?
Rob Wiblin: Oh, it’s actually been a while. It’s been a couple of years now.
Max Tegmark: So when you came over to the US, do you remember, did you cross the Atlantic in the mechanical flying bird, or in some other kind of machine?
Rob Wiblin: No, I think I came across in a plane rather than on an ornithopter.
Max Tegmark: There’s an awesome TED Talk that anyone listening to this should Google about how they actually built the flying bird. But it took 100 years longer to figure out how birds fly than to build some other kind of machine that could fly even faster than birds. It turned out that the reason bird flight was so complicated was because evolution optimized birds not just to fly, but it had all these other weird constraints. It had to be a flying machine that could self-assemble. Boeing and Airbus don’t care about that constraint. It has to be able to self-repair, and you have to be able to build the flying machine out of only a very small subset of atoms that happen to be very abundant in nature, like carbon and oxygen, nitrogen, and so on. And it also has very, very tight constraints on its energy budgets, because a lot of animals starve to death.
Max Tegmark: Your brain can do all this great stuff on 25 watts. It is obviously much more optimized for that than your laptop is. Once you let go of all these evolutionary constraints, which we don’t have as engineers, it turns out there are much easier ways of building flying machines. And I’m quite confident that there are also much easier ways of building machines with human-level intelligence than the one we have in our head.
Max Tegmark: It's cool to do some neuroscience, and I've written some neuroscience papers — steal some cool ideas from how the brain does stuff for inspiration. Even the whole idea of an artificial neural network, of course, came from looking at brains and seeing that they have neural networks inside. But no, my guess is that the first time we're really going to figure out how our brain works is when we first build artificial general intelligence — and then it helps us figure out how the brain works.
Slaughterbots
Max Tegmark: There has long been a very honored tradition in the military that humans should take responsibility for things. You can't just be in the British army and decide to go shoot a bunch of people because you feel like it. They will ask, "Who ordered you to do this? And who is responsible?" But a United Nations report came out showing that last year, for the first time, we had these slaughterbots in Libya — sold to one of the warring parties there by a Turkish company — that actually hunted down humans and killed them, because the machines decided that they were bad guys. This is very different from the drone warfare that's mostly in the news now with Ukraine, for example, where there's a human looking at cameras and deciding what to do. It's where you actually delegate it to the machine: "Just go figure out who's a bad guy and then kill them."
Rob Wiblin: Do you know on what kind of basis the drones were making those decisions?
Max Tegmark: That was ultimately proprietary information from the company that they chose not to release.
Rob Wiblin: Wow. OK. I didn’t know that was already a thing.
Max Tegmark: And so far, among the relatively few people who have been killed by this, as usual it tends to be more vulnerable people in developing countries who get screwed first. But it's not hard to imagine that this is something that could escalate enormously. We don't generally like to have weapons of mass destruction where very few can kill very many, because it's very destabilizing. And these slaughterbots — if you can mass produce them for the cost of an iPhone each, and you can buy a million of them for a few hundred million dollars — would mean that one person, in principle, could then go off and kill a million people.
Max Tegmark: And you might think it’s fine because we can program these to only be ethical and only kill the good guys or whatever, if you don’t have any other moral qualms. But who’s to say what ethics you put into it? Well, the owner says that, right? So if the owner of them decides that the ethical thing to do is to kill everybody of a certain ethnic group, for example, then that’s what these machines will go off and do. And I think this kind of weapon of mass destruction will be much more harmful to the future of humanity than any of the ones we’ve had before, precisely because it gives such outsized power to a very tiny group of people. And in contrast to other conflicts where we’ve had a lot of people do bad things, there were often officers or some soldiers who refused to follow orders or assassinated the dictator or whatever. These machines are the ultimate Adolf Eichmann on steroids, who have been programmed to be just completely loyal.
Max Tegmark: So when we started warning about this, we worked with Stuart Russell, for example, to make this video called Slaughterbots a few years ago, which actually has racked up over a million views now. Some people accused us of being completely unrealistic, and now they’ve stopped saying that, because they’ve been reading in the newspaper that it’s already happened.
Making sure AI benefits us all
Max Tegmark: Even before we get to the point where we have artificial general intelligence, which can just do all our jobs, some pretty spectacular changes are going to happen in society.
Max Tegmark: It could be great, in that we might just produce this abundance of services and goods that can be shared so that everybody gets better off. Or it could kind of go to hell in a handbasket by causing an incredible power concentration, which is ultimately harmful for humanity as a whole. If you’re not worried about this, then just take a moment and think about your least favorite political leader on the planet. Don’t tell me who it is, but just close your eyes and imagine the face of that person. And then just imagine that they will be in charge of whatever company or organization has the best AI going forward as it gets ever better, and gradually become in charge of the entire planet through that. How does that make you feel? Great, or less so?
Rob Wiblin: Less so.
Max Tegmark: We’re not talking about the AI itself, the machine taking over. It’s still this person in charge.
Rob Wiblin: It seems suboptimal.
Max Tegmark: But you don’t look too excited.
Rob Wiblin: No. I would not be psyched by that.
Max Tegmark: Yeah. So that’s the challenge then. We can already see slow trends in that direction. Just look at the stock market: what were the largest companies in the US, for example, 10 years ago? They were oil companies and this and that and the other thing. Now, all the largest companies on the S&P 500 are tech companies, and that’s never going to be undone. Tech companies are gradually going to continue consolidating, growing, and eating up more and more of the lunch of the other companies, and become ever more dominant. And those who control them, therefore, get ever more power.
Max Tegmark: I personally am a big democracy fan. I love Winston Churchill’s quip there, that democracy is a terrible system of government, except for all the other ways. If we believe in the democratic ideal, the solution is obviously to figure out a way of making this ever-growing power that comes from having this tech be in the hands of people of Earth, so that everybody gets better off. It’s very easy in principle to take an ever-growing pie and divide it up in such a way that everyone gets better off and nobody gets seriously screwed over. But that’s not what happens by default, right? That’s not what’s been happening in recent decades. The poorest Americans have been getting actually poorer rather than richer. It’s an open question, I think, of how to deal with this.
Max Tegmark: This is not the question we should go blame my AI research friends for not having solved by themselves. It’s a question economists, political scientists, and everybody else has to get in on and think about: how do we structure our society to make sure that this great abundance ultimately gets controlled in a way that benefits us all?
Max Tegmark: The kind of tools that have already caused the problems that I mentioned — for example, weapons that can give an outsized power to very few, or machine learning tools that, through media and social media, let very few control very many — those obviously have to be part of the conversation we have. How do we make sure that those tools don’t get deployed in harmful ways, so that we get this democratically prosperous future that I’m hoping for?
Imagining a wide range of possible futures
Max Tegmark: This approach of just encouraging people to think about the positive future they want is very inspired by the rest of my life. I spend so much time giving career advice to students who walk into my office — and through 80,000 Hours, you have a lot of experience with this. And the first thing I ask is always, “What is the future that you are excited about?” And if all she can say is, “Oh, maybe I’ll get cancer. Maybe I’ll get run over by a bus. Maybe I’ll get murdered” — terrible strategy for career planning, right? If all you do is make lists of everything that can go wrong, you’re just going to end up a paranoid hypochondriac, and it’s not even going to improve your odds. Instead, I want to see fire in her eyes. I want her to be like, “This is where I want to be in the future.” And then we can talk about the obstacles that have to be circumvented to get there.
Max Tegmark: This is what we need to do as a society also. And then you go to the movies and watch some film about the future and it's dystopian. Almost every time it's dystopian. Or you read in the news what people are saying about the future: one crisis or disaster after another. So I think we, as a species, are making exactly the same mistake that we would find ridiculous if young people made it when we were giving them career advice. That's why I put this in the book.
Max Tegmark: And I also think it’s important that this job of articulating and inspiring positive vision is not something we can just delegate to tech nerds, like me. People who know how to train a neural network in PyTorch, that doesn’t give them any particular qualifications in human psychology to figure out what makes people truly happy. We want everybody in on this one, and talking about the destination that we’re aiming for. That’s also a key reason I wrote the book: I wanted people to take seriously that there are all these different possibilities, and start having conversations with their friends about what they would like today and their future life to be like — rather than just wait to see some commercial that told them how it was supposed to be. That’s the way we become masters of our own destiny. We figure out where we want to go and then we steer in that direction.
Recent advances in capabilities and alignment
Max Tegmark: We were definitely right seven years ago when we took this seriously as something that wasn’t science fiction — because a whole bunch of the things that some of the real skeptics then thought would maybe never happen have already happened.
Max Tegmark: And also, we’ve learned a very interesting thing about how it’s happening. Because even as recently as seven years ago, you could definitely have argued that in order to get this performance that we have now — where you can just, for example, ask for a picture of an armchair that looks like an avocado, and then get something as cool as what DALL·E made, or have those logical reasoning things from PaLM… Maybe for the listeners who haven’t read through the nerd paper, we can just mention an example. So there’s this text: “I’m going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that that’s in?” And it gives the correct answer.
Max Tegmark: So back even as recently as seven years ago, I think a lot of AI researchers would’ve said that that’s impossible to do, unless you have developed some fundamental new breakthroughs in logic-based systems, having some really clever sort of internal knowledge representation. You would really need to build a lot of new tools. And instead, what we’ve seen is it wasn’t actually necessary.
Max Tegmark: People have built these gigantic black boxes. You basically take a bunch of simulated neurons like we have in our brain — basically, you can think of them as wires sending voltages to each other in a certain way. And then you have a bunch of knobs, which are called “parameters,” which you can tweak, affecting how the different neurons affect one another. Then you have some definition of what good performance means. Like maybe answering a lot of questions correctly. And then it just becomes a problem of tweaking all these knobs to get the best performance. This we call “training.” And you can tell the computer what it means to be good, and then it can keep tweaking these knobs, which in computer science is called an “optimization problem.”
Max Tegmark: And basically, that very simple thing with some fairly simple architectures has gotten us all the way here. There have been a few technical innovations. There’s an architecture called “transformers,” which is a particular way of connecting the neurons together, whatever, but it’s actually pretty simple. It’s just turned out that when you just kept adding more and more data and more and more compute, it became able to do all of these sorts of things.
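To make the "knobs" picture concrete, here is a minimal sketch of such a training loop in PyTorch (the library Max mentions elsewhere in this episode). The tiny network, random data, and hyperparameters are purely illustrative assumptions; nothing here is meant to resemble GPT-3 or PaLM beyond the basic recipe:

```python
# Minimal sketch of "training = tweaking knobs to improve a score" in PyTorch.
# The network, data, and settings are toy assumptions for illustration.
import torch
import torch.nn as nn

# The "knobs": parameters of a small neural network
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))

# The definition of "good performance": low classification loss on some data
x = torch.randn(256, 10)          # toy inputs
y = torch.randint(0, 2, (256,))   # toy labels
loss_fn = nn.CrossEntropyLoss()

# The optimization problem: nudge every knob in the direction that reduces the loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()    # work out how each knob affects the score
    optimizer.step()   # tweak all the knobs a little

print(f"final loss: {loss.item():.3f}")
```

Scale the same recipe up to hundreds of billions of knobs, and swap the toy data for much of the internet, and you have the black boxes Max is describing.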
Max Tegmark: And frankly, this is to me the worst-case scenario we’re on right now — the one I had hoped wouldn’t happen. I had hoped that it was going to be harder to get here, so it would take longer. So we would have more time to do some AI safety. I also hoped that the way we would ultimately get here would be a way where we had more insight into how the system actually worked, so that we could trust it more because we understood it.
Max Tegmark: Instead, what we’re faced with is these humongous black boxes with 200 billion knobs on them and it magically does this stuff. A very poor understanding of how it works. We have this, and it turned out to be easy enough to do it that every company and everyone and their uncle is doing their own, and there’s a lot of money to be made. It’s hard to envision a situation where we as a species decide to stop for a little bit and figure out how to make them safe.
Regulatory capture
Max Tegmark: For example, let's look at some past failures. So in the 1950s, the first article came out in the New England Journal of Medicine saying smoking causes lung cancer. Even twenty years later, that whole idea was still largely silenced and marginalized, and it took decades until there was much policy, and warning labels on cigarettes, and restrictions on marketing cigarettes to minors. Why was that? Because of a failure of alignment. Big Tobacco was so rich and so powerful that they successfully pulled off a regulatory capture, where they actually hacked the system that was supposed to align them and bought it.
Max Tegmark: Big Oil did the same thing. They’ve of course known for a very long time that there was a little conflict between their personal profits and maybe what was best for society. So they did a regulatory capture, invested a lot of money in manufacturing doubt about whether what they were doing was actually bad. They hired really, really good lawyers. So even though in the social contract the idea had been that the governments would be so powerful that they could give the right incentives to the companies, that failed.
Rob Wiblin: I guess the companies became too close in power to the government, so they could no longer be properly constrained anymore.
Max Tegmark: Exactly. And whenever the regulator becomes smaller or has less money or power than those they're supposed to regulate, you have a potential problem like this. That's exactly why we have to be careful with an AI that's smarter than the humans that are supposed to regulate it. What I'm saying is it's trivial to envision exactly the same failure mode happening now. If whichever company first builds AGI realizes that they can take over the world and do whatever the CEO wants with the world — but that's illegal in the country they're in — well, they can just follow the playbook of Big Tobacco and Big Oil and take over the government.
Max Tegmark: I would actually go as far as saying that’s already started to happen. One of the most depressing papers I’ve read in many years was written by two brothers, Abdalla and Abdalla, where they made a comparison between Big Tobacco and Big Tech.
Max Tegmark: Even though the paper is full of statistics and charts that I'll spare you — people can find it on arXiv.org — they open with this just spectacular hypothetical: suppose you go to a public health conference. Huge conference, thousands of top researchers there. And you realize that the person on the stage, giving the keynote about public health and smoking and lung cancer and so on, is actually funded by a tobacco company. But nobody told you about that: it doesn't say so in the bio, and they didn't mention it when the speaker was introduced. Then you go out into the expo area and you see all these nice booths there by Philip Morris and Marlboro, and you realize that they are the main sponsors of the whole conference. That would be anathema at a public health conference. You would never tolerate that.
Max Tegmark: Now you go to NeurIPS — tomorrow is the deadline for my group to submit two papers; this is the biggest AI conference of the year — and you have all these people talking in some session about AI in society or AI ethics. And they forget to mention that they got all these grants from Big Tech. And then you go out to the expo area and there’s the Facebook booth and there’s the Google booth and so on and so forth. And for some reason, this kind of capture of academia that would be considered completely unacceptable at a public health conference, or for that matter a climate change conference, is considered completely OK in the AI community.
Articles, books, and other media discussed in the show
Max’s work:
- Our Mathematical Universe: My Quest for the Ultimate Nature of Reality
- Life 3.0: Being Human in the Age of Artificial Intelligence
- Max’s MIT research group and the Institute for Artificial Intelligence and Fundamental Interactions (which Max helped launch), which use ideas from physics to make AI safer by rendering its inner workings more transparent
- AI safety conferences through the Future of Life Institute, including the 2017 Asilomar Conference, which resulted in the Asilomar AI Principles (and included a panel discussion with AI industry and academia leaders about the likelihood of achieving superintelligence)
- Slaughterbots — a 2017 video developed with other leaders in AI safety and governance to warn of the risks of autonomous weapons (later confirmed by a United Nations report that they were used in Libya)
- Slaughterbots – if human: kill() — the 2021 followup to the original Slaughterbots video
- Improve The News — Max’s latest project to use machine learning to aggregate facts from the news and help readers identify media bias
- Machine-Learning media bias by Max and MIT student Samantha D’Alonzo
AI alignment work, regulations, and Big Tech:
- The Future of Life Institute’s Artificial Intelligence Grants Program and Vitalik Buterin PhD Fellowships in AI Existential Safety to incentivise more talented young people to enter the field
- Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
- Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell (and also see his TED Talk: 3 principles for creating safer AI)
- Meditations on Moloch by Scott Alexander on Slate Star Codex
- The EU Artificial Intelligence Act — which Max testified about at the European Parliament
- The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity by Mohamed Abdalla and Moustafa Abdalla
- Ethical issues in advanced artificial intelligence by Nick Bostrom
- Compute trends across three eras of machine learning by Jaime Sevilla and others
Media bias and ‘disinformation’:
- Noam Chomsky on bias in the BBC
- Thinking, Fast and Slow by Daniel Kahneman — Max believes most media consumption is dictated by the impulses of our ‘System 1’
- Disinformation Governance Board — a new advisory board of the US Department of Homeland Security
Other 80,000 Hours Podcast episodes:
- Brian Christian on the alignment problem
- Ben Garfinkel on scrutinising classic AI risk arguments
- Stuart Russell on the flaws that make today’s AI architecture unsafe and a new approach that could fix it
- Vitalik Buterin on effective altruism, better ways to fund public goods, the blockchain’s problems so far, and how it could yet change the world
Everything else:
- TED Talk: A robot that flies like a bird with Markus Fischer
- The Feynman Lectures on Physics
Transcript
Table of Contents
- 1 Rob’s intro [00:00:00]
- 2 The interview begins [00:01:19]
- 3 How Max prioritises [00:12:33]
- 4 Intro to AI risk [00:15:47]
- 5 Superintelligence [00:35:56]
- 6 Imagining a wide range of possible futures [00:47:45]
- 7 Recent advances in capabilities and alignment [00:57:37]
- 8 How to give machines goals [01:13:13]
- 9 Regulatory capture [01:21:03]
- 10 How humanity fails to fulfil its potential [01:39:45]
- 11 Are we being hacked? [01:51:01]
- 12 Improving the news [02:05:31]
- 13 Do people actually just want their biases confirmed? [02:16:15]
- 14 Government-backed fact-checking [02:37:00]
- 15 Would a superintelligence seem like magic? [02:49:50]
- 16 Rob’s outro [02:56:09]
Rob’s intro [00:00:00]
Rob Wiblin: Hi listeners, this is The 80,000 Hours Podcast, where we have unusually in-depth conversations about the world’s most pressing problems, what you can do to solve them, and whether a brutal dictator getting AGI first would be a good thing, or a bad thing. I’m Rob Wiblin, Head of Research at 80,000 Hours.
Max Tegmark will already be known to many of you as a gregarious activist polymath physics professor, as well as the founder of the Future of Life Institute, which works to address many of the problems we discuss on this show.
Today our central topic is advances in AI, which is a hot topic at the moment — given the wildly impressive (and for some of us, very alarming!) recent advances coming out of labs like OpenAI and DeepMind. These advances are starting to have real practical implications — probably the most important of which is that we’ve already started using OpenAI’s tool DALL·E to generate beautiful banner images for the blog posts that come out with each podcast episode.
You don’t need to know much at all about AI going in to follow this one, as Max starts off with a broad intro to the topic.
We then get into these recent advances in capabilities and alignment, the mood we should have, possible ways we might misunderstand the problem, and killer robots as nearer-term problems.
We then spend roughly the last third talking about Max’s current big passion: improving the news we consume — an endeavour I like but have a few reservations about.
All right, without further ado, here’s Max Tegmark.
The interview begins [00:01:19]
Rob Wiblin: Today, I’m speaking with Max Tegmark. Max is an MIT professor and president of the Future of Life Institute, a nonprofit organization that works to reduce existential risks facing humanity. After focusing on cosmology for 25 years, he shifted his MIT research group to machine learning six years ago, helping launch MIT’s Institute for Artificial Intelligence and Fundamental Interactions. That group is trying to use ideas from physics to make AI safer by rendering its inner workings more transparent.
Rob Wiblin: Outside his academic work, Max has had a hand in all sorts of interesting things over the years. For instance, he was an instigator of the so-called Puerto Rico Conference in 2015 and the Asilomar Conference in 2017 — events which brought together the heavy hitters in artificial intelligence research to discuss both the opportunities and the risks that were being created by their work and hopefully agree on principles that would reduce the latter.
Rob Wiblin: He has overseen $9 million in grants for technical research to make advanced AI safer, funded by Elon Musk, and another $25 million in grants more recently to tackle a range of global catastrophic risks — this time funded by Vitalik Buterin. He’s also been involved in international campaigns, not only to reduce threats from killer robots, but also nuclear weapons.
Rob Wiblin: Finally, aside from the many technical papers he’s written that I probably could not follow, Max is the author of Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, and Life 3.0: Being Human in the Age of Artificial Intelligence, both of which made the US bestseller lists. Thanks for coming on the podcast, Max.
Max Tegmark: Thank you. It’s a pleasure to be here.
Rob Wiblin: I hope we’ll get to chat about what should be done about the threats and opportunities arising from the progress we’re seeing every day in AI, and your new project to improve access to reliable news. But first, as always, what are you working on at the moment and why do you think it’s important?
Max Tegmark: I’m working on a number of things, both having a lot of fun helping lead the Future of Life Institute to reduce threats from nuclear war, synthetic biology, people doing dumb things with ever more powerful AI; and trying to steer the course of humanity in a direction where we can really flourish with all this tech rather than flounder with it. I’m also continuing to do some fun nerdy research with my MIT research group on machine learning, as you mentioned. And my big COVID project to not go stir-crazy during the pandemic was to focus on how we could use some of these machine learning tools to actually improve the media ecosystem, which, as we can come back to, I think is critically necessary for anyone interested in effective altruism and steering our course in a good direction.
Rob Wiblin: You’re one of these people who just seems to always be working on a ton of different things all at once. You’ve got the AI safety stuff, you’ve got the cosmology, this media thing is somewhat new, you’ve got nuclear weapons, bio and everything. In your mind, how do all of these different threads connect together and form a cohesive whole?
Max Tegmark: That’s a great question, because it first sounds like it’s just a random grab bag of stuff, even though it’s actually, in my mind, all part of one very simple story. It’s impossible to spend 25 years studying our universe without being struck by how far we are from fulfilling our potential as a human species. Here we are after 13.8 billion years of cosmic history, waking up on this little blue spinning ball in space, realizing that, oh my goodness, this awesome reality we’re in, it’s much grander than our ancestors thought. And even this vast Earth is just a teeny tiny bit of what life could ultimately be doing in the future.
Max Tegmark: And we are also realizing that we have much more potential to actually be the captains of our own destiny. We spent most of human evolutionary history, over 100,000 years, just trying to not get stepped on, or eaten up, or starving to death — lame stuff like that — and feeling very powerless, knowing that the next generation would have the same technology as the last generation. Yet what’s happening now is we are realizing that we had underestimated not just the size of our cosmos and our potential, but our ability to get empowered through looking carefully at the world around us. We’ve got science enabling us to understand enough about how our universe works that we’ve been able to build all this awesome technology that can put us in charge, and we can start shaping our own destiny.
Max Tegmark: If you look at some photos of the Earth today, it looks very different in many places from how it used to, because we’ve shaped it this way. And on a larger scale, if you look through a telescope, our universe still looks mostly dead. You can speculate about aliens, et cetera, but it looks, at face value, like the whole thing is mostly dead and life is just a tiny perturbation. Yet we now know that with the technology that we’re on the cusp of developing, there’s no reason whatsoever why we couldn’t help life spread and make much of our universe come alive too.
Max Tegmark: So those are empowering thoughts. We have so much potential. And yet turn on the news, and oops, what are we doing? One child is starving to death in Yemen every 11 minutes. We’re busy killing each other in stupid wars. And we might even be hoisting ourselves with our own petard by taking some technologies that could be used for great things and using them to ultimately drive ourselves extinct.
Max Tegmark: So in summary, I feel there’s both a hugely exciting opportunity and also a moral responsibility to think hard about the big picture. How can we take all this ever more powerful tech and steer it in good rather than bad directions? And before we delve into any of these individual topics, let’s just talk a bit about how the different pieces in the puzzle fit together.
Rob Wiblin: Yeah, yeah. Go for it.
Max Tegmark: For every science, of course, you can do great stuff with it and not-so-great stuff. So we physicists feel very proud that we gave the world lasers and computers and a lot of other things. We feel a lot more guilty about giving the world the 13,000 hydrogen bombs that we have on Earth right now, that have almost triggered catastrophic nuclear war about a dozen times by accident, and it keeps happening. And right now we have politicians here in the US Congress who are seriously talking about going to war with Russia: what could possibly go wrong there?
Max Tegmark: If you look at biology, again, of course, it has been hugely helpful to cure diseases and help us live longer and healthier lives. But you can also use biology, especially modern synthetic biology, to create new designer pandemics — which make all past diseases seem like kindergarten in comparison. Biologists, the scientists, have so far done the best job, I think, of really drawing red lines, saying “We’re not going to do that stuff.”
Max Tegmark: Then you've got chemistry. It's given us all these awesome new materials we have, but it also gave us climate change, which we're still struggling with. And the latest kid on the block among the sciences to really impact society is computer science, of course: going from being more of a gimmick to being something which impacts us really quite dramatically today. Manipulating us into filter bubbles where we start hating each other, increasing suicide and anorexia among teenage girls, you name it — or being used to kill people in new ways with slaughterbots. And as many people you've undoubtedly had on your podcast before have already mentioned, these are just the first warning signs that this is something which is either going to be the worst thing ever to happen to humanity, or the best.
Max Tegmark: And then finally, what does media have to do with any of this? Well, I talked about steering. We want as humans to be asking ourselves, not just, “How should we make our tech as powerful as possible?” but we want to ask, “What’s our destination? What are we trying to do with this?” My friend Jaan Tallinn likes to make this really great metaphor with rockets: you don’t just focus on building a rocket that’s really powerful without also figuring out how to steer it and think hard about where you want to go with it.
Max Tegmark: And how do we do this? One of the key things is we have to be able to get a good understanding of where we are right now and how the world actually is. If you try to live your life wearing earplugs and noise-canceling headphones, with your eyes covered, it's a lot harder. You just don't get reliable information about the world. It's even worse if you're wearing headphones that aren't noise canceling, and some person who doesn't like you keeps telling you about what's actually out there, and they're lying to you just to make you fall down as much as possible on Candid Camera or whatever.
Max Tegmark: Sadly, in many ways, that's what I feel the media ecosystem is doing to our species today. So much of what we find out about the world isn't actually the way the world is, but it's the way certain powerful entities in the world want us to think the world is. This idea that people are trying to manipulate us with biased reporting is, of course, as old as civilization itself. I grew up in Sweden, for example, and if you were to look at what people in Sweden were told about the news 500 years ago, from the King's guards or whatever, it was that the king is great and it's super awesome that we get to pay all these taxes to the king so that he can go and invade Ukraine. Sweden actually invaded Ukraine and got completely crushed at Poltava once.
Rob Wiblin: Yeah.
Max Tegmark: But they were told that this was great, and that’s why people went along with it. So, that’s not new at all. But what is new is how machine learning plays into this: where now, any kind of propaganda manipulation can be scaled up so dramatically that it starts to really shape and affect society. And as an effective altruist, I think it’s really important to not just sit and optimize, given what we currently believe the truth to be — but to also put a lot of effort into actually figuring out what the truth is, so we make sure we’re steering in the right direction.
Max Tegmark: So that's how I see this all fitting together. I've been fascinated by big questions. I'm very optimistic by nature, feeling that we have so much potential. And I would like to do what I can to help us seize that potential, rather than just squander it.
Rob Wiblin: Yeah, so to map out the conversation a little bit, as you mentioned a lot of different issues there: we’ve fortunately, over the last year or two, had quite a lot of episodes on climate change and nuclear weapons and threat of nuclear war, as well as biosecurity and pandemics obviously. So I think we mostly won’t focus on those, even though you’ve had a hand in a bunch of projects on those different problems. But actually we haven’t done an episode focused on artificial intelligence in like nine or 10 months, and it’s an area where an awful lot is happening. So that’s probably going to be a big part of the conversation. Then we’ll talk about this media stuff, which is your new exciting project where I think you maybe have a different perspective and some different tools that we’ve never discussed before.
How Max prioritises [00:12:33]
Rob Wiblin: Just before we dive into AI, you listed off so many different problems there, so many different threats to humanity and threats to our opportunity to really seize the day or seize the universe, take command of our future and our potential. How do you prioritize personally between all of these different issues that you could potentially try to contribute to? It seems like you’ve maybe made an active decision to do a little bit in all of them.
Max Tegmark: I take a very simple effective altruism approach, where I ask myself, "Where will I personally be able to have the most impact per unit of time spent?" So that means an upvote for something that's an important issue, which really could make a difference if I (or someone) succeeds; then multiply that by the chance that one really could succeed. And then also look at what I can personally do. There are some issues where I'm actually fortunate to have a better opportunity to contribute, because it's something I might know a lot about, for example. And that's basically it. And just like if you go invest money on the stock market, you don't want to put all your eggs in one basket — I feel it's the same with my time. Sometimes I can be fairly helpful with something, with a fairly small amount of my time. And then it's diminishing returns from there on.
Rob Wiblin: I haven’t followed exactly how you’re contributing to all these projects over the years, but it seems like a special sauce that you sometimes bring is getting things started, connecting people, convincing people that action can actually be taken and inspiring them to do things. You’re maybe like a founder character that often brings in cofounders, and then they can maybe spend 100% of their time specializing in their area and keeping it going. Is that impression true?
Max Tegmark: Yeah, maybe. I think two weird character traits that I have are, one, I tend to be very obsessed about big-picture stuff. That also explains my choice of nerdy academic research topics. The two biggest questions I could think of when I was a teenager lying in my hammock were our universe out there, and our universe here in our heads — how that works, all the AI stuff. And similarly, if you want to make the world better, look at it at the systems level.
Max Tegmark: And then the other trait — which also is very defining of me, for better or for worse — is I’m a doer. If people are just sitting around talking about things and talking some more and talking some more, I just feel really eager to actually start doing something, even if it’s a small step in the right direction. I even took that to the point that I made a New Year’s resolution to my wife on January 1, 2015, which was that from now on, I was no longer allowed to complain about anything, unless I actually put some time into trying to do something about it. It was “put up or shut up.”
Rob Wiblin: Yeah.
Max Tegmark: And that’s actually what led me to put all this work into starting the Future of Life Institute, for instance. I spent a lot of time also complaining earlier that people weren’t taking AI safety seriously and that it really needed to be more mainstream. And then I suddenly caught myself, and said, “Wait a minute, I’m not allowed to complain about this anymore, unless I actually put some work into mainstreaming it,” which is why we did the first Puerto Rico conference.
Rob Wiblin: Yeah. You’ve got to earn your complaining time.
Max Tegmark: Exactly.
Intro to AI risk [00:15:47]
Rob Wiblin: All right. Well, speaking of AI, let’s dive in. Obviously a big thread of your work has been trying to understand, What’s the potential here? What are the threats here? How might this AI story play out? What should we be doing to prepare? That was the bulk of what you wrote in Life 3.0: Being Human in the Age of Artificial Intelligence, which is five years old now, I guess probably you wrote it about six or seven years ago. That book covers a ton of ground, and I don’t want to go back over all of it, because there are themes that have shown up on the show fairly regularly before. But before we get to talking about where you think we stand today, it would be good to hear a bit about how you think about both the upsides and downsides we’re facing as humanity advances — what AI can do. Can you lead us in?
Max Tegmark: Yeah. So let’s start with the upsides, because that’s the most obvious and exciting. If you look around ourselves today, would you rather live in 2022 or during the Stone Age, if you could pick?
Rob Wiblin: I think I’ll go for today, Netflix is a lot better.
Max Tegmark: Life expectancy, also a lot better.
Rob Wiblin: Yeah.
Max Tegmark: A lot of things are a lot better. And if you look at why, you'll see that virtually everything you like about civilization today that we didn't have in the Stone Age is the product of human intelligence. So it's pretty obvious that if we could amplify our intelligence with artificial intelligence, we could make it even more awesome. We could solve all sorts of problems that our human intelligence has been stumped by so far. And if you get a bit more nerdy about it, which I love to do as a physicist, then it's quite obvious that if we really succeed in figuring out what intelligence is and how to build things that are as intelligent as the laws of physics allow, then we would suddenly shift to being limited not by what we can figure out, but just by the laws of physics.
Max Tegmark: And you can get nerdy on that and realize that it's just so many orders of magnitude more in all ways: if you ask how much compute per second you can get, or whether it's possible to travel to other galaxies, or basically any axis you're excited about, we're nowhere near the limits that the laws of physics place — the speed of light that you can't beat, stuff like that. And the secret to unlocking this is through artificial intelligence.
Max Tegmark: So that’s the upside: basically anything you would like to do, which we haven’t been smart enough to do yet — cure cancer; eliminate poverty; help people live exciting, inspiring lives; or for that matter, if you would like to help life spread throughout much of our observable universe — any of those things, artificial intelligence has a potential to give them to you. Moreover, you don’t have to wait millions of years like in the sci-fi novels. I believe it’s quite likely that we could get this in our lifetime. So that’s the upside.
Max Tegmark: But we also know that every tool we have is a double-edged sword. Not just a sword, but take something innocuous like fire, for example: it’s great for keeping warm in the winter where you live, or for making a nice barbecue — but you can also use it to burn down your neighbor’s house. So the technology itself isn’t morally good or morally evil. It’s a tool, and it comes down to what you use it for. Artificial intelligence is no different. What is different about AI isn’t that it’s morally neutral — it’s just that it’s powerful and it’s going to get dramatically more powerful. So of all the technologies that we need to make sure we put to good rather than bad use, AI is the one we need to pay the most attention to.
Rob Wiblin: I've talked about AI a bunch on the show, and in lots of conversations over the years. It's a very difficult one to know how to explain or message, because I can't think of almost any other issue where people come at it with just such strong and completely conflicting views. Some people just have a very strong attitude that it's going to end well: AI will come, it's going to be great, it'll be an extraordinary surprise if things went badly — and it's very hard to persuade them there's a problem. Many other people, their intuition is that this is a terrifying development: they're horrified, even before they've read a book like Superintelligence or something like that; it just sounds incredibly unnerving to them. Then you have other people who just have a very strong preconception that we'll never have really intelligent machines. I hear that less these days, though it was common 10 years ago.
Rob Wiblin: So yeah, I'm always a little bit unsure. Like, what should the second question here be? But I guess a natural one is, what's one way that things could plausibly play out that we might want to steer away from?
Max Tegmark: First of all, sociologically you're spot on there, Rob. People are really all over the place, even very educated people. And there are these two basic dimensions you outline. First, the question of how soon will this happen and how far will it go? You'll have some people like my former MIT colleague, Professor Rodney Brooks, who says we're not even going to get to human level for 300 years, and is 100% sure of it. And then you have other people — which aren't actually most people in the technical community now — who think it's going to happen a lot sooner. And recent polls of AI researchers tend to predict that decades from now we'll have AI that can do basically all our human jobs better or cheaper than we can. That's one axis, where you can basically classify people from techno-skeptics who think it's never going to work, to techno-optimists.
Max Tegmark: And then the other axis is: do they think this is going to suck or going to be great? Maybe it actually says more about people’s personality traits than anything scientific really where they land on that. Some people tend to be hopeful and optimistic about everything. Some people are more prone to wishful thinking than others.
Rob Wiblin: Yeah.
Max Tegmark: If we take the first one, the question of what will actually happen: I think a very common misconception, especially among nonscientists, is that intelligence is something mysterious that can only exist inside of biological organisms like human beings. And if we’ve learned anything from physics, it’s that no, intelligence is about information processing. It really doesn’t matter whether the information is processed by carbon atoms in neurons, in brains, in people, or by silicon atoms in some GPU somewhere. It’s the information processing itself that matters. It’s this substrate-independent nature of information processing: that it doesn’t matter whether it’s a Mac or a PC you’re running it on, or a Linux box, or for that matter what the CPU manufacturer is — or even whether it’s biological or silicon-based that matters.
Max Tegmark: It’s just the information processing that matters. That’s really been the number one core idea, I would say, that’s caused the revolution in AI: that you can keep swapping out your hardware and using the same algorithms. Once you accept that — that you and I are blobs of quarks and electrons that happen to be arranged in a way such that they can process information well — it’s pretty obvious that, unless you have way more hubris than I do, we are not the most optimized quark blobs possible for information processing, of course not.
Rob Wiblin: Yeah. That’d be quite a coincidence.
Max Tegmark: Yes. And then of course it’s possible, but the question then is how long will it take for us to figure out how to do it? A second fallacy that makes people underestimate the future progress is they think that before we can build machines that are smarter than us, we have to figure out how our intelligence works. And that’s just wrong. Just think about airplanes. When was the last time you visited the US?
Rob Wiblin: Oh, it’s actually been a while. It’s been a couple of years now.
Max Tegmark: So when you came over to the US, do you remember, did you cross the Atlantic in the mechanical flying bird, or in some other kind of machine?
Rob Wiblin: No, I think I came across in a plane rather than on an ornithopter.
Max Tegmark: There’s an awesome TED Talk that anyone listening to this should Google about how they actually built the flying bird. But it took 100 years longer to figure out how birds fly than to build some other kind of machine that could fly even faster than birds. It turned out that the reason bird flight was so complicated was because evolution optimized birds not just to fly, but it had all these other weird constraints. It had to be a flying machine that could self-assemble. Boeing and Airbus don’t care about that constraint. It has to be able to self-repair, and you have to be able to build the flying machine out of only a very small subset of atoms that happen to be very abundant in nature, like carbon and oxygen, nitrogen, and so on. And it also has very, very tight constraints on its energy budgets, because a lot of animals starve to death.
Max Tegmark: Your brain can do all this great stuff on 25 watts. It is obviously much more optimized for that than your laptop is. Once you let go of all these evolutionary constraints, which we don’t have as engineers, it turns out there are much easier ways of building flying machines. And I’m quite confident that there are also much easier ways of building machines with human-level intelligence than the one we have in our head.
Max Tegmark: It's cool to do some neuroscience, and I've written some neuroscience papers — steal some cool ideas from how the brain does stuff for inspiration. Even the whole idea of an artificial neural network, of course, came from looking at brains and seeing that they have neural networks inside. But no, my guess is that the first time we're really going to figure out how our brain works is when we first build artificial general intelligence — and then it helps us figure out how the brain works.
Rob Wiblin: Or at least probably.
Max Tegmark: Is my guess.
Rob Wiblin: That’s a very likely way to go.
Max Tegmark: Yeah.
Rob Wiblin: So that's one reason to think that the brain isn't so mysterious after all, and that we will likely be able to design machines that can do the same thing, or extremely similar things, long before we can completely reverse-engineer how the brain works. But what's the most likely way for that to not pan out as well as we hope?
Max Tegmark: Well, before talking about future problems, let’s just look at the problems that have already been caused by artificial intelligence. So have you noticed in England that people seem to hate each other a lot more now than 10 years ago?
Rob Wiblin: I wish I could say yes, because that’d be very convenient for this interview. Maybe I just don’t talk to enough English people, but I find people are pretty nice. But maybe I’m just very good at selecting my friends.
Max Tegmark: Could it be that you’re living in a bubble of people who all get along?
Rob Wiblin: Well, that’s what I aim for. So I think it’s working.
Max Tegmark: You can test this hypothesis: just go ask your circle of friends if they voted for Brexit or against Brexit, for example. I suspect it won’t be representative of the British population as a whole.
Rob Wiblin: No, I suspect not.
Max Tegmark: Which is why you guys get along great amongst each other. But we’ve definitely seen (even more so in the US, I would say, than in the UK) that a society where people used to have more of a shared understanding of the truth — where they agreed on “this is what’s going on” — has now fragmented into people living in these different parallel universes, where their understanding of reality is completely different. These filter bubbles.
Max Tegmark: And if you ask yourself what caused this, it’s easy to dismiss it and say, “It’s just because politician X, it’s his fault” or whatever. But I think that’s way too glib. It’s really important when you see a problem to actually do a proper diagnosis. You don’t want to go to the doctor and just be told that your problem is that you have a headache, and take this pill. You would like to know what’s causing it. Is it COVID-19, or is it pneumonia, or what?
Max Tegmark: And if you try to diagnose why we have so much more fragmentation of our Western societies into different groups that can hardly talk to each other anymore, I think it’s pretty obvious that the explanation cannot be just that some opportunistic politician came along — because we’ve had those since time immemorial, as long as there were politicians. If you go read ancient Roman history, they were just as unscrupulous — you can read Machiavelli, that’s not new.
Max Tegmark: What is new is the internet and machine learning. So it’s technology. And in particular, social media companies have deployed some of the most powerful optimization algorithms so far using machine learning to actually influence people’s behavior. They came at it with a goal that you might think is fairly harmless, if you’re into capitalism: they were just trying to increase the profit for their company, trying to figure out what to show people, to make them watch as many ads as possible — which is called “engagement” in marketing speak.
Max Tegmark: But what they hadn’t realized was that the machine learning algorithm was so smart that it would figure out how to do this way better than they thought, in ways that actually caused a lot of harm. It turned out that the best way to keep people glued to their screens was to really piss them off and show them all sorts of things that made them really, really angry. And whether they were true or false was completely irrelevant to the algorithm, as long as they kept clicking and kept watching the ads. And the algorithms also discovered that actually false information often spread faster than true information. Then gradually other powerful entities noticed how effective this was and started throwing their money into making people believe in their truths rather than other people’s truths, et cetera.
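To make the mechanism Max is describing concrete, here is a deliberately toy sketch (not from the episode, and not how any real platform’s code works) of an engagement-maximizing recommender: an epsilon-greedy bandit that learns which of a few made-up content items gets the most clicks. The item names and click-through rates are invented purely for the example.

```python
import random

# Toy "engagement optimizer": an epsilon-greedy bandit that learns which
# items get the most clicks. Nothing in the objective cares whether an
# item is accurate -- only whether it gets engagement.
# All item names and click rates below are made up for illustration.

ITEMS = {
    "calm_explainer":    0.02,   # hypothetical true click-through rates
    "outrage_headline":  0.08,
    "cute_animal_video": 0.05,
}

def simulate_click(item: str) -> bool:
    """Simulate whether a user clicks on the shown item."""
    return random.random() < ITEMS[item]

def run_bandit(steps: int = 50_000, epsilon: float = 0.1) -> dict:
    clicks = {item: 0 for item in ITEMS}
    shows = {item: 0 for item in ITEMS}
    for _ in range(steps):
        if random.random() < epsilon:      # explore occasionally
            item = random.choice(list(ITEMS))
        else:                              # otherwise exploit the best estimate so far
            item = max(ITEMS, key=lambda i: clicks[i] / shows[i] if shows[i] else 0.0)
        shows[item] += 1
        clicks[item] += simulate_click(item)
    return {item: shows[item] / steps for item in ITEMS}

if __name__ == "__main__":
    share_shown = run_bandit()
    for item, share in sorted(share_shown.items(), key=lambda kv: -kv[1]):
        print(f"{item:>20}: shown {share:.1%} of the time")
```

The point is simply that the objective, “maximize clicks,” never mentions truth or downstream harm, so the optimizer ends up showing the outrage item most of the time because it engages best.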
Max Tegmark: And this is already a very real problem in a country like the UK: we had Cambridge Analytica and Facebook doing things which were pretty instrumental to Brexit. And you might feel that that’s not perhaps such a big deal in the grand scheme of things. But if you go to Kenya, for example, these phenomena actually caused a lot of people to get killed in really horrible riots there. And then it’s not hard to imagine future wars starting over things like this, et cetera. So this is one example of how machine learning itself has caused very real social harm.
Max Tegmark: You even see it in smaller scales, which aren’t existential risks, but they cause a lot of suffering. Like there’s a much higher rate of anorexia now among teenage girls in the United States, which has been very directly linked to what they get shown on their social media. I want to just mention three examples of how machine learning is already causing harm in society, so people don’t come away from this thinking that we’re just sitting here freaking out about some future which might not happen.
Max Tegmark: The second one is in warfare. There’s a very honored tradition in the military that humans should take responsibility for things. You can’t just be in the British army and decide to go shoot a bunch of people because you felt like it. They will ask, “Who ordered you to do this? And who is responsible?” But there was a United Nations report that came out showing that last year, for the first time, we had these slaughterbots in Libya — which had been sold to one of the warring parties there by a Turkish company — that actually hunted down humans and killed them, because the machines decided that they were bad guys. This is very different from the drone warfare that’s mostly in the news now with Ukraine, for example, where there’s a human looking at cameras and deciding what to do. It’s where you actually delegate it to the machine: “Just go figure out who’s a bad guy and then kill them.”
Rob Wiblin: Do you know on what kind of basis the drones were making those decisions?
Max Tegmark: That was ultimately proprietary information from the company that they chose not to release.
Rob Wiblin: Wow. OK. I didn’t know that was already a thing.
Max Tegmark: And so far, relatively few people have been killed by this, and as usual it tends to be the more vulnerable people in developing countries who get screwed first. But it’s not hard to imagine that this is something that could escalate enormously. We don’t generally like to have weapons of mass destruction where very few can kill very many, because it’s very destabilizing. And these slaughterbots — if you can mass produce them for the cost of an iPhone each, and you can buy a million of them for a few hundred million dollars — would mean that one person, in principle, could then go off and kill a million people.
Max Tegmark: And you might think it’s fine because we can program these to only be ethical and only kill the good guys or whatever, if you don’t have any other moral qualms. But who’s to say what ethics you put into it? Well, the owner says that, right? So if the owner of them decides that the ethical thing to do is to kill everybody of a certain ethnic group, for example, then that’s what these machines will go off and do. And I think this kind of weapon of mass destruction will be much more harmful to the future of humanity than any of the ones we’ve had before, precisely because it gives such outsized power to a very tiny group of people. And in contrast to other conflicts where we’ve had a lot of people do bad things, there were often officers or some soldiers who refused to follow orders or assassinated the dictator or whatever. These machines are the ultimate Adolf Eichmann on steroids, who have been programmed to be just completely loyal.
Max Tegmark: So when we started warning about this, we worked with Stuart Russell, for example, to make this video called Slaughterbots a few years ago, which actually has racked up over a million views now. Some people accused us of being completely unrealistic, and now they’ve stopped saying that, because they’ve been reading in the newspaper that it’s already happened. So that’s something that’s here.
Max Tegmark: The third one is income inequality. I haven’t looked specifically at the UK income inequality recently, but in most Western countries, it’s gone up a lot. And some populist politicians like to blame the Chinese or the Mexicans for this, others like to blame some political party for this. But many of my economist friends argue that maybe the number one cause is actually information technology itself.
Max Tegmark: It’s pretty obvious if you replace human workers with machines, that the money that was previously paid in salaries to humans for doing the work will now get paid to the owners of the machines — who were normally richer to start with. So that drives inequality. And a very simple example to look at is if you just compare, for example, Ford with Facebook: Ford has way more employees and a much smaller market cap than Facebook. Last time I checked, I think Facebook has about 100 times as much market cap per employee as Ford does.
Max Tegmark: So Facebook is more the company of the future. More companies will become Facebook-like, where much more of the value is added by the machine learning than it is at Ford. And I think unless we take some corrective action here, what’s going to happen by default is just that an ever-smaller fraction of the humans on Earth will control an ever-larger fraction of the wealth. You might think that’s fine, because “the rising tide raises all boats,” but if you actually just look at absolute income corrected for inflation, you see that Americans without a college degree are often poorer than they were even 40 years ago. So a lot of the anger that’s been fueling certain politicians, helping them get elected, is actually, I believe, driven by this: that people notice something which is actually real, which is that they’re getting screwed by society.
Rob Wiblin: Yeah. They’re not sure what the future holds for them. I’m a little bit skeptical of the first one there, about algorithms driving social division. But maybe we can come back to that in the later section about media.
Max Tegmark: Yeah. I would love to.
Rob Wiblin: I slightly worry that that one’s been overhyped. But yeah, the other two either definitely are problems today, or will be problems in future. And certainly I think algorithms could be used to drive social division. My question maybe is just whether they have been the main culprit over the last few years.
Superintelligence [00:35:56]
Rob Wiblin: Maybe let’s push forward a little bit in time, from the kind of issues that we’re facing in the here and now to the issues we might face with intelligent systems that are much more impressive than what we have now — in 10, or 20, or 30 years’ time. Would that look significantly different?
Max Tegmark: Absolutely. So it’s important to not conflate artificial general intelligence — or stuff close to it, which can start doing an ever larger fraction of human jobs — with what Nick Bostrom and others would call “superintelligence.” If we start with the former here, machine learning keeps getting better and better: we get self-driving cars that are great, and we get all sorts of other jobs being increasingly complemented or augmented and replaced by ML. Even before we get to the point where we have artificial general intelligence, which can just do all our jobs, some pretty spectacular changes are going to happen in society.
Max Tegmark: It could be great, in that we might just produce this abundance of services and goods that can be shared so that everybody gets better off. Or it could kind of go to hell in a handbasket by causing an incredible power concentration, which is ultimately harmful for humanity as a whole. If you’re not worried about this, then just take a moment and think about your least favorite political leader on the planet. Don’t tell me who it is, but just close your eyes and imagine the face of that person. And then just imagine that they will be in charge of whatever company or organization has the best AI going forward as it gets ever better, and gradually become in charge of the entire planet through that. How does that make you feel? Great, or less so?
Rob Wiblin: Less so.
Max Tegmark: We’re not talking about the AI itself, the machine taking over. It’s still this person in charge.
Rob Wiblin: It seems suboptimal.
Max Tegmark: But you don’t look too excited.
Rob Wiblin: No. I would not be psyched by that.
Max Tegmark: Yeah. So that’s the challenge then. We can already see slow trends in that direction. Just look at the stock market: what were the largest companies in the US, for example, 10 years ago? They were oil companies and this and that and the other thing. Now, all the largest companies on the S&P 500 are tech companies, and that’s never going to be undone. Tech companies are gradually going to continue consolidating, growing, and eating up more and more of the lunch of the other companies, and become ever more dominant. And those who control them, therefore, get ever more power.
Max Tegmark: I personally am a big democracy fan. I love Winston Churchill’s quip there, that democracy is a terrible system of government, except for all the others. If we believe in the democratic ideal, the solution is obviously to figure out a way of making this ever-growing power that comes from having this tech be in the hands of the people of Earth, so that everybody gets better off. It’s very easy in principle to take an ever-growing pie and divide it up in such a way that everyone gets better off and nobody gets seriously screwed over. But that’s not what happens by default, right? That’s not what’s been happening in recent decades. The poorest Americans have actually been getting poorer rather than richer. It’s an open question, I think, how to deal with this.
Max Tegmark: This is not the question we should go blame my AI research friends for not having solved by themselves. It’s a question economists, political scientists, and everybody else has to get in on and think about: how do we structure our society to make sure that this great abundance ultimately gets controlled in a way that benefits us all?
Max Tegmark: The kind of tools that have already caused the problems that I mentioned — for example, weapons that can give an outsized power to very few, or machine learning tools that, through media and social media, let very few control very many — those obviously have to be part of the conversation we have. How do we make sure that those tools don’t get deployed in harmful ways, so that we get this democratically prosperous future that I’m hoping for?
Max Tegmark: And now that we’ve talked about that, suppose we get artificial general intelligence. Now we have machines that can basically do all our jobs for us. It can either be great, in that they just produce everything we need and we can have fun all day long… Let’s go with that happy scenario for a moment. This is clearly not necessarily a stable situation though. Irving J. Good articulated this very nicely way, way back, by pointing out that if you have machines that can do all jobs as well as humans, that includes jobs like my job and your job — it includes the job of AI development. Which means that after that, the typical R&D cycle to make the next-generation AI system that’s twice as good, or whatever, might not take the two years it takes for humans.
Rob Wiblin: Could speed up a lot.
Max Tegmark: It might take two weeks or two days or two hours or two minutes. It can speed up a lot. And then the new 2.0 version can similarly make the 3.0 version and the 4.0 version, and you get this exponential growth. Whenever you have anything that just keeps doubling at regular intervals, that’s what we call an “explosion,” right? If it’s number of people that is doubling, we call it a “population explosion.” If it’s the number of neutrons in a hydrogen bomb, we call it a “nuclear explosion.” So it’s very appropriate to call this, if it’s the intelligence that keeps doubling, an “intelligence explosion.” And if that happens, the most natural place for that to stop is when it just bumps up against the laws of physics, and you just can’t get any better. Which, as we talked about in the beginning, is just so many orders of magnitude above human intelligence that these intelligent machines will seem completely godlike to us.
Rob Wiblin: Yeah. Although I suppose it’s not strictly necessary that things have to go that far. You could imagine a situation where you do create AI systems that are better than humans at programming AI systems and improving them. They’re better ML researchers, so they turn their computing power towards doing that. However, the problem of programming machines better to be more intelligent gets harder the closer to optimal that they are. So even though they’re getting better at the task as they get smarter, the task on the margin is getting a bunch harder. And so perhaps the explosion is more gradual, or ultimately it could kind of level off at some point, or at least slow down for some stages. However, as long as they’re leveling off or continuing to grow to a point that’s substantially above human level, then there’s a lot of potentially interesting things that could happen. Even if you’re not hitting some actual limit set by physics.
Max Tegmark: Absolutely. I confessed to you earlier that I like betting. Everybody’s entitled to at least one vice, right?
Rob Wiblin: Yeah.
Max Tegmark: And I’m absolutely not betting 100% that we are going to get superintelligence. But what I am saying is, first of all, it’s a possibility we need to take seriously — it’s not crazy impossible science fiction. And second, if you do get into betting: at the Asilomar conference in 2017 — where we had the benefit of having many of the leaders from both AI industry and AI academia — I actually asked a panel full of folks (you can see it on YouTube) if they think we are going to get superintelligence. Demis Hassabis, the CEO of DeepMind, was there and he said yes. We had a bunch of AI professors who were there and said yes, et cetera. In fact, everybody on the panel, more or less, said yes. That doesn’t tell us it is going to happen. It just tells us that we have to take it seriously as a possibility. We can’t just dismiss this out of hand.
Max Tegmark: If you like to think big, you might find this to be actually a very inspiring idea. Maybe this could be the next stage in the development of life in our universe. You can ultimately unlock this intelligence inherent in our universe that’s just never really come out before, and put it to all these awesome uses. But on the other hand, as we talked about earlier, every technology is a double-edged sword. So if you can screw up a little bit with a fire or with a knife, there’s a lot more potential for screwing up in a very big way with superintelligence. And Nick Bostrom’s book is full of examples of that.
Max Tegmark: So I don’t want to conflate those two things with one another, but it’s not too soon. It’s kind of like if you’re playing chess, you have to pay attention to your next move and be thinking two moves ahead. But you also need to, at the same time, have a little bit of a strategic look ahead much further, and that’s what we need to do now also. We need to make sure that social media doesn’t get out of control and we don’t have horrible income inequality and wreck our democracy, blah, blah, and whatever. But we also have to keep thinking about this — because the situation today, let’s face it, is that there is so much money to be made by further improving the powerful AI systems we have right now that there’s a huge commercial pressure to do it more. And all the top students at MIT want to study it and investors want to invest in it.
Rob Wiblin: The amount of money that’s going into attracting talent and just buying computing power for this purpose has exploded. We can link to some people who’ve looked into this. I mean, the amount of compute going into it has increased by orders of magnitude over the last five to 10 years.
Max Tegmark: Exactly. So we are in a situation where, short of some other kind of cataclysm that we accidentally mess ourselves up with — a nuclear war that causes nuclear winter, or someone 3D printing some virus that kills everybody, or whatever — short of something like that, I would definitely put most of my money on us getting to artificial general intelligence, and a fair bit beyond it, in our lifetime. That would be my bet.
Max Tegmark: So you might think, “OK, well, we’ll cross that bridge when we get there. If it happens in a few decades, then we figure out how to make it safe.” But that’s clearly too late to think about it. Because the one thing we’ve definitely learned from AI safety research so far is that it’s hard — really hard — and it might take decades to actually solve this, which means we should start now. Not the night before some folks on too much Red Bull switch it on.
Rob Wiblin: Yeah.
Max Tegmark: Can I just add one more thing for counterarguments?
Rob Wiblin: Yeah, yeah. Go for it.
Max Tegmark: It’s remarkable how often people conflate the claim that something bad might happen with the claim that something bad is guaranteed to happen. Those are logically very distinct statements. And to make the case for doing AI safety research and worrying a bit, you only need to believe that it might happen with a reasonable chance of having a big impact.
Rob Wiblin: For sure. Yeah. I mean, to be honest, I don’t hear a lot of these kinds of dodgy counterarguments much these days. Because as AI has become so much more real, and people can see what it can do, I feel like there’s a lot more buy-in from a lot more people to be interested in all kinds of different angles of figuring out how society’s going to deal with this. But 10 or 20 years ago, people would sometimes offer some counterargument for thinking that maybe the risk posed by AI is less than I might have been suggesting — and then feel like their job was done, and we should just do nothing about it. And I’m like, no. We should definitely be thinking about this, preparing for it, and taking precautions unless we’re absolutely sure that it’s going to be fine. And we absolutely cannot be sure it’s fine, because we have no idea when it’s going to come or how it’s going to play out; the future is very hard to predict.
Max Tegmark: Exactly.
Rob Wiblin: Anyway, let’s maybe push on from this kind of background understanding of AI as a problem. For people who are interested to get more of that, of course they could read your book, Life 3.0. They might also want to go back in The 80,000 Hours Podcast archives. Probably the best episode actually is episode 92: Brian Christian on the alignment problem, where he talks about his book, The Alignment Problem. For people who’d like a bit more of a skeptical take and want to hear some counterarguments, there’s episode 81: Ben Garfinkel on scrutinising classic AI risk arguments.
Imagining a wide range of possible futures [00:47:45]
Rob Wiblin: Pushing on, in the book Life 3.0, which I read this week, you gave yourself a lot of creative license to consider a really wide range of possible futures, both good and bad, that could await us. You imagined us staying on Earth, as well as spreading through space at virtually the speed of light. You imagined a world where every individual has near-unlimited technology, and other worlds where we’re kind of all laboring under the yoke of a totalitarian state that we can’t get rid of. You thought about worlds where we might try to maximize pleasure, and other ones where we might try to preserve nature and pursue other values. You had some scenarios where one AI or one group comes to dominate everything, and others where there’s tons of different AI systems and none particularly stands out.
Rob Wiblin: I read some reviews of the book on Goodreads, and it seemed like this was maybe the aspect of the book that most divided readers. Some absolutely love this sort of exploration, and others ripped into it as kind of idle and unhelpful speculation. I’m more in the former camp, as people might imagine. You not only do all this, but you really encourage people to go away and think for themselves, and even write up what they would like the future to look like out of all of these imaginable futures. What kind of experience did you have trying to get people to think more seriously about ideal worlds that they might want to help create?
Max Tegmark: That’s a great, great question. This approach of just encouraging people to think about the positive future they want is very inspired by the rest of my life. I spend so much time giving career advice to students who walk into my office — and through 80,000 Hours, you have a lot of experience with this. And the first thing I ask is always, “What is the future that you are excited about?” And if all they can say is, “Oh, maybe I’ll get cancer. Maybe I’ll get run over by a bus. Maybe I’ll get murdered” — that’s a terrible strategy for career planning, right? If all you do is make lists of everything that can go wrong, you’re just going to end up a paranoid hypochondriac, and it’s not even going to improve your odds. Instead, I want to see fire in their eyes. I want them to be like, “This is where I want to be in the future.” And then we can talk about the obstacles that have to be circumvented to get there.
Max Tegmark: This is what we need to do as a society also. But then you go to the movies and watch some film about the future, and it’s dystopian. Almost every time it’s dystopian. Or you read something in the news about the future, and it’s one crisis or disaster after another. So I think we, as a species, are making exactly the same mistake that we would find ridiculous if young people made it when we were giving them career advice. That’s why I put this in the book.
Max Tegmark: And I also think it’s important that this job of articulating and inspiring positive vision is not something we can just delegate to tech nerds, like me. People who know how to train a neural network in PyTorch, that doesn’t give them any particular qualifications in human psychology to figure out what makes people truly happy. We want everybody in on this one, and talking about the destination that we’re aiming for. That’s also a key reason I wrote the book: I wanted people to take seriously that there are all these different possibilities, and start having conversations with their friends about what they would like today and their future life to be like — rather than just wait to see some commercial that told them how it was supposed to be. That’s the way we become masters of our own destiny. We figure out where we want to go and then we steer in that direction.
Rob Wiblin: Yeah. I think this has actually been shifting over my lifetime, but I’ve had this impression that a kind of background view that people have is that “serious people” think about slightly changing tax rates and what implications that’s going to have for society. And it’s unserious people who think about such big-picture issues as like, what do we want the world to look like in 100 years or 1,000 years? Did you have any luck getting more buy-in for thinking at that scale among, you know, important people who have serious jobs?
Max Tegmark: No, actually, I have to eat some humble pie here: I find basically no correlation between people’s level of education or academic credentials and how prone they are to big-picture thinking, or for that matter how nice they are. There are some people who have a very strong moral compass, and some that don’t. There are some people who are very altruistic, some people who are very egoistic, some even psychopathic. I haven’t really in my life seen any correlation between that and how many PhDs or professorships they have.
Max Tegmark: So sometimes you’ll find someone like Freeman Dyson, for example: amazing physics professor, great hero of mine, who loved to think super big picture. He wrote the first really scientific book about the distant future of our universe. I mentioned him a lot in my book, some of the things he concluded. On the other hand, you find a lot of equally super-talented Nobel laureates in physics, who just want to go optimize this little machine here to work better.
Rob Wiblin: Tiny corner of the universe.
Max Tegmark: And they actually pride themselves on it. One professor — I won’t name him — we were discussing some of the big-picture things, about what quantum mechanics really means for the nature of reality. And he was like, “Ah, I don’t care about that. And I’m proud that I don’t care about it, because I’m a tinkerer. I just want to think about how to build little things.”
Max Tegmark: Another moment like that that really stuck with me was when I gave a talk at this conference. I was there because we had built a radio telescope to try to make the largest 3D map of our universe. Some other people who were there were building the world’s largest radars to track their nuclear weapons targeting. And you would think someone who’s constantly working on nuclear weapons all day long and thinking about where the hydrogen bombs are going to go would at least be kind of interested in where are they going to land, and “What’s the social impact of my work?” No, they were just bragging about, “Oh, my radar is bigger than your radar. Let me tell you how big my radar is.”
Max Tegmark: Someone who knows more about psychology or evolutionary psychology than I do can maybe explain why that is. But I get the sense that some combination of nature and nurture determines where people are on this spectrum from having a strong moral compass to not, and from thinking about the big-picture consequences of their actions and not. And it really has very little to do with education or stuff like that.
Rob Wiblin: Yeah, yeah. It is a tendency that really worries me. I think one cause is that in domains that are extremely competitive, where most people get weeded out, the people who want to spend some of their time studying the classics, or thinking about moral philosophy and musing about “Who am I, and what’s my place in the universe?” — rather than spending all of their time tinkering with machines and focusing on their very special area of science and tech, or getting elected and schmoozing with the right people — tend to get weeded out, because those broader interests don’t provide a competitive advantage within their field. So maybe in order to win the Nobel Prize, you just don’t have time to think about the nature of reality, potentially.
Max Tegmark: Well, yes and no. Just to cheer up some listeners to this podcast who want to go into science and think about the big picture, so they don’t think they’re going to be automatically weeded out, can I give a counterexample also to this?
Rob Wiblin: Absolutely.
Max Tegmark: Albert Einstein: why was it that he was one of the most impactful physicists, ultimately, of all time? Was it because he was better at math? No, it wasn’t. In fact, you can see I have behind me on the wall here some of his most famous equations — and he wasn’t even the first person to write down some of the key equations that he got famous for; they weren’t that hard. But he was the only one who had this obsession about the big picture, and kept asking, “Well, what does this mean exactly? What does it mean that when you’re going at that speed that it looks like the time is running at a different rate?” Other people just dismiss this as stupid philosophy and whatever.
Max Tegmark: But he was obsessed about precisely the big picture, and it was because of that that he made his breakthroughs. He saw the big picture that others had failed to see. So this is actually a very important trait for some of the greatest scientific discoveries. Also some of the most successful business people I think have been successful because they were able to see the bigger picture and see longer term than others.
Max Tegmark: That said, you’re right. In some situations — maybe winning a lot of elections — not seeing the big picture might help, unfortunately. Maybe that says something about why we get the politicians we do. But I don’t want to disappoint those who are listening. I think if someone listening to this is obsessed about the big picture: nurture that, is what I say. You’re going to have much more positive impact as a result.
Rob Wiblin: Yeah. I definitely agree that it’s important.
Max Tegmark: And even for your personal happiness in your own life: if you’re just constantly doing what you’re told and learning to do things 10% faster, that’s not the way you’re going to end up living the most happy and fulfilling life. To do that, you need to see the big picture in your own life also, and ask: what is it that really makes me happy? If I think outside the box, what could I do to have an even more interesting life? I think those traits are actually very helpful both in our personal lives and for safeguarding a good future for humanity and life itself and the cosmos.
Recent advances in capabilities and alignment [00:57:37]
Rob Wiblin: Yeah. So we’re recording this in May 2022, pretty soon after the release of DALL·E by OpenAI — it’s this ML system that can consistently draw really beautiful and usually appropriate images based on a pretty short description of what the user wants. If I was a professional illustrator, I think I’d be pretty worried about losing work to AIs like this. As someone who’s not an illustrator, I’m mostly just very excited to have a lot more art around me, given how much cheaper it’s going to be to create beautiful designs in future.
Rob Wiblin: In a different domain, there’s Google’s Pathways Language Model (PaLM), which has what I think is a record number of parameters, 540 billion, and seems to be able to frequently answer really complex questions pretty sensibly, and even explain jokes frankly a bunch better than I’d be able to. Another language model, GPT-3, seems to be able to write simple computer programs based on a short description of what the user wants to do, and do that well a decent fraction of the time. Prediction platforms like Metaculus have seen their aggregate forecasts of when we’re going to first see a general AI that meets various natural capability benchmarks fall by five to 10 years just over the last month or so.
Rob Wiblin: On a personal level, I am kind of freaking out, because it seems like we are maybe just getting shockingly close to AI systems understanding the world we live in in some intuitive way, and being able to make logical inferences that seem an awful lot like the way that I do logical inferences. And I don’t really see why if we can produce language models like this now, we might not have language models that can speak more or less in a way that’s indistinguishable from people in a few years. People talk about a few decades, but why not a few years? Or further, why won’t we be able to make AI systems that can act on that understanding in the world to achieve goals relatively soon? Do you have a take or personal view on how striking we should find all of this recent progress in AI?
Max Tegmark: I think it’s very striking. It’s part of a very striking trend of accelerating progress. When we did the Puerto Rico meeting in January 2015, that was a time when even talking about the things that you’ve been asking me about now was generally considered so disrespectful that you wouldn’t do it except with trusted friends in a bar. And people who warned about it, or even talked too much about artificial general intelligence being possible, were often dismissed as just crazy philosophers. That’s why we put so much effort into trying to mainstream the issue and bring together the people who were building the stuff and the people who were worried about the stuff — to get them talking to each other and working together on not just making these systems powerful, but making them safe.
Max Tegmark: I brought this up because that was 2015. So we did a poll then, seven years ago, asking people to predict when various things were going to happen. And then we’ve been tracking similar polls, year after year. It’s interesting to see that so many other things are happening faster than even the world experts predicted seven years ago.
Max Tegmark: You mentioned a great assortment of language models and amazing systems that have just come out very recently. You can also add to that the Gato system that DeepMind just put out, where one single system can do over 200 different kinds of tasks — from playing computer games to actually moving a robot arm, et cetera. This is obviously part of an ongoing trend.
Max Tegmark: What this tells me, first of all, is that we were definitely right seven years ago when we took this seriously as something that wasn’t science fiction — because a whole bunch of the things that some of the real skeptics then thought would maybe never happen have already happened.
Max Tegmark: And also, we’ve learned a very interesting thing about how it’s happening. Because even as recently as seven years ago, you could definitely have argued that in order to get this performance that we have now — where you can just, for example, ask for a picture of an armchair that looks like an avocado, and then get something as cool as what DALL·E made, or have those logical reasoning things from PaLM… Maybe for the listeners who haven’t read through the nerd paper, we can just mention an example. So there’s this text: “I’m going to go to this mountain with the faces on it. What is the capital of the state to the east of the state that that’s in?” And it gives the correct answer.
Max Tegmark: So back even as recently as seven years ago, I think a lot of AI researchers would’ve said that that’s impossible to do, unless you have developed some fundamental new breakthroughs in logic-based systems, having some really clever sort of internal knowledge representation. You would really need to build a lot of new tools. And instead, what we’ve seen is it wasn’t actually necessary.
Max Tegmark: People have built these gigantic black boxes. You basically take a bunch of simulated neurons like we have in our brain — you can think of them as wires sending voltages to each other in a certain way. And then you have a bunch of knobs, called “parameters,” which you can tweak, affecting how the different neurons affect one another. Then you have some definition of what good performance means, like maybe answering a lot of questions correctly. And then it just becomes a problem of tweaking all these knobs to get the best performance, which in computer science is called an “optimization problem.” This tweaking is what we call “training”: you tell the computer what it means to be good, and then it keeps adjusting these knobs.
Max Tegmark: And basically, that very simple thing with some fairly simple architectures has gotten us all the way here. There have been a few technical innovations. There’s an architecture called “transformers,” which is a particular way of connecting the neurons together, but it’s actually pretty simple. It just turned out that when you kept adding more and more data and more and more compute, it became able to do all of these sorts of things.
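As a minimal illustration of the “knobs plus a performance measure” recipe described above (a toy sketch, not code from any of the systems discussed), here is gradient descent tuning just two knobs. “Training” a large model is the same idea scaled up to hundreds of billions of parameters.

```python
import numpy as np

# A tiny model with two "knobs" (parameters w and b), trained by the recipe
# Max describes: define a performance measure (mean squared error here),
# then repeatedly nudge the knobs in the direction that improves it.

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=200)   # data from a hidden rule

w, b = 0.0, 0.0      # start with the knobs at arbitrary settings
lr = 0.1             # how far to nudge the knobs each step

for step in range(500):
    pred = w * x + b
    error = pred - y
    loss = np.mean(error ** 2)        # the "definition of good performance"
    grad_w = 2 * np.mean(error * x)   # which way to turn each knob
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, loss={loss:.4f}")  # roughly 3.00 and 0.50
```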
Max Tegmark: And frankly, this is to me the worst-case scenario we’re on right now — the one I had hoped wouldn’t happen. I had hoped that it was going to be harder to get here, so it would take longer. So we would have more time to do some AI safety. I also hoped that the way we would ultimately get here would be a way where we had more insight into how the system actually worked, so that we could trust it more because we understood it.
Max Tegmark: Instead, what we’re faced with is these humongous black boxes with 200 billion knobs on them and it magically does this stuff. A very poor understanding of how it works. We have this, and it turned out to be easy enough to do it that every company and everyone and their uncle is doing their own, and there’s a lot of money to be made. It’s hard to envision a situation where we as a species decide to stop for a little bit and figure out how to make them safe.
Max Tegmark: So this is what I think we learned here in summary: both that at least getting to subhuman intelligence, kind of what we have now, ended up being easier than I think a lot of people had thought; and unfortunately, it didn’t require switching to a more interpretable and understandable architecture either.
Rob Wiblin: Just make the black box bigger. More knobs.
Max Tegmark: Yeah. The AI safety research we do at MIT is exactly focused on opening up the black box. This is often called “interpretability” in the community, but I like to call it “intelligible intelligence,” because you want to make it a sort of interpretability on steroids. The most extreme example would be if you think about how human intelligence works.
Max Tegmark: Galileo, for example: if his dad threw him an apple, he would be able to catch it. He had a neural network in his head, which had gradually learned to figure out the shape in which apples fly under the influence of gravity. And then when he got older, he realized, “Wait a minute: all apples, all things, always go in the same shape.” The thing we call a parabola, you can write a math equation for it: y = x^2. And he started writing down these physics laws. Now all of a sudden he had taken the information in the black box neural network in his head, which kind of worked (it caught the ball) but was very opaque, and gotten the knowledge out and written it down in a symbolic way that he could explain to his friends and colleagues.
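For anyone who wants the textbook version of the law Galileo distilled, here is the standard projectile-motion result (a well-known derivation, not something worked through in the conversation). An apple launched with horizontal speed $v_x$ and vertical speed $v_y$ satisfies

$$x(t) = v_x\,t, \qquad y(t) = v_y\,t - \tfrac{1}{2}\,g\,t^{2} \quad\Longrightarrow\quad y = \frac{v_y}{v_x}\,x - \frac{g}{2\,v_x^{2}}\,x^{2},$$

which is a parabola in $x$; the y = x^2 mentioned above is just its simplest special case.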
Max Tegmark: And even when we don’t do math, but when we speak English, it’s the same sort of thing. We distill out the knowledge from our black box and we explain it to other people when we speak to each other, like right now on the podcast. And I personally think that one of the things that we absolutely need to figure out how to do, if we’re going to ever be able to trust superintelligent systems, is to not just put them in charge of human lives while they’re a black box. You can use the learning power of black boxes to have them learn a bunch of cool stuff, but then before you deploy it, you want to extract out what they have actually learned and put it into some system that you completely understand instead.
Max Tegmark: So think, for example, about how we send rockets to the International Space Station. We took all the knowledge about how things move under gravity and all of that stuff, and distilled it out into equations that we could then program into the computer on that spaceship, where we can verify that it’s going to do exactly what we want and it’s not going to crash down on your house in England. And then we send it off. The spaceship performs just as well as it would if it were a black box — in fact, even a little bit better, because we have all these formal methods.
Rob Wiblin: We’re verifying its behavior.
Max Tegmark: Yeah. So this is something we’ve been doing a lot, and with some success, I have to say. For example, my favorite thing we’ve managed to do so far was to take a bunch of data from different physics formulas. We took the 100 most complicated or famous equations from one of my favorite physics textbook series, The Feynman Lectures on Physics. Each formula is always of the form “something equals some complicated formula of some other variables,” so for each one we would just make an Excel spreadsheet out of it, with a column for each of the variables, and the last column would be the result. So for Newton’s law of gravity, you would have the force in the last column, and you would have the masses and the distance and blah, blah, blah in the other columns.
Max Tegmark: We didn’t tell the computer what the formula was. We were like, “OK, here’s the data. Can you figure out what the formula is?” This is a problem which, if the formula is what we in geek speak call a “linear formula” — where you’re just basically multiplying each column by some number and adding it up — it’s called “linear regression” and engineers all over the world do it all the time; it’s very easy. But for general formulas, it’s super hard. Johannes Kepler, for example, about 400 years ago, he had managed to get hold of the data that Tycho Brahe had gathered of how the planets move around the solar system. He spent four years staring at this data until he figured out, “Oh, the formula is an ellipse.” Planets are going in an ellipse.
Rob Wiblin: Oh wow. OK.
Max Tegmark: Our code was able to discover that automatically in just an hour. And in fact, it was able to rediscover all of these 100 physics equations just from data like this. This is just a tiny step, I think, in the right direction, but it is hopeful. I think we should be more ambitious than just training black boxes, and not think of that as the last step in our work. Rather: we have a black box that does something smart; now stage two is, “How do we get the knowledge out and put it in a safer system that we understand?” It’s kind of like being a scientist: once you figure out how to catch a baseball, or once you understand something, you go write a paper about it and explain it to your friends. You don’t just stop.
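To see the gap between ordinary linear regression and general formula discovery that Max is pointing at, here is a toy sketch (an illustration only, not the code his group used; their published system is known as “AI Feynman”). It brute-force searches a small family of candidate power-law formulas and recovers Newton’s law of gravity from a simulated data table.

```python
import itertools
import numpy as np

# Toy symbolic regression: given a table of (m1, m2, r, F) values generated
# from F = G*m1*m2/r^2, search a small space of candidate formulas of the
# form c * m1^p1 * m2^p2 * r^p3 for the one that fits the data best.

G = 6.674e-11
rng = np.random.default_rng(1)
m1, m2, r = (rng.uniform(1, 10, 500) for _ in range(3))
F = G * m1 * m2 / r**2                     # the "hidden" formula behind the table

best = None
for p1, p2, p3 in itertools.product([-2, -1, 0, 1, 2], repeat=3):
    feature = m1**p1 * m2**p2 * r**p3
    # Fitting the single constant c is the easy, linear part (least squares).
    c = float(np.dot(feature, F) / np.dot(feature, feature))
    err = float(np.mean((c * feature - F) ** 2))
    if best is None or err < best[0]:
        best = (err, p1, p2, p3, c)

err, p1, p2, p3, c = best
print(f"best fit: F ~ {c:.3e} * m1^{p1} * m2^{p2} * r^{p3}  (mse={err:.2e})")
# Expected result: exponents (1, 1, -2) and a constant close to G.
```

Real symbolic regression systems search a vastly richer space of expressions, but the flavor is the same: the output is a human-readable formula rather than a black box.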
Rob Wiblin: Do you have a sense of what state-of-the-art AI isn’t doing that humans do? It seems like at least the language systems can be cross-modal, so they’re able to connect words with images, and I imagine with sounds as well, so they can make all these linkages that humans do. They’re seemingly able to reason through and solve problems and give answers to stuff. And not the language models, but the ones that play computer games, especially the complex ones, are able to engage in long-term strategic planning, or at least they’re able to learn what strategies work. It seems like humans do stuff that these systems don’t, but I’m not sure that I can quite put my finger on what it is that humans do that machine learning doesn’t.
Max Tegmark: In terms of tasks, it’s pretty easy. Good luck getting Google PaLM to do your next podcast for you.
Rob Wiblin: Yeah. How long until you just train it on tons of podcast transcripts, and then it would ask pretty sensible questions?
Max Tegmark: [robot voice] That is a very interesting point that you are making, Rob.
Rob Wiblin: [laughs] So your point is, it can’t do that now. But doesn’t it seem like the language models, if we just make them better, will be able to have conversations that are kind of insightful?
Max Tegmark: Maybe. I’m hoping there will be some new obstacles that’ll be a lot harder than we thought, so we have more time to make stuff safe. As I said, right now, these models are still fairly dumb. There’s a bias where when you read their papers, you think the models are better than they are — because they cherry-pick the examples, obviously, that they put in there.
Rob Wiblin: Yeah. Do you know how big that effect is?
Max Tegmark: It’s quite big. I was playing with one of these language models yesterday actually, and I asked it, “What’s 11 times 13?” and it couldn’t do it, for example.
Rob Wiblin: Ah, OK. Interesting.
Max Tegmark: But to be a little more nerdy about it, I think one thing where human intelligence is still far ahead of any artificial intelligence is our ability to combine, on the one hand, that black box neural network, intuitive learning stuff (that mice and cats and seagulls are also great at) with the symbolic reasoning that old-fashioned AI was good at (like Deep Blue, which beat Kasparov in chess, or mathematical theorem provers, or whatever). The example of Galileo really gets to the point: you use your intuition, the neural network just kind of gets something and it clicks, “Ah, I see the pattern!” And then, step two, you translate it into symbolic reasoning. And I think it’s the ability to combine this fuzzy, black-boxy neural network technology with symbolic technology that has made us humans able to rule the Earth.
Rob Wiblin: Yeah.
Max Tegmark: And part of what we do also much better than artificial intelligence is that we have these representations of knowledge in our brain which are much more sophisticated than what a lot of these models typically have. They might overcompensate by having an enormous number of parameters or whatever, but it’s pretty obvious if you just decide to mess a little bit with these language models: you can sometimes get them to say really dumb things, which make it obvious that they didn’t really understand things even in the way a five-year-old does. Whereas for an actual human five-year-old, there is a world out there, and they have some representation of it. So fortunately there’s still quite a ways to go.
Max Tegmark: We’ve geeked out now quite a bit about one of the two technical challenges that I think are most crucial in AI safety: we have to get away from just settling for black boxes. We have to aim higher. Black boxes are great, but that should be step one. Step two is to get the knowledge out of the black box and build something safe: a safe AI system which performs just as well, but that you can trust.
How to give machines goals [01:13:13]
Max Tegmark: The other one we should talk about also is equally important, which is the goals that you give to your system. So Stuart Russell has gone on a long crusade on this and has a great book and a TED Talk that I would recommend anyone to check out if they’re unfamiliar with this. But already Irving J. Good and early thinkers, and also Nick Bostrom and Steve Omohundro and others, have talked about just how catastrophic it can be if you have a system which is both super smart and just has a single goal — because paradoxically, almost whatever goal you give it, something disastrous happens. Which is not obvious at first. You might have your super-smart future self-driving car, and you just tell it to take you to the airport as fast as possible. And then you get there covered in vomit and chased by helicopters. And you’re like, “No no no, that’s not what the goal was that I gave you.” And the car says, “That’s exactly what you asked for.”
Max Tegmark: It’s very hard to specify precisely the goal that you want. If you have any really open-ended goal that you give a sufficiently powerful system — maybe to calculate as many digits of pi as possible, or Nick talks about this silly example of making as many paperclips as possible — then you might think that’s completely harmless. Because the machine doesn’t even care about dying, so it shouldn’t care if you turn it off at all, right?
Rob Wiblin: Yeah.
Max Tegmark: But as soon as you think about it a little bit more, it doesn’t want to be turned off, because then it can’t make any more computations or paperclips. So it’s going to defend itself. And you might think it’s not going to be greedy because you didn’t give it a goal to accumulate resources. But you need resources to accomplish the goal, and it’s going to try to get as many as possible. And then eventually if it says, “Hey Rob, nice atoms. I need them” — and it takes your atoms — you have a problem.
Rob Wiblin: Yeah.
Max Tegmark: The long and short of this is that yeah, you can still take the approach of thinking really hard about what goals you should give your machines. But I think that’s the wrong way to think about it. If you try that, first you have to make the machine understand your goals, and then you have to make it adopt your goals, and then you have to make it retain your goals.
Max Tegmark: Anyone listening to this who has kids knows that when they’re tiny, they don’t understand your goals. And then when they’re teenagers, they do understand, but they don’t want to adopt your goals. But you fortunately have some magic years in between where they’ll still listen to you at least some of the time, and you have your chance to sort of make them adopt your goals. But it’s hard. A machine that’s undergoing some sort of a self-improvement might blow through that malleable phase way too quick for you to be able to do anything with it.
Max Tegmark: And we also know that we don’t keep our goals throughout our lifetime. So if you make this machine’s goal to create this awesome paradise for humans, it might eventually find that as boring as my kids find their Legos now, which are just gathering dust in the basement.
Rob Wiblin: At least if you’ve given it a tendency towards wanting novelty or creativity or stimulation or something like that.
Max Tegmark: Yeah. So I think that ultimately that whole approach of trying to give machines one perfectly defined goal and then letting it rip is very wrong-headed. What Stuart Russell has been pushing is changing the whole foundation of how we do machine learning, so that you don’t give it a goal. One strategy is that you build your very intelligent machine and you say, “OK, go execute the goal.” And the machine is like, “What’s the goal?” And you’re like, “Well, I’m not telling you yet, but I’ll let you find out piece by piece.” That way the machine has an incentive to keep coming back and asking you questions, because it’s really afraid of doing something which…
Rob Wiblin: So it has to be…
Max Tegmark: Open problem.
Rob Wiblin: Yeah, yeah. It has to feel uncertain about what you want.
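Here is a tiny numerical sketch of why uncertainty about the goal creates an incentive to ask (invented numbers, purely to illustrate the expected-value intuition; this is not Stuart Russell’s formal framework).

```python
# Toy model of "the machine is uncertain what you want, so it prefers to ask".
# All numbers are made up for illustration.

# Two candidate goals the human might have, which the robot thinks are equally
# likely, and the payoff (to the human) of each action under each goal.
p_goal_a = 0.5
payoff = {
    ("action_1", "goal_a"): 10, ("action_1", "goal_b"): -100,
    ("action_2", "goal_a"): -100, ("action_2", "goal_b"): 10,
}
ask_cost = 1  # small cost of bothering the human with a question

def expected_value_act_now(action: str) -> float:
    return (p_goal_a * payoff[(action, "goal_a")]
            + (1 - p_goal_a) * payoff[(action, "goal_b")])

# If the robot just acts on its best guess, it risks the -100 outcome.
best_immediate = max(expected_value_act_now(a) for a in ("action_1", "action_2"))

# If it asks first, it learns the goal and then picks the +10 action, minus the cost.
value_of_asking = 10 - ask_cost

print(f"act on best guess: {best_immediate:+.1f}")   # -45.0
print(f"ask, then act:     {value_of_asking:+.1f}")  # +9.0
```

With the goal known, either action is fine to pick; with it unknown, acting on a guess risks the large negative payoff, so paying a small cost to ask first comes out ahead.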
Max Tegmark: Yeah. There’s all sorts of nerdy names for different parts of this, like “inverse reinforcement learning” and so on. And it’s a very active field of research. I think it’s great that a lot of people are going into it. If there’s someone listening to this who likes computer science and AI, rest assured there is so much to contribute in this space going forward. But not even that’s enough. Even if you can completely solve the problem of making machines have their goals aligned with their owner, you’re absolutely not out of the woods. So if we come back to where we started the whole podcast again: what we really would like to do is create alignment — not just between the incentives that a machine has and the wellbeing of its owner; we would like to align everything, so that the incentives we give to every person, to every corporation, and to every government are also aligned with the wellbeing of humanity as a whole.
Rob Wiblin: Creating a good world.
Max Tegmark: That’s right. “Multiscale alignment” is a nice nerdy phrase for this that I like a lot. I used to call it “meta-alignment,” but then my friend Andrew Critch convinced me that multiscale alignment is what it should be called. And if you think about this for a moment, this is something we can all relate to — no matter what our job is, and no matter what our education and background is.
Max Tegmark: For example, why is it that if you walk down the street and there’s a baby lying there, you’re going to instinctively go help it? It’s because those tribes that didn’t have that compassionate wiring got out-competed by the tribes that did. Why is it that if two people get into a bar fight — and if it’s not in Texas and they don’t have guns — then they’re probably not going to kill each other? Because we have a very strong inhibition towards doing too much harm to each other. It’s again because if those tribes didn’t have those genes, they kind of self-destructed and got out-competed.
Max Tegmark: So Darwinian evolution already did a lot of alignment: it aligned the very basic incentives that individuals felt with what was good for the community that they were part of. And then as communities got bigger, we also innovated on top of that. So we invented gossip, for example. If you go to the pub with your friends every Friday, and after 20 times you’ve never picked up the tab, then mysteriously you’re not going to get invited anymore, because the gossip got out that you’re stingy.
Rob Wiblin: Not a good person to have around.
Max Tegmark: Yeah. That’s another very powerful alignment mechanism, if you think about it. Where if you get a reputation as being a compulsive liar or someone who is very untrustworthy, you get socially punished in various ways. So you have an incentive to be honest and trustworthy and loyal. And why is that exactly? In world religions, it’s often been phrased as the reason that we should do this is because of some higher principle. But from an evolution point of view, you could also understand it as just that this is an alignment mechanism that favored the groups that had it so they could compete better against the other ones.
Max Tegmark: And then if we zoom out more, to the really big picture: when societies got still bigger, we decided to start to codify this stuff. So we invented the whole idea of a legal system. If, in that bar fight, you really did kill the other guy, you now have the incentive that you’re going to spend many years in a small room eating kind of boring food and thinking it over. We create these laws to align the incentives of individuals with the greater good. We also have laws to align the incentives of McDonald’s and other companies with the greater good, for the same reason.
Max Tegmark: So the idea of multiscale alignment is as old as civilization itself. And the point I want to make here is that if we want an awesome future with ever more powerful tech, we have to solve multiscale alignment. It’s not enough to just solve it in one level.
Regulatory capture [01:21:03]
Rob Wiblin: So what does it look like for that not to happen? So we create AIs and if they’re aligned with some individual, but not multiscale, then what happens?
Max Tegmark: For example, let’s look at some past failures. So in the 1950s, the first article came out in the New England Journal of Medicine saying smoking causes lung cancer. Twenty years later, that whole idea was still largely silenced and marginalized, and it took decades until there was much policy, and warning labels on cigarettes, and restrictions on marketing cigarettes to minors. Why was that? Because of a failure of alignment. Big Tobacco was so rich and so powerful that they successfully pulled off a regulatory capture, where they actually hacked the system that was supposed to align them, and bought it.
Max Tegmark: Big Oil did the same thing. They’ve of course known for a very long time that there was a little conflict between their personal profits and maybe what was best for society. So they did a regulatory capture, invested a lot of money in manufacturing doubt about whether what they were doing was actually bad. They hired really, really good lawyers. So even though in the social contract the idea had been that the governments would be so powerful that they could give the right incentives to the companies, that failed.
Rob Wiblin: I guess the companies became too close in power to the government, so they could no longer be properly constrained.
Max Tegmark: Exactly. And whenever the regulator becomes smaller or has less money or power than the one that they’re supposed to regulate, you have a potential problem like this. That’s exactly why we have to be careful with an AI that’s smarter than the humans that are supposed to regulate it. What I’m saying is it’s trivial to envision exactly the same failure mode happening now. If whatever company that first builds AGI realizes that they can take over the world and do whatever the CEO wants with the world — but that’s illegal in the country they’re in — well, they can just follow the playbook of Big Tobacco and Big Oil and take over the government.
Max Tegmark: I would actually go as far as saying that’s already started to happen. One of the most depressing papers I’ve read in many years was written by two brothers, Abdalla and Abdalla, where they made a comparison between Big Tobacco and Big Tech.
Max Tegmark: Even though the paper is full of statistics and charts that I’ll spare you, they open with this just spectacular hypothetical: suppose you go to this public health conference. Huge conference, thousands of top researchers there. And the person that is on the stage, giving this keynote about public health and smoking and lung cancer and so on, you realize that that person is actually funded by a tobacco company. But nobody told you about that: it doesn’t say so in the bio, and they didn’t mention it when they introduced the speaker. Then you go out into the expo area and you see all these nice booths there by Philip Morris and Marlboro, and you realize that they are the main sponsors of the whole conference. That would be anathema at a public health conference. You would never tolerate that.
Max Tegmark: Now you go to NeurIPS — tomorrow is the deadline for my group to submit two papers; this is the biggest AI conference of the year — and you have all these people talking in some session about AI in society or AI ethics. And they forget to mention that they got all these grants from Big Tech. And then you go out to the expo area and there’s the Facebook booth and there’s the Google booth and so on and so forth. And for some reason, this kind of capture of academia that would be considered completely unacceptable at a public health conference, or for that matter a climate change conference, is considered completely OK in the AI community.
Rob Wiblin: That is a really interesting example. Yeah.
Max Tegmark: That just shows how they were already doing this playbook: they’re funding the academics who might otherwise criticize them. And we’ve seen it firsthand. So after the first Puerto Rico conference, we came out with this open letter signed by a who’s who of AI researchers, saying that AI safety is important and yada yada. And then two years later, we got the Asilomar AI Principles, signed by thousands of influential AI researchers, saying, “Here are the ethical principles we should think about.” And we were like, “Yay, this is so great. Everything is going in the right direction.”
Max Tegmark: Since then, things have, I think, largely gone backward. And in hindsight, I think we just woke up the sleeping bear: that the industry started to gradually push back against this and realize maybe they didn’t want to be regulated. And now there are more lobbyists in Washington and Brussels from Big Tech than there are from oil companies even. And we’re seeing more and more people also saying, “Yeah, Max, I agree with you when you talk about how we have to have self-driving cars that are safe. But you should really stop talking about this superintelligence stuff and existential risk, because that just freaks people out.” You can see how they’re trying to very much refocus people who want to do AI safety into anything that’s not bad for their bottom line.
Rob Wiblin: You don’t need it to be as extreme a case as tobacco companies or smoking companies supporting a health conference. Just having any particular interest group, even if it’s not obvious that their interests are in direct conflict with the rest of society in some particular way, being completely dominant within a conversation because they’re the ones who create the forum and decide who gets a platform and who doesn’t. Even if we don’t know how it’s problematic right now, it could be problematic in future, because just like one particular company or one particular person doesn’t necessarily have exactly the same values or personal interests as society as a whole.
Rob Wiblin: I guess I imagine that the folks at Google and all these different companies, they presumably are interested in having these conversations. They live in this world too, and they want to make things go well. But nonetheless, they come in with a particular perspective of people who are actually developing this and want to deploy it for particular goals that they have. And that means that they’re not completely in alignment with the rest of society, even if they have the best of intentions. So it’s at least setting up the potential for the conversation to not be nearly as productive as it could be.
Max Tegmark: Exactly. And I want to be very clear here. You know, I have a lot of friends who work for these Big Tech companies who are great people. I’m not faulting any individuals here. They’re idealistic. They want to do good. But the fact of the matter is, if you talk to people who worked in tobacco companies, or if you talk to some executive right now who works for ExxonMobil or whatever, they’re good people. It’s not that there’s some person who’s like, “Haha, I’m so evil. Now let me ruin the climate.” In some way, a large corporation already is better thought of as…
Rob Wiblin: An agent.
Max Tegmark: An agent. It’s not yet superintelligent, but it’s definitely more intelligent than any individual human being. And you know British Petroleum: I happen to know its former CEO. He’s a very nice Swedish guy, I really like him. But the organization itself is much more powerful than any person. If the CEO of a large tobacco company says, “Yeah, I really don’t like this lung cancer thing. Let’s stop selling cigarettes,” the next thing that’s going to happen is he’s going to get fired and replaced by another CEO, right? And if someone thinks, “Ah, it’s impossible for some other nonhuman entity to ever be smarter than us and cause problems,” just think about what it’s like to go head to head with a large corporation in some legal battle or whatever. Good luck with that, I say. It’s very, very powerful.
Max Tegmark: And the thing that happens is even the people who work for the corporation themselves get quite effectively brainwashed by what the corporation wants them to believe. They start to believe the press releases that come out from the PR department of their own company. The CEO often isn’t the guy who is looking into the lung cancer thing. He’s told by others that this is safe. And there’s also a selection effect that’s super powerful. You’ll always have people with a spectrum of opinion about anything, and you’re not going to get the job to be the CEO of a tobacco company if you think that smoking is the most terrible thing and has to be stopped.
Max Tegmark: So by definition, the people who are there are the people who already hold sincerely the beliefs that the corporation needs them to hold. There’s a hilarious interview with Noam Chomsky on the BBC where they’re arguing about whether BBC is biased or not. And the journalist gets kind of pissed with Noam Chomsky and says, “Do you really believe that I’m not sincere in what I’m saying?” And Chomsky says, “Of course you’re sincere in your beliefs, but if you didn’t have them, you wouldn’t have this job.”
Rob Wiblin: Yeah. It’s a really good observation. I’ve heard this line that we shouldn’t worry so much about superintelligent machines because we already have superintelligent beings in the corporations. And I’ve always thought that’s very insightful, but it seems like it kind of argues the other way. Because big corporations are only somewhat more intelligent than individuals, and they’re pretty aligned with human interests because they’re made up of people, and they’ve come about gradually over time — such that we had centuries to learn how to keep them aligned with our interests, and we’ve had these other very powerful institutions called governments designed to control them. And yet we still struggle. So if you develop something that’s much more superintelligent, super-capable than a corporation, and it arrives within a few years and races ahead, and it’s not even comprised of agents that we understand like human beings, then it seems like we could be in really hot water.
Max Tegmark: All great points you make. And corporations are certainly not anywhere near the definition of superintelligent. They’re more intelligent than any one human as an agent, and that’s why generally, if a corporation wants one thing and you want the other thing, the corporation is going to get its way. We have the advantage with them though that they evolve on the human timescale: the corporation doesn’t get twice as powerful every week or anything like that.
Max Tegmark: So we can learn a great deal for how to deal with artificial superintelligence by trying to first figure out how we can solve multiscale alignment with corporations. And this is actually very much linked to why I’m so interested in the whole media news business. Because I think, again, it’s not sufficient to just figure out how to align machines to their owners.
Max Tegmark: And tobacco companies, they kind of shifted gears. Eventually they acknowledged that it causes lung cancer, and they changed their talking points a bit so that they could still keep selling cigarettes and make enough money. We’ve also seen a big shift in strategy from Big Tech now. So now what Big Tech generally says is, “Yes, please work hard on AI safety to figure out how to make our machines aligned with the goals of our companies. Work on that. It’s great.” And the stuff that you and I talked about earlier there, with getting rid of the black boxes and so on, is all stuff that Big Tech actually loves: they don’t want to have a machine that doesn’t obey the CEO.
Rob Wiblin: I see.
Max Tegmark: But what they don’t want is what, for example, the European Union is trying to do right now, which is pass the EU AI Act, which would actually put some new incentives in place to try to better align the behavior of the corporations with the interests of the citizens of the European Union. There’s been some very serious lobbying by tech firms against it. And this is actually very exciting, because it’s being discussed and decided now: this week, next week, and then in the months ahead.
Max Tegmark: So you might think that the main focus of this new first-ever really powerful AI law would be on the most powerful systems, because that’s the future, right? But lobbyists are putting in this fantastic loophole, saying that there’s a little exemption here: general purpose AI systems are basically exempt.
Rob Wiblin: On what possible basis?
Max Tegmark: Because it’s very convenient. Basically what they’re saying is DALL·E, GPT-3, any of these things, they tend to be built by companies from the US or China and then can be used by smaller companies in Europe to do various tasks with them. And what the exemption says is that all the responsibility falls only on the European companies that use them for something. Even though that’s of course completely unreasonable, because they have no way of knowing what’s inside the black box exactly.
Rob Wiblin: I see.
Max Tegmark: So if this loophole stays, then this EU AI Act is just going to be the laughingstock of the AI community. There’s been a lot of pushback recently. In fact, I was invited to give a talk about this at the European Parliament quite recently. As you can probably guess from what I’m saying now, I was encouraging them to close the loophole. France just came out yesterday actually and said that they support closing the loophole. They want to make sure that the brain of the AI is not exempt.
Max Tegmark: But this illustrates beautifully what we were talking about here, where every player in a game is always looking out for themselves. You can’t blame them for that. So a company will support AI safety work that makes its computers do what it wants, and it’ll generally be opposed to anything that’s trying to change the incentives on the company or have more oversight of the company. And since even some of the major funders in AI safety come from Big Tech and made their money in Big Tech, it’s not surprising if they are more into funding the kind of research that they want to see happen — namely, aligning machines to their company — rather than policy research or advocacy to try to actually align the corporations with the greater good. And if we fail to do that, I just want to say, then obviously what’s going to happen is that eventually Big Tech — many companies or just one company — will become the government.
Rob Wiblin: Yeah. I’ve been happy to see over the last five or 10 years this explosion in AI alignment work that is just trying to make sure that these programs do what the companies want them to do and what the companies want to be able to sell them as being able to do. Because we at least need to be able to do that — that’s a prerequisite, even if it’s not sufficient. Even there, my impression as an outsider is that the capabilities of AI systems are substantially outstripping, as far as I can tell, the progress on all of these even narrow alignment issues, like interpretability or inverse reinforcement learning. Do you think I’m probably right about that?
Max Tegmark: I think you’re right about that, yeah.
Rob Wiblin: In that case, are we sleepwalking into disaster, basically? Just as a species?
Max Tegmark: Yeah, unless we change course. But that’s of course exactly why so many of us have been working so hard to try to accelerate this AI safety research. It’s not feasible to slow down the progress in capability, but it’s very feasible to accelerate the progress in safety. So I think this is a race, the wisdom race. We just want to make sure we win the race between the growing wisdom with which we manage our technology, and the power of the tech itself.
Max Tegmark: And that’s why we launched, with Elon Musk’s help, the first-ever AI safety research program. I’m super excited to see so much more funding in this space now from many, many actors. That’s also why we just launched these Vitalik Buterin talent pipeline fellowships, where we try to incentivize more talented young people to go in and do PhDs and postdocs in AI existential safety, et cetera.
Max Tegmark: I think this field needs much more talent. Basically every university worth its name has someone doing cancer research. Every computer science department should also have some people working on AI safety, just duh. And the good news there is that the field has become very respectable and a lot more people are going into it. Together, hopefully we can continue to grow the field rapidly. And another good thing is this is aligned with the incentives of the tech companies, the narrow technical alignment, so they also support it.
Rob Wiblin: On that theme, what do you think is the chance that you and I have misunderstood the iterative process by which these really advanced AI agents are going to come into the world? Maybe ordinary efforts to make AI programs do the stuff that their creators — the corporations that are trying to sell them — directly want, as they pass through each stage of capabilities, will in fact be enough to keep them consistently aligned. It seems like this is the view of some ML researchers: that because that narrow alignment is so intertwined with capabilities, in actual fact they will just improve in tandem, in a way that means we shouldn’t worry about it too much.
Max Tegmark: Totally possible. I would give it less than 50% chance, but even if it is 50% likely that we’ll get lucky like this, that also means there’s a 50% chance that we’re unlucky and are walking towards a cliff. In which case, we should really focus on it. I think again, this is one of those things where none of us has a crystal ball. If I’ve learned anything as a scientist, it’s to be humble about how little I actually know, and it’s exactly the humility that means we need to do AI safety research — in case there is a problem.
Rob Wiblin: Yeah. I suppose this would seem a lot more plausible if AI capabilities were going to improve very gradually, and each year we can do a little bit better, a little bit more stuff, and we have lots of time to learn and observe the behavior and see the failings. I suppose it’s a little bit hard to see if at any point you have a sudden jump or a sudden explosion — because for a few months, even a few weeks, some recursive self-improvement process kicks in — it seems like this marriage of capabilities and alignment, growing appropriately in tandem with one another, could really easily break down.
Max Tegmark: Yeah, for sure. And even if we succeed completely with this and always keep, for the entire future of humanity, the machines aligned with their owners, it’s important to remember that that’s still only half the problem.
How humanity fails to fulfil its potential [01:39:45]
Rob Wiblin: Yeah. So let’s move on from the technical stuff. That situation doesn’t sound amazing, but the situation might be even more troubling on the policy and deployment side. As we talked about, eight years ago and five years ago, you organized these two really big conferences that brought together a lot of people who might have had influence to shape this.
Max Tegmark: There were three of them: 2015, 2017, and 2019. So seven years ago was the first one.
Rob Wiblin: Quite a few of them. Can you tell us maybe more about how that side of things has been evolving over time? From your book in 2017, it sounded like you were incredibly positive and optimistic and happy with how things had gone and how many people were bought into the need to do all this policy and deployment thinking ahead of time. And it sounds like now, maybe you think reality has kicked in, and maybe this work isn’t going to be done naturally.
Max Tegmark: Yeah. It’s pretty funny, actually. I think my optimism peaked exactly at the time when I was writing the last chapter of Life 3.0.
Rob Wiblin: Right, yeah. You have this fantastic story where it just seems like, “We’re just making so much progress. We spent a few years on it, everything’s coming together.”
Max Tegmark: Yes. It felt like the AI community was actually acknowledging that this was a real problem. And you had all these talented, idealistic people now working on it. What could possibly go wrong? And then of course, what gradually happened was to a large extent corporate pushback: where some powerful entities started feeling threatened by the idea of maybe getting regulated and so on. And just like with Big Oil and Big Tobacco, there was a subtle but palpable counterpush, trying to refocus the AI safety efforts on anything that didn’t threaten the corporate money: “Let’s get everybody talking about racial bias in the training set. Let’s get everybody talking about this and that ethics. Just don’t talk about human extinction or slaughterbots or any of that stuff that we want to sell.”
Rob Wiblin: Or what about mass unemployment? Is that kosher?
Max Tegmark: Well, there, I think they’ve reached a certain equilibrium, where there are always enough cheerleaders who keep repeating the talking points: that there’ll always be new jobs, that people have warned about this ever since the Luddites. So even though you also have a lot of serious economists saying it’s different this time, they don’t feel particularly threatened. This feeds into the greater debate about whether to change the tax system or have universal basic income, where it’s not really the tech companies versus the rest. It’s more the richest of the rich versus the rest.
Max Tegmark: But if we come back and focus on the tech-specific stuff here, suppose the nerdy community comes up with all these great solutions, so that for the entire future, machines will always obey their owners no matter how powerful they are. Does that mean we’re all set?
Rob Wiblin: No.
Max Tegmark: No. It’s basically as if we go into a kindergarten full of perfectly nice and friendly kids and say, “Hey, here’s a box of hand grenades and chainsaws and circular saws. Have fun playing with these.” What could possibly go wrong? They clearly don’t have the wisdom to manage technology this powerful. And this can happen at a larger scale of society as well. I often feel that, name whatever politician you want, if you give them a box of 13,000 hydrogen bombs, I’m not completely filled with confidence that they have all the wisdom needed to manage this wisely. And now give them artificial general intelligence, and I feel even less confidence.
Max Tegmark: We’ve seen previous examples where even an entire human population managed to drive itself extinct, like on Easter Island. Someone has a great idea: “Let’s chop down all the trees,” and then, “Yeah, yeah, yeah.” Then by the time they realize their mistake, they just can’t recover. So I would very much like to continue thinking about this multiscale alignment business, so that we humans don’t end up hoist by our own petard.
Rob Wiblin: Yeah. It’s interesting to think about the alignment thing in the context of nuclear weapons. I suppose an analogy for the technical alignment question is maybe like making sure that the nuclear bombs don’t explode on their own, without being asked to. But we have solved that problem. We have made sure that nuclear bombs probably aren’t going to explode unless we get them to. However, that does not make the situation completely safe. And of course, I think you’ll never get an AI that’s doing all kinds of new things and being asked new questions and to solve new problems all the time to be anywhere near as stable as a nuclear bomb, just sitting there. It’s a far more general technology.
Max Tegmark: Yeah, and that’s a great example actually with nuclear, because that’s quite right. We haven’t had a hydrogen bomb that just randomly blew up. We solved the technical part there.
Max Tegmark: Yet, one of the things we do with the Future of Life Institute is we have this annual celebration of an unsung hero who did something great. And one year we celebrated Vasily Arkhipov, this guy who single-handedly averted a Soviet nuclear strike on the US. And I’m very grateful that he was there and happened to do it. It was just a random coincidence that he even happened to be on that submarine. If he hadn’t been there, they would’ve launched, because the captain and the political officer had already decided to launch. But he happened to be on that sub out of the four in the fleet, and because he happened to outrank the captain, he could stop it.
Max Tegmark: Another year we gave a Future of Life award to Stanislav Petrov, for again, helping avert another accidental Soviet nuclear attack on the US. And if he had been replaced by an AI, actually we might have had World War III that year. So this already goes to show that it’s not enough to make the technology obey the humans. You also have to give the right incentives to the humans. And when we build really complicated human systems, sometimes they fail.
Rob Wiblin: Something that’s a little bit puzzling here is that it seems like inasmuch as you believe that AI’s going to advance a lot, it’s going to have massive influence in society. It could end up being the most important invention ever. It seems like it’s in the interests of all of the individual staff at Google that things go really well, that we create a flourishing future. It’s in the interests of all the shareholders of the company. It’s even, in a sense, in the interest probably of the company, in terms of maximizing its profits, that the deployment of AI produces a generally flourishing humanity and economy and so on forever.
Max Tegmark: Just like it was in the interests of everybody on Easter Island that they didn’t go extinct.
Rob Wiblin: Yeah. And so to some extent, it seems that it requires not only some potential conflict of interest between Google and society as a whole, but also a certain narrowing of focus, where the specific people making decisions within a corporation don’t think about the centuries-long vision you might have of a truly flourishing company, and what kind of world it wants to be involved in creating.
Max Tegmark: Absolutely. Yeah. I’ve been thinking a lot about this question of why we keep falling so short of our potential. I have a three-quarters-written paper about exactly this. It’s called, “Why does humanity fall so far short of its potential?” We’ve alluded to two of the key reasons there. I want to rise above the silliest finger-pointing saying it’s because of politician X or whatever, and look at the actual dynamics that are happening instead. How can you end up in a situation where something takes place that nobody really wanted?
Max Tegmark: One mechanism has been very extensively studied in game theory — from the prisoner’s dilemma to the tragedy of the commons — where we can set up these situations where everybody is just following their own incentive, and you end up in a much worse outcome than you could have ended up with if people cooperated, right? For someone listening to this, if they just want to read one thing, I would recommend Meditations on Moloch on Slate Star Codex by Scott Alexander; it’s full of great examples of this.
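To make this kind of multipolar trap concrete, here is a minimal worked example (purely illustrative, with made-up payoffs rather than anything from the episode): a Python sketch of a prisoner’s dilemma in which mutual defection is the only Nash equilibrium, even though both players would prefer the cooperative outcome.

```python
# A toy prisoner's dilemma (hypothetical payoffs): each player following
# their own incentive lands both in an outcome worse than cooperation.

ACTIONS = ["cooperate", "defect"]

# Payoffs for (row player, column player).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_action, player):
    """The action that maximizes this player's payoff, holding the
    other player's action fixed."""
    def payoff(mine):
        profile = (mine, their_action) if player == 0 else (their_action, mine)
        return PAYOFFS[profile][player]
    return max(ACTIONS, key=payoff)

def pure_nash_equilibria():
    """All action profiles where neither player gains by unilaterally deviating."""
    return [(a, b) for a in ACTIONS for b in ACTIONS
            if best_response(b, 0) == a and best_response(a, 1) == b]

print(pure_nash_equilibria())               # [('defect', 'defect')]
print(PAYOFFS[("defect", "defect")])        # (1, 1) -- the equilibrium...
print(PAYOFFS[("cooperate", "cooperate")])  # (3, 3) -- ...is worse for both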
Max Tegmark: These sorts of things happen a lot in our society. And the nuclear arms race is a perfect example of this, where you have the US and the Soviet Union, now Russia, all following their incentives. And there was a 75% chance that we would’ve had World War III during that Arkhipov incident, because there were four submarines; we lucked out that he was on that one. If that had happened, it would’ve been that kind of failure, a so-called “multipolar trap” as game theorists would call it. Or a shitty Nash equilibrium if you’re an economics buff. The basic antidote to those problems is always better coordination. Which is hard.
Max Tegmark: That’s why we still haven’t solved climate change, for example, or for that matter, the nuclear arms race. But it’s even worse, because that’s not the only issue. We’ve already alluded to it a little bit. Suppose someone had solved this multiscale alignment and set up things so that if you just did what was in your interest for your incentives, things would work out great for everyone. You might not be able to figure out actually what’s best for you, because the world is complicated, right? If you had some sort of fictitious, infinitely powerful, super hypercomputer, you could think through every possible action and what impact that would have. But we don’t have that. We are what is known as, in machine learning nerd speak, “an agent of bounded rationality.”
Max Tegmark: We have a finite number of neurons and a finite amount of time to think, and we have to divide that between doing many different tasks. And because of that, we can be hacked and we can often fail. When we lived in our natural habitat 50,000 years ago, when we were living in caves, the environment that we had evolved to function in optimally was the same environment we lived in generation after generation. So we developed a bunch of heuristic hacks, like when there’s not enough nutrients, you have this thing called hunger and you eat stuff. Or when you’re thirsty, you drink stuff if it doesn’t taste weird. And if you see someone who you find very attractive and maybe you can make some copies of your genes, then maybe you should fall in love with this person, so you take care of them.
Max Tegmark: In other words, evolution implemented a bunch of shortcuts, which weren’t actually the full computation, but generally really useful rules of thumb. And they usually worked because we applied them in the same situation they had been optimized for. Then we started inventing so much technology that we radically transformed the environment we lived in and our old rules of thumb went from being good to sometimes being counterproductive. Like now this idea that you should always eat anything you find that tastes sweet or fatty caused an obesity epidemic. So you can already see how this fact that we have bounded rationality has started to cause some problems. Also, the high levels of aggressiveness that might have been optimal when we lived in small groups and didn’t meet strangers very often is maybe less helpful now when you have people packed together with nuclear weapons and stuff.
Are we being hacked? [01:51:01]
Max Tegmark: But then something else is happening now, which is making it even worse. Because I would say all of those failures I just listed off, like the obesity epidemic, come from bounded rationality. They kind of happen coincidentally. It just turned out that this instinct we had to always eat sweet stuff went from being beneficial to not being beneficial. It wasn’t like anyone had planned it to be that way.
Max Tegmark: But now we have ever more sophisticated marketing research, powered by machine learning, which has a direct profit incentive to look at you, make a very detailed model of all the rules of thumb that you use that are a little bit off from what’s actually in your self-interest, and then exploit them. Micro-target them. So if you are more likely to buy something that you don’t need just because it’s accompanied by a photo of a sexy person, they will try to exploit that to make you waste your money.
Max Tegmark: And if they realize that you have a rule of thumb to vote for people who say certain things and act in certain ways or whatever, then they will study that in great detail. And maybe there were just some pretty seemingly minor little departures between your rule of thumb there and what’s actually in your self-interest. But they’ll find them and they’ll use all of them, and then they’ll make you do something where you maybe vote for someone who’s actually going to make you poorer, even though you thought they were going to make you richer.
Max Tegmark: So this is something which I think is greatly exacerbated now by machine learning: that we humans are being hacked. Not randomly, like by sweet food previously, but very deliberately — where we’ve been studied to see all the ways in which our rules of thumb are discrepant from our self-interest, and then manipulated with that.
Rob Wiblin: Yeah. People talk about this a lot. It doesn’t super resonate with me on a personal level. I feel like I eat pretty healthy despite the fact that people are kind of trying to sell me unhealthy food. I manage to exercise, even though presumably… Oh, I guess actually companies do make money out of that. So maybe this is a slightly offsetting effect that companies also potentially can make a lot of money by selling you products that are in your self-interest, and indeed that might be even slightly easier. But I can’t think of many cases where I feel like I’m getting exploited to my great detriment by corporations or machine learning algorithms.
Max Tegmark: Can I push back on this a little bit though?
Rob Wiblin: Go for it. Yeah. Tell me what I’m doing wrong.
Max Tegmark: Just like with the Noam Chomsky journalism incident, if you were very easily hacked like this, you probably wouldn’t have been so successful as to be running this podcast right now. And most people in America don’t have your BMI. More than half of all Europeans are now considered overweight or obese. Most people in Europe have not been as successful in their careers as you, right? So it’s a little bit dangerous to just look at yourself and take that as being some sort of representative of how powerful the hacking system is.
Rob Wiblin: Yeah. I worry, I guess, about dangers on both sides. I think what you’re saying is completely reasonable, that presumably some people are more susceptible than others and so possibly I could be someone who’s less manipulated by this. On the other hand, I suppose you always worry about the alternative, where it’s very easy to think, “Oh, other people, they’re idiots, and I’m much smarter than them. And so it’s not affecting me; it only affects all of those chumps.” I suppose a middle ground might be that I am getting hacked to some degree, and I’m just not aware of it. It works better when I don’t know. So of the various cases, I guess probably the most natural one will be Twitter addiction, where it’s arguably not in my interest to use Twitter as much as I do.
Max Tegmark: And if I can insert some more humility, I think we are also hacked in ways that we haven’t even realized so much yet, which are actually quite problematic. These hacks usually work precisely as long as we don’t realize them. So why is it that there are still over 12,000 hydrogen bombs around that have almost gone off and caused nuclear winter many times over? I would say it’s because we’ve been hacked in a different way that’s so subtle. Notice that we almost never read in the news about what would actually happen to London if it got nuked, or the latest research on nuclear winter. We don’t hear much about it.
Max Tegmark: We hear a lot about how important it is to put a no-fly zone over Ukraine right now, and why the United States has to spend $1.5 trillion replacing its nuclear submarines and other weapons with new ones. But for some reason this is just absent from our Facebook feeds and the front pages of the New York Times. And that’s also, I would say, a kind of hacking. It’s not done this time by the fast food industry. It’s done by the industry that sells these weapons systems, which is one of the most successful industries on the planet, right?
Max Tegmark: I don’t want to pick on you personally, but why is it that there is more outrage right now in the UK about Russia and its invasion of Ukraine than there was about Britain’s invasion of Iraq over the weapons of mass destruction? There are serious historians who argue that maybe a million people in Iraq were killed as a result of this. And is it really because you can make an ironclad case that the British and American invasion of Iraq was so much more morally justified than what Russia did? Or is it because you’ve been hacked?
Rob Wiblin: I guess I would usually put that down to something more boring. Like people tend to be very good at persuading themselves that the things that are in their personal interests, or the things that they perceive as in their personal interests, are just, and the things that other people are doing are selfish and unjust, rather than about being hacked specifically. But maybe you have a very broad conception of hacking that maybe encompasses that.
Max Tegmark: Well, if you’re persuaded to spend a lot of your money on buying lottery tickets, which some people will refer to as “the stupid tax,” in what sense is that more hacked than being persuaded to spend a lot of your tax money on invading Iraq?
Rob Wiblin: So maybe a difference is that at least some of the people who were advocating for the Iraq War were not doing it out of perceived personal interest. I suspect that many of them were perhaps mistakenly thinking that it was just and justified what they were doing. I guess inasmuch as it is someone at an arms company who thinks that it’s a bad idea for the world, but actually does just spend a whole lot of money to persuade people to go ahead and do it because they’ll make money out of it, then I agree that that does sound quite analogous.
Max Tegmark: Wait, are you hypothesizing that the marketing department in an arms company would somehow be much more ethical than the marketing department in a cigarette company?
Rob Wiblin: I don’t know how counterfactually causally responsible arms companies are for causing the Iraq War. It seems like there could be other reasons why that happened, that aren’t primarily related to profit seeking. But I’m just not sure.
Max Tegmark: So in this particular case, there’s a lot of very interesting research that’s been done subsequently by historians and others. And even more recently, if you look at the biggest think tanks that write these foreign policy white papers in the US, they get like a billion dollars per year from arms companies. And why are the arms companies spending a billion dollars a year on that, if they’re not going to get anything back?
Rob Wiblin: I think I buy the broader point in general, that we should probably expect overuse of arms and over-militarization relative to what is in a country’s best interests, because there’s more of a concentrated interest that is able to advocate in favor of that. And I guess maybe I could think of some biases in the other way, but I think that is the tendency that we see. So I agree that is analogous, or at least partially analogous, to the lottery ticket case.
Max Tegmark: Yeah. So good, then we agree on the sign of the effect: that you would expect that we should smoke more as a society than is optimal, and that we should take more risk of nuclear war than is optimal. In both cases, because the products involved are sold by companies that have a marketing department and want to improve shareholder value. The incentives are there and they’re going to follow them. So when we say that we’re not hacked, I think it’s important to take this broader view, especially when we think about existential risk and these really large things. Because, looking at you here on the camera that the listeners can’t see, I think you’ve been handling the personal risk of dying of a heart attack quite well. You look fit and healthy and that’s all good.
Rob Wiblin: Thanks, Max.
Max Tegmark: But I would say that if the risk of dying of a nuclear war is 1% per year or something like that — and maybe higher if you live in London — on that one, I think you and I and our friend groups have been just as hacked as everybody else has. Because it’s really very hard to justify why humanity should want to adopt this really reckless long-term strategy, where you’re just playing Russian roulette, year in and year out.
Max Tegmark: And I think to understand why it happens, we can’t get a full explanation just by looking at the incentives of those who profit from this, of course. We also have to look at the mechanisms by which they do the hacking — by making you always fear something which justifies buying these weapons, by always portraying your own country as the righteous one and not the other one. I find it just really interesting to compare the public goodwill enjoyed by Tony Blair versus Vladimir Putin. It’s not the same for the two of them, right?
Rob Wiblin: No.
Max Tegmark: Tony Blair is sort of this elder statesman.
Rob Wiblin: A very large number of people really hate Tony Blair, for what it’s worth, in the UK. There’s a definite contingent that despises him, that’s larger than I would’ve predicted 20 years ago. But no, I agree. Vladimir Putin is held in worse esteem.
Max Tegmark: And also, suppose Vladimir Putin had said that he was just invading Ukraine because he believed that they had weapons of mass destruction, and then he invaded the country — and successfully invaded it, actually — and then later said, “Oops, there were no weapons of mass destruction, but I really believed that at the time.” Would you feel great about him then, if he was sincerely convinced that those weapons were there?
Rob Wiblin: It’s complicated. You might think this person is an idiot and incompetent and has been grossly immoral. But I think you probably would have different expectations about what other countries they might invade and under what circumstances. Inasmuch as they have different underlying motivations, they lead to different forecasts about the kind of risk going forward.
Max Tegmark: Yeah, but just analyze this at the systems level. If you compare the amount of sanctions that were placed on the United Kingdom over its invasion of Iraq with the amount of sanctions placed on Russia now, would you say that they’re about equal, or how would you compare them?
Rob Wiblin: I would say that there’s substantial difference, yes.
Max Tegmark: Why?
Rob Wiblin: Well, let’s see. Which countries would have tried to place sanctions on the US? I guess it seems like it would’ve been very costly, because the US is a larger fraction of the global economy. And just many other countries did not super care enough about the Iraq issue in order to pay a massive cost in order to slightly harm the US in order to slightly change their foreign policy. You’re grinning. So I guess you would emphasize a different explanation.
Max Tegmark: I’m just having fun giving you a hard time here about when you said you were never hacked. But the interesting point I’m trying to emphasize here is just how important the role of information is in all of these things, because what we believe about the world has a really major impact on how we act as a species, right? The reason that we’re sanctioning Russia now much more than we sanctioned the United States and UK over the Iraq invasion isn’t because we felt that somehow the invasion of Iraq was incredibly justified for those weapons of mass destruction. It has more to do with the beliefs we had about who was the good guy, who was the bad guy, and various other things. And it shows therefore how valuable it is for whoever can control the general beliefs that people have.
Max Tegmark: And this is also nothing new I’m saying. You can go read Machiavelli from the 1400s, and he lays out how important it is to have good propaganda. What I’m really interested in for this conversation is just how machine learning changes this and how that connects in with existential risk. Because in the old days, if you wanted to take over the world, you basically had to go invade it. Do it with force, right? Today, if you can control the information flow in a country, that’s really all you need.
Max Tegmark: You can keep having elections in Russia and you’re going to win them if you control the state media in Russia. And if you can control the information flow in the West, then you can ultimately win elections there also. The interesting thing that’s happening now is that most people, especially people under 30, get almost all their news from social media, which they get through Big Tech companies. And that gives a really outsized power to these companies — not just to make money, but to actually control the reality that people think that they’re living in.
Rob Wiblin: Yeah. And to shape the direction and choices of a whole society.
Max Tegmark: Exactly. I think you and I both really believe in the democratic ideal that we should have the society going in a direction that’s good for all of us, not just for the owners of some companies (no shade on them). And for that to happen, for us to have and continue to have a healthy democracy, it’s really, really important that the power of this information-gathering system also gets put back in the hands of the majority of the people. Otherwise, I see us being on a fast track back into this really nice book written by your countryman, George.
Improving the news [02:05:31]
Rob Wiblin: Yeah. OK, I have a few more questions about AI, but maybe we’ll find time for them later. You really want to talk about the media thing, so let’s embrace the media stuff for a bit. It sounds like you see the AI alignment stuff and disinformation, people being able to form accurate views about the world, and being able to make decisions as groups, as deeply connected, maybe more than I do. This is a very mainstream concern about disinformation, filter bubbles, tribalism, and disagreement within society. Also the potential mass influence of tech companies over public opinion, should they decide to exercise it fully. Is there anything that you think people or mainstream discourse about this topic is getting wrong?
Max Tegmark: Yes. I think the mainstream discourse about this is already exactly where the tech companies and politicians want it to be. The fact that the first phrase you used to describe something problematic was “disinformation” — that’s what they want you to talk about, right? Disinformation, what does that mean exactly? Even if you go look up the definition in the dictionary, disinformation means information that’s not true. And it’s just usually spread by some nefarious actors, like some foreign country or whatever, and who decides what’s true? Of course, this is also one of the oldest tricks in the book. If you’re an authoritarian government and someone is criticizing you for being corrupt or inept or whatever, you’re going to accuse them of spreading disinformation.
Rob Wiblin: Yeah.
Max Tegmark: It goes by different names in different time periods. Hundreds of years ago, you would be called a heretic for saying things that disagree with the establishment. As a physicist, I can’t help but think of Galileo, if he had tweeted out, “Hey folks, Earth is orbiting the Sun.” That would probably have been flagged by Pope Urban VIII’s fact-checking system, saying, “This violates our community guidelines. Come get the truth instead from Pope Urban VIII.” We’ve learned in science to be really humble about this business, about truth, and we realize that it’s really hard to figure out what’s actually true. If it were so easy to figure out what’s true that we could delegate it to some fact-checking committee run by the government, or some company, or whatever, you should totally fire me immediately from MIT. We wouldn’t need researchers, scientists.
Rob Wiblin: Yeah.
Max Tegmark: Even after the Galileo fiasco, we spent 300 years believing in the wrong theory of gravity — your British one, Newton’s one — until Einstein realized that was also wrong. To really succeed as a species and actually find the truth, we have to be very humble and acknowledge the fact that sometimes the thing that almost everybody thinks is true is actually not true.
Max Tegmark: We’re much better off if we have a more sophisticated system for going about this. The most sophisticated system we do have, so far, with a track record, is science itself. Right? I started thinking a lot about this New Year’s resolution — that I’m not allowed to whine about stuff, unless I do something about it — and now you’ve heard me whine already for several minutes about the sorry state of the media system.
Rob Wiblin: So what are you doing?
Max Tegmark: It’s put up or shut up. So I thought that since machine learning is contributing to making things worse (in my opinion), and since I also believe that technology isn’t evil but just a tool, then ergo, it would probably be possible to use machine learning in the opposite direction, to help improve the situation. How can you use machine learning for more data-driven fact checking of what’s actually true? How can you use it to empower not the big companies or wannabe authoritarian leaders, but the individuals instead?
Max Tegmark: When you start thinking about it, you realize that it’s just code. It can be used both ways. And what’s also nice about code is that it’s just bits, so it’s free. If you develop something at a university and you give it away to people, it’s open source. MIT is one of the pioneers in open source. Then maybe people can use this to see through a lot of hacking attempts or manipulation attempts. So this was the impetus for this project, and it very conveniently coincided with when they shut down MIT over COVID, so I had all this time on my hands. It’s been really, really fun actually.
Max Tegmark: The first thing I did was just set up some bots to go download 5,000 articles every day from 100 different newspapers, and then started working — first as a student project, and then as a nonprofit — to make machine learning that reads all these articles and tries to give people information that’s useful to them, so they can more easily get a nuanced picture of what’s going on. For example, if you go to improvethenews.org, you’ll see something that looks a little bit like Google News, the news aggregator. We don’t write any of the articles. You can find articles from all sorts of different newspapers, but when you look a little closer it’s also different.
Max Tegmark: For example, we use machine learning to read all these articles all the time and figure out which articles are about the same thing. If you click on something that says, “Boris Johnson got fined for a lockdown party,” there are 52 articles about this, and you can look at them. We have some sliders: if you want to see what the left is saying, you can see that; if you want to see what the right is saying, you can see that. If there’s some article where it’s not so much left/right, but maybe more like big companies versus more establishment-critical perspectives, we have a slider for that.
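As a rough sketch of the grouping step Max describes (this is not the actual Improve the News code, which isn’t shown in this conversation, just a toy illustration with hypothetical headlines), you could cluster articles that cover the same story by comparing TF-IDF vectors with cosine similarity:

```python
# A toy sketch (not the real Improve the News pipeline) of grouping articles
# that cover the same story: vectorize each headline, then link any pair
# whose cosine similarity clears a threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

headlines = [  # hypothetical headlines
    "Boris Johnson fined for lockdown party in Downing Street",
    "Boris Johnson receives police fine over Downing Street lockdown party",
    "Inflation hits a 40-year high as energy prices surge",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(headlines)
similarity = cosine_similarity(vectors)

THRESHOLD = 0.2  # arbitrary; a real system would tune this and use better embeddings
clusters, assigned = [], set()
for i in range(len(headlines)):
    if i in assigned:
        continue
    story = [i] + [j for j in range(i + 1, len(headlines))
                   if j not in assigned and similarity[i, j] > THRESHOLD]
    assigned.update(story)
    clusters.append(story)

print(clusters)  # [[0, 1], [2]]: the first two headlines cover the same story
```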
Max Tegmark: Then, if you’re like most people, you really don’t have time to read the gazillion articles about each topic, because you have a life, right? So we also make it easier for you. We have a team of journalists, and they’ll pull out the facts, which are just defined empirically as the things which all the different newspapers agree on across the spectrum. For example, if it’s Boris Johnson here, it’s a natural assumption that The Guardian, which is more left-leaning, will be more critical of him, and maybe one of the old Rupert Murdoch newspapers, like The Times, is going to be more supportive. So if there’s something that they both agree on, it probably happened.
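The “facts versus narratives” heuristic can be sketched just as simply. Again, this is a hypothetical illustration rather than the real editorial pipeline: treat a claim as an uncontroversial fact only if outlets across the spectrum all report it, and treat everything else as narrative.

```python
# A toy illustration (hypothetical claims, not the real editorial process) of
# the "facts" heuristic: keep only claims that every outlet, left and right,
# reports; the rest are the competing narratives.

coverage = {
    "The Guardian (left)": {
        "Johnson was fined by police",
        "The fine relates to a gathering in June 2020",
        "Johnson should resign",
    },
    "The Times (right)": {
        "Johnson was fined by police",
        "The fine relates to a gathering in June 2020",
        "Johnson has apologised and will stay on",
    },
}

facts = set.intersection(*coverage.values())
narratives = {outlet: claims - facts for outlet, claims in coverage.items()}

print("Facts:", facts)
print("Narratives:", narratives)
```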
Rob Wiblin: Right, yeah.
Max Tegmark: You can see fact, fact, fact, fact, this uncontroversially happened, then you can click away and read other stuff, if all you want is the facts and you’re in a hurry. But if you want to know the controversy too, or you want to know what your Tory uncle is going to say, or your Labour neighbor is going to say, whatever, we also pull out the key controversy in the key narratives on the different sides. Rather than get sucked into just being told one side, you can kind of rise above the controversy and look down: Here’s the controversy. This is what happened. Here’s what they’re fighting about.
Max Tegmark: I think about it kind of like, if you have two friends who got divorced or are getting divorced, if you want to know what happened, you probably want to talk to both of them separately. And anything that they both agree on, it probably happened. Right?
Rob Wiblin: Yeah. Yeah. Yeah.
Max Tegmark: It’s also interesting to get the two narratives. Then another thing, if you look at this Improve the News site, sometimes you see also a “nerd narrative.” I don’t know if you’ve come across any of those.
Rob Wiblin: I did see that. Yeah, yeah.
Max Tegmark: This is another thing which is very central in scientific truth finding, where you get trusted as a scientist, not because you have a nice tie, or you’re really rich, or your daddy is the king. You get it because you made predictions in the past that turned out to be correct. Right?
Rob Wiblin: Yeah.
Max Tegmark: Einstein made a bunch of predictions, which sounded completely insane, such as that time would slow down when you move fast, and Mercury would move different from what Newton said. A lot of people said this is nuts. But then they measured. His predictions were true, so his trust score went up. In the same spirit, there’s this project called Metaculus, which was founded originally by my friend and Future of Life cofounder Anthony Aguirre, and Gaia Dempsey, and a bunch of other awesome people, which is exactly this kind of trust tracking system. We’re working together now, to build up trust scores for journalists, for newspapers, for public figures — not based on anything subjective, but just on their actual track record with past predictions.
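One simple way to turn a track record of probabilistic predictions into a trust score, as a minimal sketch of the general idea rather than Metaculus’s actual scoring rules, is the Brier score: the average squared gap between the stated probability and what actually happened, where lower means better calibrated.

```python
# A minimal sketch (an illustration only, not Metaculus's actual scoring) of
# turning a prediction track record into a trust score: the Brier score is
# the mean squared error of stated probabilities, so lower is better.

def brier_score(forecasts):
    """forecasts: list of (stated_probability, outcome) pairs,
    where outcome is 1 if the event happened and 0 if it didn't."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

pundit_a = [(0.9, 1), (0.8, 1), (0.7, 0)]    # mostly right, reasonably calibrated
pundit_b = [(0.95, 0), (0.9, 0), (0.99, 1)]  # confident and usually wrong

print(round(brier_score(pundit_a), 3))  # 0.18  -> stronger track record
print(round(brier_score(pundit_b), 3))  # 0.571 -> weaker track record
```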
Rob Wiblin: Right. Yeah. Then on Improve the News, I guess your editors go and look for related prediction markets or aggregated forecasts that are relevant to the news story, and then throw that in as the nerd take on this issue. It often has a more neutral bent.
Max Tegmark: Exactly. We have a bot, of course, that assists with this also, to find matching ones. And it’s been quite entertaining actually. We had an article just recently where a politician was saying that inflation is definitely not going to rise now. Then the nerd narrative was that there was a 67% chance that inflation would have gone up by this much by this date. You see that what the politician is saying is clearly out of sync with a more scientifically based take. It was also very interesting during the Ukraine invasion to see how the nerd narrative said that there was a 90% chance that Kyiv would fall by April 1. That pretty quickly went to 80%, 60%, 20%, 5%, 2%.
Rob Wiblin: Yeah. I was watching that very closely, yeah.
Max Tegmark: Cool. These are just a couple of examples of this bigger project we’re doing, where the goal is simply to take all the scientific truth-finding ideas we have and import them into mass media in a way that’s really accessible and easy to use, without any ads, or subscriptions, or any BS like that — just giving it away for free, to make it easier for people to get a more nuanced understanding of what’s actually going on.
Rob Wiblin: Yeah. I took a look around Improve the News, prepping for the interview. I think it seems really cool. You can find different news stories to click through. It’ll aggregate lots of different news stories from many different news sources, including many that I had never heard of.
Max Tegmark: Awesome.
Rob Wiblin: I think because they’re a little bit off the wall, potentially, or a little bit less mainstream, less the kind of stuff that I typically read, which is probably great to get in my media diet a little bit. Then you’ve got your editors who try to explain what is the different spin that different opinion clusters are putting forward. So here’s the pro–Boris Johnson spin. Here’s the anti–Boris Johnson spin.
Do people actually just want their biases confirmed? [02:16:15]
Rob Wiblin: I guess I’m not quite sure how this website can contribute in a big way to fixing this problem. Mostly because I don’t think that many people are going to read it. Maybe because it’s a little bit drier, a little bit more boring. I think there’s a reason why a lot of news sources tell people very exciting stories and have these strong narratives. I don’t think it’s just because of the marketing department of the arms industry. I think it’s because people find it exciting to cheerlead for one side and hear strong opinions about who’s good and who’s bad. This kind of higher-level thing, I suspect, even for people who really like to understand the world, they might find it a little bit too dry to really get them addicted to opening up this particular tab in their browser. What do you make of that?
Max Tegmark: First of all, I think there is a very common narrative around that people don’t want to hear opinions that disagree with their own; they just want their biases confirmed. I hear it all the time. I would say actually this is what most people seem to think is true at the moment, even though there’s some really interesting work by Professor David Rand at MIT, arguing that it’s false.
Rob Wiblin: Huh.
Max Tegmark: That this is actually just a really convenient narrative that, for example, Big Tech companies are pushing to blame the consumer. In the same way that Big Tobacco liked to blame the consumer for smoking. Rather than saying, “Oh, we maybe shouldn’t put cigarette vending machines in schools and market cigarettes to 10-year-olds,” they say, “People want to smoke. They just want to do these things, and we have no responsibility.” In the same sense, it’s very convenient if you say, “Well, this is just human nature and the algorithms that we have have nothing to do with it.”
Max Tegmark: What David Rand found in particular is that, yes, in the actual media ecosystem out there now, people do often gravitate to things that confirm their biases. And we’ve seen this. But he also found something really interesting: that if information that disagrees with them is presented in a nuanced and respectful way, they’re often very interested. Think about it. Suppose you have an idea for a new, very different podcast. You tell it to a friend you’ve known for many years, and she says, “Really interesting idea, Rob. But I think it’s going to fail. Do you want to know why?” What would you say?
Rob Wiblin: Yeah, obviously I’d want to know, yeah.
Max Tegmark: Why? Why wouldn’t you just dismiss it? Say, “I just want my biases confirmed?”
Rob Wiblin: Well, in the case where it relates to something that I’m actually doing, I actually stand to lose if I make a bad decision and I produce a podcast that nobody listens to. I’ve wasted my time. When it comes to broader social issues, if I have the wrong views about Boris Johnson’s parties, it makes absolutely no practical difference to my life, because I’m not making any decisions about it anyway. That’s one reason.
Max Tegmark: Well, what if you’re about to vote, and you’re persuaded that if you vote for Boris Johnson, you’re going to be financially much better off. If this old friend of yours says, “There’s this interesting thing you might want to know: I actually think your post-tax, inflation-adjusted income is going to go down.” Wouldn’t you want to hear it?
Rob Wiblin: To some degree, but voting versus buying a product are extremely different things. If I spend money on a bike and it’s a rubbish bike, then I’ve wasted all of my money. If I vote for the wrong person, in the seat that I happen to live in in the UK, the probability of ever changing the outcome is zero, because it’s not a marginal seat. There’s no chance that my vote could change what the government is. So in actual fact, it’s neither here nor there really, whether my views about that are right or wrong.
Max Tegmark: But don’t you still put some effort into trying to figure out what the actual impact is going to be of different politicians’ policies? Don’t you feel it affects the country?
Rob Wiblin: To some extent. I mean, the motive that does make sense is if you’re altruistic and you care about society as a whole, and you do live in a marginal seat where your opinions matter, then it could make sense to take the effort of reading things that you find challenging in order to improve your views. I would like to think that I try to do that within reason, but the truth is probably I am more motivated when it comes to concrete decisions in my life, like who to be friends with. The accuracy of my information there bites a lot harder, and I’m more likely to be scared of making a wrong decision. Do you see what I mean?
Max Tegmark: There is of course the issue of voting and whether your particular vote will make a difference. But in the aggregate, I think you’ll agree that Adolf Hitler actually won an election, right? Most Germans thought that voting for Hitler in that election was going to work out great for them. That turned out to be factually incorrect.
Rob Wiblin: Sure.
Max Tegmark: Did not work out so great for those Germans. If they had had access to some less biased information, and realized that it was probably not going to work out so great, maybe they would’ve voted differently.
Rob Wiblin: Yeah. I mean, it just seems like a collective action problem, where if you could get everyone together to all decide either to do a lot of research or not do very much research at all about whether Hitler is going to be a good leader, then they would likely decide to all put in a bunch of effort. But as an individual, given that your vote is so unlikely to change the outcome — we’re talking one in many millions — it makes more sense just to emigrate if you don’t like it. Now of course, people are more altruistic than that and they do care about their society, but you don’t exactly get feedback when you make a mistake.
Max Tegmark: It feels like we’re conflating two separate problems, which are both interesting. It seems like, on one hand I could take everything you said and turn it into an argument for why you should never vote in the first place, just because it’s a waste of your time. Right?
Rob Wiblin: Well, you only should vote if you place a lot of weight on the wellbeing of others, I think. Or you enjoy it for its own sake, expressing your opinion.
Max Tegmark: This is a very interesting argument we can have: should you vote, should you not? You can also argue — which is an excellent topic for effective altruists — to think of alternative voting systems and other ways of influencing society, where people have more of an interest to actually wield power over society. But I think that is quite separate from the question of the information flow, and whether it’s a good thing or a bad thing to make it easier for people who want to have it, to get accurate information about the world. I think the latter is also quite important.
Max Tegmark: If you go back and look at the big mistakes countries have made — I think, personally, that the Germans made a huge mistake voting for Hitler, and I think it was also a huge mistake for the US and the UK to invade Iraq, in hindsight. If you want to make sure that we don’t make these kinds of mistakes in the future — when we have big elections where people are arguing about what you should do with artificial intelligence, or universal basic income, or whatever — part of the solution has to be some way of getting reliable information, right? If that doesn’t exist, you don’t even have the luxury of casting a well-informed vote. What particularly motivated me here was my pledge to not bitch about things if I wasn’t going to actually do something about it. It wasn’t that I earn the right to complain only if I dramatically transform the future of the world. It was just “do something.”
Max Tegmark: I agree with you that a lot of people don’t even read newspapers at all. But even most of my colleagues where I work — who are very educated and actually read newspapers online all the time — even most of them, I find, have been very hacked in certain ways. They’re usually very ignorant about artificial general intelligence and these issues. Many of them were for the Iraq War before they were against it, et cetera, et cetera. It means that there are lots of people out there who you don’t even have to convince to go look for this kind of information, who just don’t have access to it.
Max Tegmark: What happens instead? This links back to David Rand’s research. Suppose someone says, “I’m a Democrat and I’m kind of curious what those Republicans think, so where do I find that out?” They go click on Breitbart News, they look at the front page, and already after the first 20 seconds they feel really offended. They see a photo of their favorite politician where someone clearly looked at 20 photos and picked the ugliest one.
Rob Wiblin: Yeah. I literally had this experience. I just felt like, “I can’t take this, I’m closing this tab.”
Max Tegmark: Yeah.
Rob Wiblin: I think I might have lasted 30 seconds rather than 20, but yes.
Max Tegmark: We did a machine learning project actually, where we just tried to predict the political bias of a newspaper by how ugly or good-looking Donald Trump and Hillary Clinton looked, and it worked frighteningly well.
Rob Wiblin: That is very interesting.
Max Tegmark: What we’re doing differently with Improve the News is we have this nuance slider. You can see contrary opinions, but actually presented in a very respectful way. My vision for this was, suppose I’m sitting on an airplane and I realize that the person next to me has a very different political opinion. If I ask them very respectfully to explain a bit what they think, I’m going to get a pretty friendly explanation. Right?
Rob Wiblin: Yeah.
Max Tegmark: They’re not going to try to piss me off. So if I’m going to go read that sort of stuff, I would like something like that also — which is exactly the opposite of what gets the most clicks, so it’s actually quite hard to find people who disagree with you respectfully. The financial incentive in media today, where it’s all about clicks, is to always present the opposing narrative in a mocking fashion, to make it sound as ridiculous as possible.
Max Tegmark: Take some random controversy — abortion, for example. You can either describe the position of those who are against abortion as saying that they believe the unborn child has a soul that needs to be protected, or whatever — that’s maybe what they would say. Or you could say that anti-abortionists are total nutcases who just want to completely take away any right that a woman has over her own body. Or on the other side, you could accuse everybody who’s for legal abortion of being a baby killer. It’s the latter kind of description that we usually face in the media, because that’s what gets the most clicks. And the effect is that it makes people think that those who disagree with them are basically insane — just bad people who have to be destroyed.
Max Tegmark: Whereas we try to do exactly the opposite with our narrative. We always present each one the way that side would present it, so we hire journalists from across the spectrum and we have each narrative written by someone who kind of agrees with it, so that you can come away reading it and be like, “Well, I really disagree with that point of view, but I see where they’re coming from.” That way, we spread less hate and more… not love necessarily, but at least more understanding.
Rob Wiblin: Yeah. It’d be great to get that paper you’re referring to that suggests that people are interested in hearing respectful presentations of views that they disagree with.
Max Tegmark: But you already confirmed that you yourself are interested in that also, right? It’s not such a shocking conclusion.
Rob Wiblin: I think that does make sense. It’s also true that I enjoy reading people who agree with me explaining why I’m right. I think it’s the duality of humanity, right?
Max Tegmark: Yeah.
Rob Wiblin: We have different conflicting tendencies. I wasn’t saying I wanted to read it because I think it’s bad — I’m very interested to see how large that effect is. It’s making me wonder: is there a business model where you start a newspaper that primarily publishes explanations of the Democratic position written in a way that is respectful and designed to be persuasive to Republicans, and vice versa? Where you have these kinds of op-eds that are really not aimed at people who already agree with the view, and that go out of their way to make it appealing to people who naturally wouldn’t agree with the position being advocated?
Max Tegmark: I have good news for you.
Rob Wiblin: Yeah?
Max Tegmark: Yeah. I have good news for you. They already exist. They just don’t make a lot of money and you don’t see them very much. So with this site we can make it much easier for you to find them.
Rob Wiblin: OK, yeah. But people share stuff all the time. There’s a kind of competitive market for content. If it’s the case that people do really love hearing respectful, new information that disagrees with their preconceptions, then why aren’t these websites making bank?
Max Tegmark: You must be familiar with Daniel Kahneman’s System 1 and System 2?
Rob Wiblin: Yeah. Yeah.
Max Tegmark: Media mainly sells itself now through System 1. It’s an impulse thing. You click on this or you click on that. Whenever you select the newspaper with your System 1, it’s going to be the clickbait that wins. Whereas what you really want is to pick out your news diet using your System 2. Just like your food diet, you’re better off picking it out with System 2. You don’t want to go to the supermarket super hungry, because then System 1 gets too much of a vote in what you buy, right? You want to think in advance what sort of diet you want, and plan ahead.
Max Tegmark: The idea with Improve the News is exactly that: that you can think through, “What kind of diet do I really want?” We’ll see how it goes — it’s very early days. It’s been a ton of fun working on it, I have to say. And it’s been very empowering how quick and easy it’s been to make something that a lot of people use, thanks to machine learning. This project would’ve been completely impossible to do even 10 years ago. Earlier today, for example, right before this, we were looking at using some of the very latest large language models that you and I discussed to further help with this. There’s so much cool stuff one can do to make it much, much better.
Max Tegmark: The greater value proposition here for society, I think, is to create a truth and trust system that people from across political divides — and also across geopolitical divides — can all place a great deal of trust in.
Max Tegmark: It’s really striking how different things are with that — for example, in astrophysics versus politics. If you really want to know the distance to the Andromeda Galaxy, there is a pretty good system where you can do a few clicks and you’re going to get an answer, and you’re probably going to believe it, regardless of whether you’re British, or Chinese, or American, Republican or Democrat. Even when we have disagreements in science, you go to a conference, you might even have a fun debate where two people have very different points of view, but they go for drinks afterwards. If you ask either one of them to articulate the other side’s arguments, they’ll do a really, really good job. They’re not going to go into ad hominem attacks; they’ll actually explain.
Max Tegmark: Imagine how amazing it would be if you could get closer to that place for the existential risk issues that you care about so much, about the future of humanity. Where there is something which the Chinese, and the Americans, and the British, and the Russians, and the Ukrainians all actually have a shared belief in, and people across the political spectrum in our own countries. Then we’d all be working on the same team, instead of spending most of our time fighting against each other.
Rob Wiblin: Yeah. I think one interesting experience I had looking at Improve the News was that seeing the headlines from places I wouldn’t normally read made me curious about what people from that perspective think, where previously it wouldn’t have been very salient. So I’m reading stuff about Ukraine and I’m like, “What do non-interventionists say about Ukraine? I really haven’t encountered much of that.” And just seeing that it’s out there, I’m more likely to go and read something like that than when I’m scrolling through Twitter — where realistically, I don’t have people with very different views whose feeds I’m checking all the time, which is probably a bad habit. But nonetheless, getting more conflicting ideas into people’s news feeds regularly is probably a good idea, and Improve the News can definitely do that.
Max Tegmark: That’s a very interesting thing you say there. In fact, we also did this nerdy science paper — Samantha D’Alonzo, this awesome MIT student, and I — where we just asked the question: could we map the news bias landscape in a completely data-driven way? Because if you ask human pundits, a lot of people on the left will say that Fox News is biased, but The New York Times is not and CNN is not. And then people on the right will say, no, Fox is not biased, but CNN is. It’s like Einstein’s relativity — everyone measures bias relative to where they stand — so what do you make of that? We wondered: what if you just listen to the data itself? So we took one million articles from 100 newspapers and gave the machine learning this really simple task: for each article, predict which newspaper wrote it.
Max Tegmark: And it was surprisingly good. So then, of course, that raises the question of how it did it. What is it about the article? It must be because there’s some kind of bias. What did it pick up on? And we discovered that it picked up on some very simple stuff, which is actually quite entertaining. It noticed that the frequencies of certain phrases vary dramatically. So if you take abortion articles, for example: some newspapers would use the word “fetus” a lot, whereas others would talk about “unborn babies.” And if you looked at articles about immigration: some would talk about “undocumented immigrants” a lot, and in others you would hardly see that phrase, but you would often see “illegal aliens.”
Rob Wiblin: Yeah. Yeah.
Max Tegmark: And just from that, the machine learning took all 100 newspapers and put them on a spectrum. It was actually not a one-dimensional spectrum like you might think — it was a two-dimensional spectrum. One axis, the x-axis, very predictably matched what we humans would interpret as left to right — although the computer, of course, had no idea what to make of it. You could see how on one side there is CNN and The Guardian and so on — and they talked a lot about fetuses and undocumented immigrants. And on the other side, you have Fox and Breitbart and so on — and they talked a lot about illegal aliens and unborn babies.
Max Tegmark: But there was also the vertical axis. We were just like, “What’s this?” It turned out to be basically as significant to the machine learning as the left/right axis. So we started looking more at the phrases that differed there. And we noticed that, for example, near the bottom, those newspapers used a lot of phrases like “military-industrial complex,” or “Big Oil” with a capital B and capital O. Whereas at the top, they didn’t talk about the military-industrial complex — they talked a lot about the defense industry. And instead of writing “Big Oil,” they would write “oil producers.” And looking more at it, we also noticed that the newspapers up at the top were all big newspapers and the ones at the bottom were all small newspapers.
Max Tegmark: So this we interpreted as the establishment axis of bias, where the bigger ones were closer to power. And the example that you brought up there — an anti-interventionist stance on Ukraine or Iraq or whatever — a lot of the historical data shows that stance has typically been pushed by the small ones. In fact, it was quite interesting that in the runup to the invasion of Iraq over the weapons of mass destruction, The New York Times was one of the outlets that pushed the most for the invasion, and Fox News landed right next to The New York Times on the establishment axis on a lot of these topics. So there’s actually not much difference there. And what that means is that this kind of bias is so much easier to miss if you’re just a normal person, because all the big newspapers have it in the same way. And the only ones that don’t have it are the ones you’ve never heard of.
Rob Wiblin: Right. Yeah, exactly.
Max Tegmark: But if I’ve learned anything as a scientist, it’s that if you really want to figure out what’s going on, it’s important to also listen to scientists holding minority views. Because you never know in advance who’s right. And sometimes some weird guy named Albert Einstein or whoever comes in from left field — someone who couldn’t even get a faculty job in physics — but maybe he’s right on this one.
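[Note: As a rough illustration of the data-driven bias mapping Max describes — a classifier that predicts the outlet from simple phrase counts, plus a two-dimensional projection of outlet-level phrase profiles — here is a minimal sketch. The toy corpus, model choices, and axis interpretations are illustrative assumptions, not the actual setup of the D’Alonzo and Tegmark paper.]

```python
# Illustrative sketch only -- not the paper's actual code.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA

# Tiny toy corpus standing in for the ~1 million real articles.
articles = [
    "lawmakers debate protections for the unborn baby",
    "study tracks health outcomes for the fetus",
    "crackdown on illegal aliens announced at the border",
    "new legal pathway proposed for undocumented immigrants",
]
outlets = ["OutletA", "OutletB", "OutletA", "OutletB"]

# Bag-of-phrases features: unigrams and bigrams such as "unborn baby" vs "fetus".
vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(articles)

# Task 1: predict which outlet wrote each article from its phrase counts.
clf = LogisticRegression(max_iter=1000).fit(X, outlets)

# Task 2: average each outlet's phrase frequencies and project the outlets to 2D.
# With enough real data, interpretable axes (left/right, establishment) can emerge.
names = sorted(set(outlets))
labels = np.array(outlets)
profiles = np.vstack([X[labels == n].mean(axis=0).A1 for n in names])
coords = PCA(n_components=2).fit_transform(profiles)
print(dict(zip(names, coords.tolist())))
```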
Government-backed fact-checking [02:37:00]
Rob Wiblin: One approach to dealing with this — I guess the term people seem to use these days is “sensemaking,” which must have come out of some random branch of academia — but one way people are talking about solving the problem of people having incorrect beliefs about politics and the world is government-backed fact checking. I think there’s some proposal for a sort of anti-disinformation agency that the US government would operate.
Max Tegmark: Oh, they announced it just the other week.
Rob Wiblin: What happened?
Max Tegmark: Yeah. We have one now. They just announced it the other week.
Rob Wiblin: You have one now. OK. And I think they were going to be able to put information below tweets when they thought it was inaccurate. Is that the idea? So that they could link you off to some page to get corrected about these wrong ideas?
Max Tegmark: We don’t know yet exactly what powers have been vested in them. Presumably quite a bit more than that.
Rob Wiblin: OK. I think you’re against that. Do you want to explain why? I guess above and beyond just the fact that institutions have been wrong in the past.
Max Tegmark: It definitely did not give me great vibes. And I think, frankly, the people who are pushing this are generally doing it with the best of intentions — it comes from a good place. But you know the saying: the road to hell is paved with good intentions. Look, for starters, at the history of science: we’ve seen how hard it was to know what the truth was. And physics has been so much better off by being very disrespectful of authority when it comes to truth, and having people like Galileo publish things.
Rob Wiblin: Sticking it to the man.
Max Tegmark: Yeah. We talked earlier in this conversation about regulatory capture. So as soon as you have any trusted entity in society, then it’s going to be in the interest of anyone or anything that’s powerful to try to control that entity. Why is it that Big Tobacco funded scientists so much? Because scientists were trusted. So if you can get scientists a bit more on your side, or a little less against you, great. Fund them.
Max Tegmark: If there is any kind of fact-checking entity that people start to trust, of course the politicians are going to start leaning on it and the companies are going to start leaning on it. It’s just predictable dynamics, right? And even if it starts out very idealistic, unless it has really good mechanisms built in to keep it truly independent of power, power will obviously lean on it. And when it’s created in the first place by people who already have a lot of power, you’ve really got to worry about that.
Rob Wiblin: Yeah.
Max Tegmark: Sorry about getting a bit on my soapbox about this, but I have a lot of friends who’ve lived in very non-democratic countries. My wife herself grew up in a communist dictatorship. And I think a lot of people in the West have a naive understanding of how those governments actually controlled people, thinking that they generally just shot everybody who engaged in wrongthink. The sad truth is they normally didn’t have to shoot people — that was very, very rare. Mostly what they had to do was make sure you knew that you wouldn’t get a promotion if you said the wrong thing. My mother-in-law wanted to become a teacher, but she wouldn’t get admitted to that program at university because her background was considered a bit too bourgeois.
Max Tegmark: And a famous physics professor friend of mine, Alex Vilenkin, didn’t quite do what the communist government in the Soviet Union wanted. So all of a sudden he couldn’t take up his PhD admission in physics — which was very apolitical — because he’d been put on some weird blacklist. It was just these little nudges here and there. That’s usually all it takes: you don’t get the promotion, you get publicly shamed a little bit. And it’s interesting that the people here where I work at MIT who are most freaked out about things like this are almost all foreigners. They come from China, they come from Iran, they come from Eastern Europe.
Rob Wiblin: Yeah.
Max Tegmark: They’re like, “Holy shit. I’ve seen these tendencies before. I don’t like it.” Whereas most of my American friends are like, “What are you worried about?”
Rob Wiblin: If I can get on my soapbox for a minute, it strikes me as an absolutely daft idea, for two reasons. Let’s say that you’re someone who’s really worried about Trump and the Big Lie — the claim that the election was stolen. Then you should be against this, because the person who’s most likely to be appointing the head of this agency — or at least the most likely presidential winner in 2024 — is the person who’s actively promoting the disinformation that you’re against. And obviously if you’re somebody who supports that view of things, who believes that the election was stolen, you’re not going to support it either, because to you the agency would just be claiming the reverse. So from both points of view, this is an incredibly risky and unappealing approach to dealing with the problem.
Max Tegmark: Can I just chime in with an anecdote about this, before you move on from the stolen election? I just saw a tweet the other day from right after Trump won the election against Hillary Clinton, saying that Trump had stolen the election from Clinton. And you know who tweeted it? It was Biden’s new press secretary. So this just goes to show that what goes around comes around.
Rob Wiblin: Yeah. So that’s one issue: it seems like a policy being developed as if the world is going to end in two years’ time, and all we have to worry about is the immediate term. The other thing is, you’ve been talking about how there are particular institutions and corporations that have an awful lot of influence over the information we consume — that are potentially far too powerful in their ability to shape public opinion. But almost the only institution that is more powerful than them is the US federal government, or governments in general. So you’ve got governments and you’ve got these major corporations. Rather than empowering probably the biggest single player in information and public opinion as a counterweight to this other set of organizations, it seems like you need some third party or some more distributed process to establish truth and share information. Creating more incredibly concentrated sources of power and influence almost by definition cannot solve the problem.
Max Tegmark: I couldn’t agree more. I couldn’t agree more. As Lord Acton quipped, power corrupts and absolute power corrupts absolutely. So when we champion this democratic ideal that we should have more distributed power, it’s exactly to avoid this sort of power concentration. And frankly — I maybe shouldn’t drop the F bomb — but I remember in Sweden when I was studying political ideologies, the definition they gave of fascism was actually not that you walked around in leather boots in a funny way; it was simply the merger of corporate and government power.
Rob Wiblin: Yeah.
Max Tegmark: Right? And if you have governments and the most powerful companies working together to decide what’s true or not, again, it feels like, what could possibly go wrong?
Rob Wiblin: Because then, who’s left?
Max Tegmark: So on decentralization: one thing that’s just super exciting about modern technology and IT in general — and another very controversial topic — is, of course, cryptocurrencies. DeFi, as the crypto fans call it: “decentralized finance.” A lot of people are also talking about DeGov — how you can have more decentralized decision making enabled by these technologies. I think the jury is out on exactly what’s good and bad and how things are going to work out, but this is a very interesting space. I hope more people with high ideals who like to think big will go into thinking about these things, because I think it’s not obvious that more AI will necessarily and inevitably cause power to get more concentrated. It could also be the opposite. And maybe there are actually some clever sociotechnical innovations that can happen which cause power to become more decentralized.
Max Tegmark: It was very difficult to have decentralized power when people far apart couldn’t communicate with each other, right? When they had no way of building trust without being part of the same empire or whatever. But a lot of the tools are coming into place now with blockchain and other technologies. And I actually think of Improve the News in that very spirit — where you just empower everybody with a better bullshit detector. You mentioned disinformation — and pardon my French, but think about all the different ways in which things can be bullshit in the media ecosystem. Disinformation or misinformation means that something is claimed to be true which is actually false, right?
Max Tegmark: In the ML analysis we’ve been doing on news bias so far, you see a fair bit of that. And yes, it’s a problem. But it’s not the biggest problem. A much bigger way people bias things is simply by omission — what you just don’t mention. We talked about that earlier: people talked a lot more about how many sanctions they should put on Russia over Ukraine than they talked about how many sanctions they should put on the US for invading Iraq. We talk much, much more about the terrible tragedy that Putin has caused now in Ukraine than we talk about the terrible tragedy that’s been caused in Yemen, by a government that the UK and the US have been supporting.
Max Tegmark: And these sorts of omissions are things which are just as easy to detect for machine learning as the fake news or the disinformation, but they’re much harder for people to detect. Because we see what we see; we don’t see what we don’t see, right? And if none of our friends are talking about it either, it’s easy to not even know that it’s going on.
Rob Wiblin: Well, also, you have to omit almost everything, because you can’t go around describing every fact that’s true in the world. So it’s harder to establish that an omission is a mistake: you’ve omitted 99% of stuff, and it’s hard to say whether something should be in the tiny fraction of things that gets highlighted or not.
Max Tegmark: Although the cool thing — and this is one of the visions for some of these tools we’re building — is what happens when you can find a real gem of an omission. Just like if you go out in the forest and look for gems: it’s very hard to find one randomly, but if you do find one, it’s really easy to convince your friends, “Look, this is actually a gem. Look at it.” Say the machine learning finds some super glaring omission. For example, here is this great article about why wind power is terrible because wind turbines kill birds. You look at a bunch of facts about how many birds were killed, and it looks like a respectable article — it’s very hard to see what the omission is. For this particular one, it turned out that, first of all, they omitted that windows actually kill 2,000 times more birds than wind turbines.
Rob Wiblin: Seems relevant.
Max Tegmark: You’ve probably seen this kind of accident sometime in your life, where a bird flies into your window and breaks its neck. And cats kill 8,000 times more birds than windows. But the article didn’t talk about banning cats, or say cats were bad. So once you see it, you’re like, “Oh yeah, that’s kind of a weird omission.” Because if they really wanted to talk about the tragedy of birds getting killed, they could say, “Here is this problem: birds are getting killed. We’re effective altruists, we want to reduce this. Here are the top 10 sources of bird deaths” — and then they wouldn’t even get to wind turbines, because they’re not in the top 10. And then another omission was that this article was actually funded by an organization which, according to Wikipedia, is a fossil fuel lobby group. They didn’t mention that either. So you see where I’m going with this.
Rob Wiblin: Yeah. Sometimes it’s hard to explain away.
Max Tegmark: Well, once you find it. But the hard thing is finding it. So wouldn’t it be cool if you could build some machine-learning-powered tool, or crowdsourced tool, or whatever, that makes it really easy for you to find real gems — and also very easy to confirm them if you want? That empowers you to see through a lot of BS manipulation that’s easy to miss otherwise. So I’m actually fairly hopeful here that by bringing in these technologies, giving them to people for free, and developing them with the sole purpose of decentralizing power and giving people bullshit detectors, you can actually do a lot. There’s no financial incentive to do it — that’s of course why it hasn’t happened. But if you’re a nonprofit and you’re willing to just spend some money on it, you actually can.
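[Note: Purely as an illustration of the kind of omission-spotting Max gestures at here — and not a description of Improve the News’s actual tooling — one crude approach is to compare which subtopics an article mentions against how often those subtopics appear in broader coverage of the same subject, and flag the ones that are common elsewhere but absent from the article. The numbers below are hypothetical.]

```python
# Illustrative sketch only -- a crude omission flagger, not Improve the News's tool.
from collections import Counter

def candidate_omissions(article_terms, corpus_term_counts, top_k=10):
    """Return frequent terms in the wider corpus that the article never mentions."""
    mentioned = set(article_terms)
    ranked = [term for term, _ in corpus_term_counts.most_common()]
    return [t for t in ranked if t not in mentioned][:top_k]

# Hypothetical counts, loosely echoing the bird-deaths example above.
corpus_counts = Counter({"cats": 9000, "windows": 7000, "wind turbines": 400})
article_terms = ["wind turbines", "birds", "killed"]
print(candidate_omissions(article_terms, corpus_counts))  # ['cats', 'windows']
```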
Would a superintelligence seem like magic? [02:49:50]
Rob Wiblin: And it’s pretty straightforward. OK, one final question I had, that I definitely want to get out before we wrap up. I suppose a lot of uncertainty about how the AI revolution might play out seems to rest on this question of: what is an agent that is much more intelligent than humans going to actually be able to accomplish? Because it seems like you could have a mind that’s much larger and much more advanced than that of humans that still won’t necessarily be able to do things that are completely magical to us. It might be able to advocate very persuasively on behalf of its interests, but only somewhat more persuasively than a very persuasive person, such that it can’t trivially use its persuasiveness to outwit us.
Rob Wiblin: And it seems like this is a very hard question to answer because, almost by definition, it’s extremely hard to predict exactly what a being that’s much smarter than you would do in a domain, because if you could, then you would be as smart as that and you’d be able to do it yourself. Do you have any view on this question of whether or not a superintelligent AI’s behavior and achievements would seem like magic to us? Or would it just be more recognizably like a person but more skilled?
Max Tegmark: So that’s partly a question about human psychology and partly a question of physics. Let’s do both of them. If, for some reason, it just wants to take over Earth and for some reason can’t persuade us, it could just build a drone army and kill us all, or remove the oxygen from the atmosphere, or do something else we couldn’t prevent. It’s kind of like if humanity somehow set its mind on killing all the horses on the planet: I’m sure we could figure out a way, even if we couldn’t persuade them to commit suicide voluntarily or whatever. That, I think, is a given.
Max Tegmark: On the other hand, the limits set by the laws of physics can’t be surpassed no matter how smart you are, right? That’s why I actually ended up having a lot of fun when I wrote the book, writing a whole chapter thinking through precisely this. So we know there’s a speed limit: the speed of light. We know there’s a limit to how much mass you can put in one place before it turns into a black hole and ruins whatever we were trying to do.
Max Tegmark: There’s also a fundamental limit on computation that Seth Lloyd once worked out, et cetera, et cetera. Now the good news is that each of those limits is orders and orders of magnitude above where we are today. Take how much you can compute with a kilogram of matter: if I remember correctly, we’re a million, million, million, million, million times away from that limit right now. So I think for all practical purposes, it would seem like total magic to us if we could experience that kind of technology.
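[Note: For readers who want the back-of-the-envelope version of the Seth Lloyd limit Max mentions: Lloyd’s bound caps the operations per second of a system with energy E at 2E/(πħ), so one kilogram of matter (E = mc²) works out to roughly 5 × 10⁵⁰ operations per second. The exact size of the gap to today’s hardware depends on which current figure you plug in, which the conversation doesn’t specify.]

```latex
% Back-of-the-envelope sketch of Lloyd's (2000) bound for 1 kg of matter.
\[
  N_{\mathrm{ops/s}} \;\le\; \frac{2E}{\pi\hbar}
  \;=\; \frac{2\,m c^{2}}{\pi\hbar}
  \;\approx\; \frac{2 \times (1\,\mathrm{kg})\,(3\times10^{8}\,\mathrm{m/s})^{2}}
                   {\pi \times 1.05\times10^{-34}\,\mathrm{J\,s}}
  \;\approx\; 5\times10^{50}\ \text{operations per second.}
\]
```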
Rob Wiblin: Are there any examples of that? Because when I think about the areas where AI is already surpassing people — it’s better than us at chess, at Go, at computer games, in some cases, I think, at computer chip design, at predicting what movies you’re going to like, and also at recognizing things, so visual recognition — in all those cases it’s clearly very impressive and better than us. I’m sure it plays Mario super well and comes up with Go strategies that we didn’t. But it’s not unrecognizably different, even though it’s had quite a bit of time to surpass us. Is that maybe just because those domains are so restricted in what you can do? I mean, ultimately in Go all you do is put down a stone at a point, so how impressive can it be? And maybe if you had the full action set of a person or an agent in the world, then it would be able to think of much cleverer stuff?
Max Tegmark: That’s a really fun question. I mean, first of all, it’s important to remember that the AI we have today is still pretty dumb. So if you’re underwhelmed by some aspect of it, you should be. Even though your laptop can multiply numbers together about a billion times faster than I can, it’s not that profound. A rocket can go a lot faster than I can go, but it’s not qualitatively different.
Max Tegmark: There’s probably a limit to how impressed we can get, just because you have to be pretty musical to really hear the difference between the world’s best musician and the world’s thousandth-best musician. Similarly, if you’re trying to impress a squirrel with your intelligence, then you, Einstein, whoever — you would probably all impress the squirrel by about the same amount, right? Because it’s all so far above what the squirrel can appreciate: a lot of the subtleties are lost, and you’re just making these weird noises with your mouth. Maybe the thing that would impress us the most is the technology that the AI then goes and builds, which we can actually see.
Rob Wiblin: That makes sense. I guess a squirrel could appreciate a plane, even if it doesn’t understand all of the subtleties.
Max Tegmark: Although maybe not as much as you do, because it might think it’s just a weird bird.
Rob Wiblin: That’s true. That’s one potential issue: even when ML algorithms are doing stuff that’s extremely impressive, we might actually struggle to perceive it — which is just a little bit alarming. I guess hopefully we will continue to be able to stay alive long enough to see some of those things, and hopefully they’re operating in our interest rather than against us.
Max Tegmark: If I can just have 20 more seconds, as we’ve talked about some negative stuff, I just want to end on a positive note.
Rob Wiblin: Yeah. Go for it.
Max Tegmark: It’s so natural to end up talking more about the negative, because we humans are always better at imagining bad things than good things, right? That’s why we have much more elaborate descriptions of hell than heaven in religious texts. But the fact of the matter is that what we’ve learned from science and technology so far is that if we can actually get this right, we have such a mind-blowingly amazing potential, where we could help life flourish like never before — living healthy, wealthy, inspiring lives. Not just for the next election cycle here on Earth; we could go on for billions of years. We could help life spread, if we’re so inclined, into the cosmos and do even more amazing things. Not even the sky is our limit. It’s just so inspiring and exciting to think about this, that I want to encourage everyone listening to this to really ask themselves what positive future vision they’re really on fire about — because the more we can articulate this and share with our friends, the more likely we are to actually live in that future.
Rob Wiblin: We’ll have a whole lot more about what’s to be done and potential career options for people who want to make a difference to this in coming episodes. My guest today has been Max Tegmark. Thanks so much for coming on The 80,000 Hours Podcast, Max.
Max Tegmark: Thank you so much.
Rob’s outro [02:56:09]
Rob Wiblin: If you’re interested in working at Max’s Future of Life Institute, there are a few roles up at the moment on the 80,000 Hours job board.
One is for a Program Manager for Autonomous Weapons (Europe), where your responsibilities would include tracking positions of European countries on an autonomous weapons treaty, and monitoring UN discussions in Geneva.
They’re also hiring a Social Media Manager to represent their organisation across a range of social channels and to enhance their online presence.
And finally, one that might be especially interesting to listeners of this show — the role of new Podcast Host and Director for the Future of Life Institute Podcast.
You can find all of those at https://80000hours.org/job-board/
And just wanted to add a quick plug for our other podcast — 80k After Hours.
We’re ramping up content over there, and over the next six weeks you can expect to hear interviews I’ve done with Kuhan Jeyapragasan on EA community building, and Andrés Jiménez Zorrilla on Shrimp Animal Welfare, as well as audio versions of our articles on space governance and founders of new projects tackling top problems.
It’s a place where we feel both more free to experiment, and more comfortable with making content for narrower audiences — but if you’re a fan of this show, I’m sure you’ll find something you enjoy.
In case you missed our earlier releases, you could also go back and listen to Alex Lawsen on his advice to students, Michelle and Habiba on what they’d tell their younger selves and the impact of the 1-1 team, Clay Graubard and Robert de Neufville on forecasting the war in Ukraine, and me and Keiran on the philosophy of The 80,000 Hours Podcast.
You can find that show anywhere you listen to this one — just search for ’80k After Hours.’
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
Audio mastering and technical editing for this episode by Ben Cordell.
Full transcripts and an extensive collection of links to learn more are available on our site and put together by Katy Moore.
Thanks for joining, talk to you again soon.
Related episodes