Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere
By Robert Wiblin and Keiran Harris · Published October 12th, 2023
On this page:
- Introduction
- Highlights
- Articles, books, and other media discussed in the show
- Transcript
  - Cold open [00:00:00]
  - Rob’s intro [00:01:07]
  - The interview begins [00:04:01]
  - The risk of autocratic lock-in due to AI [00:10:02]
  - The state of play in AI policymaking [00:13:40]
  - China and AI [00:32:12]
  - The most promising regulatory approaches [00:57:51]
  - Transforming the world without the world agreeing [01:04:44]
  - AI Bill of Rights [01:17:32]
  - Who’s ultimately responsible for the consequences of AI? [01:20:39]
  - Policy ideas that could appeal to many different groups [01:29:08]
  - Tension between those focused on x-risk and those focused on AI ethics [01:38:56]
  - Communicating with policymakers [01:54:22]
  - Is AI going to transform the labour market in the next few years? [01:58:51]
  - Is AI policy going to become a partisan political issue? [02:08:10]
  - The value of political philosophy [02:10:53]
  - Tantum’s work at DeepMind [02:21:20]
  - CSET [02:32:48]
  - Career advice [02:35:21]
  - Panpsychism [02:55:24]
  - Rob’s outro [03:03:47]
- Learn more
- Related episodes
If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space?
That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions.
My concern is that if we don’t approach the evolution of collective decision making and government in a deliberate way, we may end up inadvertently backing ourselves into a corner, where we have ended up on some slippery slope — and all of a sudden, let’s say, autocracies on the global stage are strengthened relative to democracies.
Tantum Collins
In today’s episode, host Rob Wiblin gets the rare chance to interview someone with insider AI policy experience at the White House and DeepMind who’s willing to speak openly — Tantum Collins.
They cover:
- How AI could strengthen government capacity, and how that’s a double-edged sword
- How new technologies force us to confront tradeoffs in political philosophy that we were previously able to pretend weren’t there
- To what extent policymakers take different threats from AI seriously
- Whether the US and China are in an AI arms race or not
- Whether it’s OK to transform the world without much of the world agreeing to it
- The tyranny of small differences in AI policy
- Disagreements between different schools of thought in AI policy, and proposals that could unite them
- How the US AI Bill of Rights could be improved
- Whether AI will transform the labour market, and whether it will become a partisan political issue
- The tensions between the cultures of San Francisco and DC, and how to bridge the divide between them
- What listeners might be able to do to help with this whole mess
- Panpsychism
- Plenty more
Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore
Highlights
The risk of autocratic lock-in due to AI
Tantum Collins: A prompt that I think about a lot that sometimes helps frame this is: If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space? That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions.
Now, that’s obviously a thought experiment that’s removed from the real world. Here, things are messier. But my concern is that if we don’t approach the evolution of collective decision making and government in a deliberate way, we may end up inadvertently backing ourselves into a corner, where we have ended up on some slippery slope — and all of a sudden, let’s say, autocracies on the global stage are strengthened relative to democracies.
Rob Wiblin: Yeah, I guess it’s very natural to worry that countries that are already autocratic would use these tools to engage in a level of monitoring of individuals that currently would be impractical. You could just constantly be checking all of the messages that people are sending and receiving, or using the fact that we have microphones in almost every room to have these automated systems detecting whether people are doing anything that is contrary to the wishes of the government. And that could just create a much greater degree of lock-in even than there is now.
Are you also worried about these kinds of technologies being abused in countries like the United States or the UK in the medium term?
Tantum Collins: I’m certainly not worried in the medium term about existing democracies like, let’s say, the US and the UK, becoming something that we would describe as autocratic. Perhaps another way of reframing it would be: I worry that we’ve already left opportunities on the table, and that the number of opportunities we will end up leaving on the table could grow: first, opportunities to make government more effective in a sort of ideology-agnostic sense — doing things on time and in ways that are affordable and so on — and second, opportunities to make these institutions more democratic, say, by binding them to reflective popular will.
And we can look at even contemporary democracies: the bitrate of preference communication has remained more or less the same for a long time, while government capacity has expanded significantly. In that sense, we’ve sort of already lost some level of democratic oversight. And this is something that, no, I’m not worried about these countries becoming the PRC — but I do think that there’s lots of stuff that we could do to improve the degree to which we instantiate the principles of democracy.
What do people in government make of AI x-risk concerns?
Tantum Collins: Government is always very big and distributed, so I wouldn’t say this is an authoritative recounting, but I’m happy to give my take from the sliver of things that I’ve seen. Obviously the institution I’m most familiar with is the US government. I think that to some extent this can generalise, especially for other closely allied governments — so let’s say Five Eyes and G7 and so on.
It is certainly the case that the amount of attention dedicated towards AI in government has increased significantly over the past couple of years. And that includes, even just proportionately, an increase in the consideration given to risks. That includes things that people would categorise as x-risk, as well as all kinds of regular, active, ongoing harms: algorithmic discrimination, interpretability issues and so on. So all of this is getting more attention.
I think in particular, some of the things that people who are worried about x-risk tend to focus on actually fit into preexisting national security priorities pretty well. So for instance, if we think about cyber-related x-risk having to do with AI, or AI-related biorisk: these are categories of harms that, although previously not viewed mainly through the lens of AI, plenty of institutions and people in the US government and in other governments have been worried about for a long time. And I think it’s safe to say that for almost everyone in almost all of these domains, AI risk is now top of mind.
That doesn’t mean that if you go into the White House, you will see people doing Open Phil-style very rigorous rankings based on importance and neglectedness and tractability and so on. And it doesn’t mean that the people whose work ends up reducing these risks would even articulate it in the way that someone in, let’s say, the EA community would. But I do think that increasingly a lot of this work in government sort of dovetails with those priorities, if that makes sense.
Rob Wiblin: Yeah, that does make sense. I think it’s also quite sensible that this is maybe the extinction risk or the catastrophic risk that governments are turning their attention to first, because it seems like the potential misuse of large language models or other AI models to help with bioterrorism or development of bioweapons or to engage in cyberattacks seems like something that could be serious in the next few years. I guess we don’t know how serious it will be; we don’t know how many people will actually try it, but it does seem like an imminent threat. And so people should be thinking about what we can do now.
Tantum Collins: Yeah, exactly. So I’m happy to say that there are absolutely smart, capable people working in government who are worried about those things. If that helps you and the listeners sleep easily at night.
Misconceptions about AI in China
Tantum Collins: I think that a lot of news coverage of China tends to fluctuate between extremes. And this is true in AI as well as more broadly, where there will be a hype cycle that is, “The rise of China is inevitable,” and then there will be a hype cycle that says, “A centralised system was never going to work. Why did we ever take this threat seriously?” And of course, almost always the reality is somewhere in the middle.
When China began making investments in AI, in particular with the 2017 release of the State Council’s AI-focused development plan, I think that, for a lot of people, was the first time they began paying attention to what China-related AI efforts might look like. And in some critical ways, China is, of course, different from the US and other Western countries. I think that initially there was a moment of not fully appreciating how different the research culture, and in particular the relationship between companies and the government, is in China relative to Western countries.
But now I worry that things have gone too far, and that people have these caricatured versions in their minds of what things look like in China. There are many of these, but to list a few: there is a meme that I think is inaccurate and unhelpful, that China can copy but not innovate. Obviously, historically, China has been responsible for a huge amount of scientific innovation. And recently the number of AI papers coming out of China has increased significantly, to more than any other country’s. I mean, there are loads of examples of very impressive, innovative scientific achievements coming out of China.
In particular, to cite Jeff again, he has written an interesting paper that describes the difference between innovation and diffusion of technologies in terms of the effect each has on national power. There are some countries that historically played a big role in early industrial technology, but didn’t manage to diffuse it across the country effectively enough to, for instance, make the most of it militarily or in terms of economic growth. And there are others that did the inverse. So in the 1800s, there was a lot of commentary that the US was terrible at creating things, but was pretty good at copying and diffusing.
Jeff has a take — he has some interesting ways of measuring this — that today China is actually, in a proportional sense, better at innovation and worse at diffusion. And that one big strength that the United States has from a competitive angle is that it’s actually much better at diffusion. I think that will strike a lot of people as being the inverse of what they think, based on having read some stories about high speed rail and solar cell production and so on. They have this meme that China can copy these things, diffuse them across the country, and reduce the cost of production, but isn’t able to create things.
I think that’s an area where I can see where that stereotype came from, based on the specific economic investments that China made in the 1990s and 2000s, but I think it is inaccurate — and people risk misunderstanding how things look if they buy into that.
The most promising regulatory approaches
Tantum Collins: I think there are a few level-zero, foundational things that make sense to do. One is that, at the moment, especially in the West, there are major disconnects between the way that people in government use language and think about things and the way that people in the tech world do. So one important step zero is just increasing communication between labs and governments, and I think recently there’s been a lot of positive movement in that direction.
A second and related thing, and this somewhat ties back to these democratisation questions, is that even under the most competent technocracy, I would be worried about a process that doesn’t involve a significant amount of public consultation, given how general purpose these systems are, and how pervasive the effects that they could have on our lives will be. And so I think that government has a lot of work to do — both in terms of reaching out to and engaging the AI community, and also in terms of engaging the general public.
There’s been a lot of cool work in this direction recently. I’d highlight what The Collective Intelligence Project has undertaken. They’ve led a series of what they call “alignment assemblies”: essentially exercises designed to engage large, ideally representative subsets of the population with questions about what kinds of AI developments worry them the most.
Also, recently there’s been some interest in this stuff from labs. OpenAI has this democratic inputs to AI grant that people have just applied for. And there are several labs working on projects in the vein of: how can we use LLMs in particular to facilitate larger-scale deliberative processes than before? One of the projects I worked on when I was at DeepMind — and that I’m actually still collaborating on with some of my former colleagues there — is something in this direction.
So those would be some very basic steps, before even landing on specific policies, that I think are important. Beyond that, I think there are some areas that are relatively uncontroversially good. To the extent that we think AI will, at some level, be a public good, and that private market incentives will not sufficiently incentivise the kind of safety and ethics research that we want to happen, I think allocating some public funding for that stuff is a good idea. And that covers the full gamut of x-risk and alignment work, more present-day prosaic ethics and impact considerations, interpretability research: the full list.
And a final thing that I think is as close to a no-brainer as you can get: clearly some kind of benchmarking and standards regime is important — because right now it’s sort of the Wild West, and these things are just out there. Not only is it difficult to measure what these systems can and cannot do, but there is almost nothing in the way of widely known, trusted intermediary certifications that a nonexpert user can engage with to get a feel for how and when they should use a given system.
So there are a whole bunch of different proposals — some involve the government itself setting up regulatory standards; some involve some kind of third-party verification — but the key is having something. That could be model cards, or it could be the equivalent of nutritional labels; there’s a whole range of options there. But at the moment I think a lot of people are sort of flying blind.
Who’s ultimately responsible for the consequences of AI?
Rob Wiblin: It’s very unclear to me how responsibility for the consequences of AI is split across various different parts of the US government. It feels a bit like there’s no identifiable actor who really has to think about this holistically. Is that right?
Tantum Collins: Yes, this is true. And in part this gets back to the issue that AI is, A, new; B, so general that it challenges the taxonomy of government stuff; and C, something that government has not until recently engaged with meaningfully in its current form. So various government research projects over time have used some kind of AI, but government was not really in any meaningful way driving the past decade of machine learning progress. And all of this means that there are a tonne of open questions about how government thinks about AI-related responsibilities and where those sit.
Rob Wiblin: Who are the different players though, who at least are responsible for some aspect of this?
Tantum Collins: So within the White House, the main groups would be the Office of Science and Technology Policy, where I worked before. That has within it a number of different teams, several of which are quite interested in AI. There is one small group that is explicitly dedicated to AI; there is a national security team, which is where I sat, that handles a lot of AI-related things; and then there is the science and society team, which produced the Blueprint for an AI Bill of Rights. These groups work together a fair bit, and each one has a slightly different outlook and set of priorities related to AI.
Then you have the National Security Council, which has a tech team within it that also handles a fair amount of AI stuff. At the highest level, OSTP historically has been a bit more long-run, conceptual, and research-y, putting together big plans for what the government’s approach should be to funding cures for a given disease, let’s say. The NSC has traditionally been closer to the decision making of senior leaders. That has the benefit of, in some immediate sense, being higher impact, but it is also more reactive and less focused on long-run thinking. Again, these are huge generalisations, but those are two of the groups within the White House that are especially concerned with AI.
These days, of course, because AI is on everyone’s mind, every single imaginable bit of the government has released some statement that references AI. But in terms of the groups that have large responsibility for it, then of course there is the whole world of departments and agencies, all of which have different AI-related equities.
So there’s NIST, which does regulatory stuff. There’s the National Science Foundation, which of course funds a fair amount of AI-related research. There’s the Department of Energy, which runs the national labs. And the name is slightly misleading because they don’t just do energy stuff.
The Department of Energy is actually this incredibly powerful and really, really big organisation. [Before I came into government] I thought they did wind farms and things. But it turns out that, because they’re in charge of a lot of nuclear development and security, they actually, especially in the national security space, have quite a lot of authority and a very large budget.
Of course, in addition to all the stuff in the executive, then there’s Congress — which has at various times thrown various AI-related provisions into these absolutely massive bills. So far, I believe both the House and the Senate have AI-focused committees or groups of some kind. I’m not super clear on what they’re doing, but obviously there is also the potential for AI-related legislation.
Anyway, the list goes on, as you can imagine. Obviously the Department of Defense and the intelligence community also do various AI-related projects. But yeah, at the moment there isn’t a clear coordinating entity. There have been a number of proposals. One that’s been in the news is Sam Altman suggested during his testimony that there should be a new agency created to focus specifically on AI. I think it remains to be determined whether that happens and what that looks like.
How technical experts could communicate better with policymakers
Tantum Collins: This is a great question. It’s actually one area where I think LLMs could be very valuable, to go back to this parallel between translation across actual languages and translation across academic or professional vernaculars. I think that we could save a lot of time by fine-tuning systems to do better mappings of “explain this technical AI concept to someone who… is a trained lawyer.” And often you then find that there are sort of these weird overlaps. Not necessarily full isomorphisms, but a lot of the conceptual tooling that people have in really different domains accomplishes similar things, and can be repurposed to explain something in an area that they’re not too familiar with. So this is an area where I think that there is a lot of cool AI-driven work that can be done.
In terms of practical advice to people trying to explain things, this is tricky, because there are many ways in which you want to frame things differently. I’m trying to think of a set of principles that capture these, because a lot of it is just very specific word choice.
Maybe a few off the top of my head would be: One, just read political news and some policy documents to get a feel for how things are typically described; that should be a decent start. Two, in the policy space you obviously want to reduce the use of technical language, but also of the sort of philosophical abstraction that can be helpful in a lot of other domains. So the more that things can be grounded in concrete concepts, and in incentives that will be familiar to people, the better. In the policy space, a lot of that has to do with thinking about the domestic and foreign policy considerations that are relevant to this.
I mean, obviously it depends on the group — like, is it a group of senators or people at OSTP or something — but broadly speaking, if you read global news, you’ll get a sense of what people care about. A lot of people are really worried about competition with China, for better or worse. So to ground this, one example would be: to the extent that the China-competition framing is inevitable, one can harness it to make the case that, for instance, leading in AI safety could be excellent for the scientific prestige of a country, right? It could improve a country’s brand as a place where things are done safely and reliably, and where you can trust services and so on. You can take something that a policymaker might otherwise dismiss as heavy techno-utopianism, and if you are willing to cheapen yourself a little bit in terms of how you sell it, you can get more attention.
Obviously this is a sliding scale, and you don’t want to take it too far. But I think a lot can be accomplished by thinking about what the local political incentives are that people have.
Tension between those focused on x-risk and those focused on AI ethics
Tantum Collins: I have a few low-confidence thoughts. One is that there are some areas where there is, I think, the perception of some finite resource — and maybe that’s money or maybe it’s attention. And I think there is an understandable concern on the AI ethics side that there is sometimes a totalising quality to the way that some people worry about existential risks. At its best, I think that x-risk concern is expressed in ways that are appropriately caveated and so on; at its worst, it can imply that nothing else matters, on the basis of running some set of hypothetical numbers. Personally, I’m a bit of a pluralist, and so I don’t think that everything comes down to utils. And I can see why the outlook of “if you reduce existential risk by X percent, then this so dwarfs every other concern” rubs people the wrong way.
A second thing that I think sometimes brings some of these views or these communities into conflict is the idea that there are some types of behaviour — whether that’s from labs or proposed policies — that could help with one set of concerns at the expense of another. I’m thinking in particular of things that would have some security benefits that people who are concerned about x-risk value very highly, but that might come at the cost of other things that we value in a pluralistic society — for instance, openness and competition.
A lot of the policies that we haven’t talked about yet — so far we’ve been focusing on the no-brainers that almost everyone should get behind — are very tricky ones that you can see a case for and a case against, and often they pit these values against one another. If you’re really, really, really worried about existential risk, then it’s better to have fewer entities coordinating this stuff, and to have those be fairly consolidated and work very closely with the government.
If you don’t take existential risk that seriously — and if instead, you are comparatively more worried about having a flourishing and open scientific ecosystem, making sure that small players can have access to cutting-edge models and capabilities and so on; and a lot of these things historically have correlated with the health of open and distributed societies — then those policies look really different.
I think that the question of how we grapple with these competing interests is a really difficult one. And I worry that, at its worst, the x-risk community — which broadly, I should say, I think does lots of excellent work, and has put its finger on very real concerns — can have this sort of totalising attitude that refuses to grapple with a different set of frameworks for assessing these issues. And I think that’s sometimes exacerbated by the fact that it is, on average, not a super-representative community, geographically or ethnically and what have you. I think that means it’s easy to be blind to some of the things that other people, for good reason, are worried about.
That would be my very high-level framing of it. But the bottom line is that I very much agree with your sentiment that most of the conflict between these groups is counterproductive. And if we’re talking about the difference between pie splitting and pie expansion, there’s a huge amount of pie expansion and a whole bunch of policies that should be in the collective interest. And especially since I think the listenership here is probably a little bit more EA-skewed, I’d very much encourage people to engage with — this sounds so trite — but really to listen to some of the claims from the non-x-risk AI ethics community, because there is a lot of very valuable stuff there, and it’s just a different perspective on some of these issues.
Articles, books, and other media discussed in the show
Tantum’s work:
- Democracy on Mars — essays about applications of machine learning to democratic systems
- White House Operator Tantum Collins on AI regulation and geopolitical impacts — interview on the Unsupervised Learning podcast
Recent policy developments in AI:
- Iconic Bletchley Park to host UK AI Safety Summit in early November
- Initial £100 million for expert taskforce to help UK build and adopt next generation of safe AI
- UK to invest £900m in supercomputer in bid to build own ‘BritGPT’
- Blueprint for an AI Bill of Rights from the US Office of Science and Technology Policy (OSTP)
- A Defense Production Act for the 21st century by Jamie Baker for CSET
- OpenAI CEO Sam Altman testifies before Senate Judiciary Committee
- AI Risk Management Framework from the US National Institute of Standards and Technology
- The EU AI Act
Trends in AI development:
- Investment trends and compute trends from Epoch AI
- AI and compute from OpenAI
- The AI index report: Measuring trends in artificial intelligence from Stanford University
AI and policy developments in China:
- Jeffrey Ding’s research and ChinAI newsletter
- Jordan Schneider’s research, ChinaTalk newsletter and ChinaTalk podcast
- From the Center for Security and Emerging Technology (CSET):
- Comparing US and Chinese contributions to high-impact AI research by Ashwin Acharya and Brian Dunn
- Counting AI research: Exploring AI research output in English- and Chinese-language sources by Daniel Chou
- All of CSET’s work on supply chains
- China and the United States: Unlikely partners in AI by Edmund L. Andrews
- China now publishes more high-quality science than any other nation – should the US be worried? by Caroline Wagner
- Has China caught up to the US in AI research? An exploration of mimetic isomorphism as a model for late industrializers by Chao Min et al.
- China’s AI regulations and how they get made by Matt Sheehan
- Full translation: China’s ‘New Generation Artificial Intelligence Development Plan’ (2017) from Stanford University’s DigiChina project
- Decoding China’s escalation of the chip war by Megha Shrivastava
- Choking off China’s access to the future of AI by Gregory C. Allen
- China builds world’s fastest supercomputer without US chips by Patrick Thibodeau
- China spends more on controlling its 1.4bn people than on defense from Nikkei Asia
- The truth about China’s social credit system — video from PolyMatter
- 80,000 Hours career review: China-related AI safety and governance paths — plus, check out Jordan Schneider’s early-career guide to getting started in China policy
Democratic principles in AI policy work:
- Alignment assemblies at The Collective Intelligence Project
- GETTING-Plurality (Governance of Emerging Technology and Tech Innovations for Next-Gen Governance through Plurality) — a multidisciplinary research network at Harvard University
- Hélène Landemore’s work, including an interview on The Ezra Klein Show: A radical proposal for true democracy
- Danielle Allen’s work, including an interview on The Ezra Klein Show: This philosopher wants liberals to take political power seriously
- Democratic inputs to AI — OpenAI Inc.’s grant programme to fund experiments in setting up a democratic process for deciding what rules AI systems should follow
- Fine-tuning language models to find agreement among humans with diverse preferences by Michiel A. Bakker et al.
- Opportunities and risks of LLMs for scalable deliberation with Polis by Christopher T. Small et al.
- Taiwan is making democracy work again. It’s time we paid attention by Carl Miller
- How Taiwan’s unlikely digital minister hacked the pandemic by Andrew Leonard
- Futarchy: Vote values, but bet beliefs by Robin Hanson
- The Extinction Tournament on Astral Codex Ten
Effect of AI on the labour market and economy:
- Erik Brynjolfsson’s research, including work coauthored with Daniel Rock and Chad Syverson
- The dynamo and the computer: An historical perspective on the modern productivity paradox by Paul A. David
- Engines of power: Electricity, AI, and general-purpose, military transformations by Jeffrey Ding and Allan Dafoe
Next steps for working in US AI policy and governance:
- Horizon Fellowships
- American Association for the Advancement of Science Fellowships
- TechCongress Fellowships
- 80,000 Hours career review: AI governance and coordination
- Policy Entrepreneurship at the White House: Getting Things Done in Large Organizations by Tom Kalil — and also check out Tom’s podcast episode
Other 80,000 Hours podcast episodes:
- Ezra Klein on existential risk from AI and what DC could do about it
- Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite
- Lennart Heim on the compute governance era and what has to come after
- Holden Karnofsky on how AIs might take over even if they’re no smarter than humans, and his four-part playbook for AI risk and the most important century
- Jan Leike on OpenAI’s massive push to make superintelligence safe in 4 years or less
- Tom Davidson on how quickly AI could transform the world
- Ajeya Cotra on accidentally teaching AI models to deceive us
- Jeff Ding on China, its AI dream, and what we get wrong about both
- Miles Brundage on the world’s desperate need for AI strategists and policy experts
- Allan Dafoe on trying to prepare the world for the possibility that AI will destabilise global politics
- Audrey Tang on what we can learn from Taiwan’s experiments with how to do democracy
- Tom Kalil on how to have a big impact in government & huge organisations, based on 16 years’ experience in the White House
- Bruce Schneier on how insecure electronic voting could break the United States — and surveillance without tyranny
- Robert Long on why large language models like GPT (probably) aren’t conscious
- David Chalmers on the nature and ethics of consciousness
- Joe Carlsmith on navigating serious philosophical confusion
Everything else:
- How to count animals, more or less by Shelly Kagan
- Can we talk to whales? by Elizabeth Kolbert
- Say hi to the river: Conversational AI for non-human entities by Shu Yang Lin
- If materialism is true, the United States is probably conscious by Eric Schwitzgebel
Transcript
Table of Contents
- Cold open [00:00:00]
- Rob’s intro [00:01:07]
- The interview begins [00:04:01]
- The risk of autocratic lock-in due to AI [00:10:02]
- The state of play in AI policymaking [00:13:40]
- China and AI [00:32:12]
- The most promising regulatory approaches [00:57:51]
- Transforming the world without the world agreeing [01:04:44]
- AI Bill of Rights [01:17:32]
- Who’s ultimately responsible for the consequences of AI? [01:20:39]
- Policy ideas that could appeal to many different groups [01:29:08]
- Tension between those focused on x-risk and those focused on AI ethics [01:38:56]
- Communicating with policymakers [01:54:22]
- Is AI going to transform the labour market in the next few years? [01:58:51]
- Is AI policy going to become a partisan political issue? [02:08:10]
- The value of political philosophy [02:10:53]
- Tantum’s work at DeepMind [02:21:20]
- CSET [02:32:48]
- Career advice [02:35:21]
- Panpsychism [02:55:24]
- Rob’s outro [03:03:47]
Cold open [00:00:00]
Tantum Collins: There is a meme that I think is inaccurate and unhelpful: that China can copy but not innovate. But historically, China has been responsible for a huge amount of scientific innovation. And the number of AI papers coming out of China has increased significantly, to more than any other country’s. I mean, there are loads of examples of very impressive, innovative scientific achievements coming out of China.
Jeff has a take that today China is actually, in a proportional sense, better at innovation and worse at diffusion. And that one big strength that the United States has from a competitive angle is that it’s actually much better at diffusion. I think that will strike a lot of people as being the inverse of what they think, based on having read some stories about high speed rail and solar cell production and so on. They have this meme that China can copy these things, diffuse them across the country, and reduce the cost of production, but isn’t able to create things.
And so I think that’s an area where I can see where that stereotype came from, based on the specific economic investments that China made in the 1990s and 2000s, but I think it is inaccurate — and people risk misunderstanding how things look if they buy into that.
Rob’s intro [00:01:07]
Rob Wiblin: Hey listeners, Rob here, head of research at 80,000 Hours.
I often tell people that we tend not to interview folks working in policy roles, because even if they know interesting things, they usually can’t share them.
So we’re very lucky to have caught Tantum ‘Teddy’ Collins during a brief window in between two stints at the White House working on security issues and emerging technology — in this case, frontier AI models.
Teddy is one of those people who keeps a fairly low profile but is super respected by people in the know, as well as being the kind of person you’d love to grab a beer with.
He’s accumulated a lot of hard-won knowledge from personal experience both at the White House and at DeepMind, which he was happy to share in this interview.
Teddy and I talk about:
- How AI could strengthen government capacity, and how that’s a double-edged sword
- How new technologies force us to confront tradeoffs in political philosophy that we were previously able to pretend weren’t there
- To what extent policymakers take different threats from AI seriously
- Whether the US and China are in an AI arms race or not
- Whether it’s OK to transform the world without much of the world agreeing to it
- The tyranny of small differences in AI policy
- Disagreements between different schools of thought in AI policy, and proposals that could unite them
- How the US AI Bill of Rights could be improved
- Whether AI will transform the labour market, and whether it will become a partisan political issue
- The tensions between the cultures of San Francisco and DC, and how to bridge the divide between them
- What listeners might be able to do to help with this whole mess
- And finally, panpsychism
Before that, I just want to flag a pretty notable slate of job openings that are likely to be of particular interest to big fans of this show, and indeed big fans of this episode. Those are 16 new roles at Open Philanthropy’s Global Catastrophic Risks team across AI Governance and Policy, AI Technical Safety, Biosecurity and Pandemic Preparedness, Global Catastrophic Risks Capacity Building, and Global Catastrophic Risks Cause Prioritization.
For those who don’t know, Open Philanthropy is a large charitable foundation and much of their work focuses on the problems we discuss the most on this show. I should add a conflict-of-interest disclaimer that Open Philanthropy is one of 80,000 Hours’ biggest donors. Some recent guests who work or worked there include Holden Karnofsky, Joe Carlsmith, Ajeya Cotra, and Tom Davidson. I’ll say more about those roles at the end of the episode, but if you’d like to look directly just go to openphilanthropy.org and click ‘We’re hiring!’ at the top right of the page.
Without further ado, I bring you Teddy Collins.
The interview begins [00:04:01]
Rob Wiblin: Today I’m speaking with Tantum Collins. Teddy, as he’s often called, was Assistant Director for Technology Strategy at the White House Office of Science and Technology Policy back in 2021 and 2022. Before that, he was a research lead of the Meta-Research team at DeepMind, and a nonresident Fellow at the Center for Security and Emerging Technology in DC. He received a BA in global affairs from Yale and, as a Marshall Scholar, earned a master’s in international relations at Cambridge and studied philosophy of science at LSE — a true Renaissance man. In 2014, together with General Stanley McChrystal, he coauthored the bestselling book Team of Teams.
Thanks for coming on the podcast, Teddy.
Tantum Collins: Thank you so much for having me.
Rob Wiblin: I hope to talk about how AI is currently seen among policymakers, and to what extent the US is in a race with China over AI advances. But first, we’re in London at the moment, where you’ve been living recently. But I understand you’re just about to head back to DC to start work in the White House yet again — is that right?
Tantum Collins: Yes, that’s right. I’m heading back in just a few weeks. I will be returning for another AI-focused policy role. But today I’m speaking 100% as a private citizen. None of this is reflective of the US government position, and I have not yet begun the role. Obligatory disclaimer.
Rob Wiblin: So you left in 2022, right?
Tantum Collins: Yeah, about six months ago. It will end up having only been a short stint outside of government.
Rob Wiblin: I think you weren’t intending to come back quite so quickly, but people have persuaded you to return?
Tantum Collins: Yes.
Rob Wiblin: This is sort of an Avengers situation. They need to bring the team back together for one last policy formation.
Tantum Collins: Yeah, exactly. One last job. If you imagine the nerdiest possible version of that, then something along those lines. But I have not yet been shown to a flying aircraft carrier, so we’ll see.
Rob Wiblin: There’s still time. What work is this trip to the White House interrupting? What were you doing up to now?
Tantum Collins: So an area I’m really interested in, and where I’ve been focusing my efforts for the past six months — I was planning on working on this in an academic capacity for the coming year, so I had tentatively accepted an academic fellowship — is essentially looking at the effect that a number of technologies, including machine learning, can have on the way that governments function. In particular: Are there ways that machine learning can increase our ability to realise the goals of democracy?
At the highest level, this is a problem that I think is quite comparable to AI safety considerations, actually: in AI, people are quite worried that the greater the capabilities, the more important it is to have effective alignment. And I think we see a similar situation in government: there is a balance between, on the one hand, expanding state capacity, and on the other hand, aligning the actions of the state with the democratic consensus or popular sovereignty, whatever you want to call it.
I think there are a lot of ways that AI, among other technologies, could significantly strengthen state capacity. That could be great. It could also be awful — and open the door to autocracy — if we don’t figure out ways to increase the degree to which that is aligned with the popular will. So I’m interested in a series of questions that are at the intersection of technical and political philosophical considerations there.
Rob Wiblin: OK, let’s see if I’ve understood that. So you’re doing an analogy between one concern that we might have about very advanced AI is that it’s going to be incredibly capable — and given that it will have enormous capabilities, it will be extremely important that those capabilities be turned towards goals that we think are desirable. And you’re saying that similarly, AI advances could increase the capabilities of the state, increase what governments are capable of doing — which could be good, but it also could be bad if the goals that the government is then pursuing, for whatever reasons, are not conducive to wellbeing or whatever outcomes we desire.
Tantum Collins: Yeah, exactly. So one way of thinking about this is, you could have a heuristic of: How many decisions is the government making versus how rich is the information flow — the sort of binding oversight flow — from the general public to those decisions?
And already, over time, we’ve seen that government capacity has increased significantly. This is, in most cases, a really good thing. But it has also to some extent decreased this ratio: people still provide today roughly the same amount of guidance that they did decades ago, and in some cases even centuries ago. We vote at the same frequency. Ballots, from an information-theoretic perspective, are incredibly compressed. That kind of makes sense if you’re making decisions for a local community and things move slowly. But now we have these governments whose remit has expanded significantly, while the bandwidth of the information flow — through which the general public guides that remit — has remained more or less the same, with some exceptions.
AI could take that remit much, much, much further and significantly expand the set of things that government could do. And I think that there are many ways that AI could also, in parallel, enrich the ability of citizens to guide those actions. But unless we pursue that in a proactive way, I worry that we’ll end up in a situation where things get really out of balance.
Rob Wiblin: Right. Studying history, you realise that governments in the past, in the 18th century and earlier, were just extraordinarily incapable by the standards of the modern world. That they had no idea what was going on through most of the country. Sometimes they would kind of know how many people were in the capital city or something like that, but beyond that range, it was kind of delegated. Things that we now think of as central government functions — related to education or healthcare or the welfare state, say — those weren’t on the table for the state to do, because it was just inconceivable that they would actually be able to deliver such services.
I guess you’re saying there could be a similar massive step up in what the state in principle would be capable of doing, due to artificial intelligence just increasing the amount of monitoring and observation and data processing that’s possible?
Tantum Collins: Exactly. All of those things, being able to make better predictions, possibly being able to better control complex distributed systems that have previously fallen outside of the government remit. There are all kinds of ways that this could expand.
And also the stuff that I think is most interesting are ways that AI could strengthen things on the other side of this equation. Which is to say: How can we provide richer, more regular information about people’s all-things-considered political preferences? But I think that is something that, A, even in the best case creates all kinds of thorny philosophical questions, and B, it’s by no means guaranteed that those capabilities will emerge. And so I think it merits proactive research investment.
The risk of autocratic lock-in due to AI [00:10:02]
Rob Wiblin: The main thing I wanted to talk about today was the standard risks that people talk about from artificial intelligence, in particular focusing on the worst-case scenarios. To help frame many of the things that you might end up saying over the next couple of hours, I’m curious to know — maybe setting aside extinction for a minute — what possible negative outcomes from advances in AI most worry you. Which ones do you think about the most?
Tantum Collins: I should give the disclaimer that, as anyone who’s worked in this space will know, of course, it’s one thing in principle to say we’re trying to rank concerns by expected impact, and another thing in practice to put numbers on those things. And so I wouldn’t die on the hill of this is going to be, in expectation, the most harmful thing short of x-risk that we will face.
But one thing that I do care about a lot is essentially a variant of what we were just talking about, which is to say the risks of autocratic lock-in. And a prompt that I think about a lot that sometimes helps frame this is: If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space? That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions.
Now, that’s obviously a thought experiment that’s removed from the real world. Here, things are messier. But my concern is that if we don’t approach the evolution of collective decision making and government in a deliberate way, we may end up inadvertently backing ourselves into a corner, where we have ended up on some slippery slope — and all of a sudden, let’s say, autocracies on the global stage are strengthened relative to democracies.
Rob Wiblin: Yeah, I guess it’s very natural to worry that countries that are already autocratic would use these tools to engage in a level of monitoring of individuals that currently would be impractical. You could just constantly be checking all of the messages that people are sending and receiving, or using the fact that we have microphones in almost every room to have these automated systems detecting whether people are doing anything that is contrary to the wishes of the government. And that could just create a much greater degree of lock-in even than there is now.
Are you also worried about these kinds of technologies being abused in countries like the United States or the UK in the medium term?
Tantum Collins: I’m certainly not worried in the medium term about existing democracies like, let’s say, the US and the UK, becoming something that we would describe as autocratic. Perhaps another way of reframing it would be: I worry that we’ve already left opportunities on the table, and that the number of opportunities we will end up leaving on the table could grow: first, opportunities to make government more effective in a sort of ideology-agnostic sense — doing things on time and in ways that are affordable and so on — and second, opportunities to make these institutions more democratic, say, by binding them to reflective popular will.
And we can look at even contemporary democracies: as we were talking about before, the bitrate of preference communication has remained more or less the same for a long time, while, as you noted, government capacity has expanded significantly. In that sense, we’ve sort of already lost some level of democratic oversight. And this is something that, no, I’m not worried about these countries becoming the PRC — but I do think that there’s lots of stuff that we could do to improve the degree to which we instantiate the principles of democracy.
The state of play in AI policymaking [00:13:40]
Rob Wiblin: OK, let’s now get a bit of an update on the state of play: which people are involved in policymaking in the US and UK, and I guess to some extent the EU as well; and what those people are thinking about AI — again, with a focus on extinction risk and worst-case scenarios, rather than regulation of consumer products as they exist right now.
I guess the high-level thing that I’m really curious to know is: because of who I am, and because of the level of interest that I have in existential risk and extinction risk specifically, I just constantly see stuff about that all the time, and I hear from people who are worried about it. But that means that I have an extremely non-representative sample of people talking to me and content being sent to me. So I find it very hard to tell, if I were to step outside that bubble, what people who are involved in government in various different capacities — from politicians to bureaucrats to people in think tanks and so on, in the US and UK — make of this whole set of concerns that has become much more prominent this year. Did you have your finger on the pulse of any of these groups at all?
Tantum Collins: Well, government is always very big and distributed, so I wouldn’t say this is an authoritative recounting, but I’m happy to give my take from the sliver of things that I’ve seen. Obviously the institution I’m most familiar with is the US government. I think that to some extent this can generalise, especially for other closely allied governments — so let’s say Five Eyes and G7 and so on.
It is certainly the case that the amount of attention dedicated towards AI in government has increased significantly over the past couple of years. And that includes, even just proportionately, an increase in the consideration given to risks. That includes things that people would categorise as x-risk, as well as all kinds of regular, active, ongoing harms: algorithmic discrimination, interpretability issues and so on. So all of this is getting more attention.
I think in particular, some of the things that people who are worried about x-risk tend to focus on actually fit into preexisting national security priorities pretty well. So for instance, if we think about cyber-related x-risk having to do with AI, or AI-related biorisk: these are categories of harms that, although previously not viewed mainly through the lens of AI, plenty of institutions and people in the US government and in other governments have been worried about for a long time. And I think it’s safe to say that for almost everyone in almost all of these domains, AI risk is now top of mind.
That doesn’t mean that if you go into the White House, you will see people doing Open Phil-style very rigorous rankings based on importance and neglectedness and tractability and so on. And it doesn’t mean that the people whose work ends up reducing these risks would even articulate it in the way that someone in, let’s say, the EA community would. But I do think that increasingly a lot of this work in government sort of dovetails with those priorities, if that makes sense.
Rob Wiblin: Yeah, that does make sense. I think it’s also quite sensible that this is maybe the extinction risk or the catastrophic risk that governments are turning their attention to first, because it seems like the potential misuse of large language models or other AI models to help with bioterrorism or development of bioweapons or to engage in cyberattacks seems like something that could be serious in the next few years. I guess we don’t know how serious it will be; we don’t know how many people will actually try it, but it does seem like an imminent threat. And so people should be thinking about what we can do now.
Tantum Collins: Yeah, exactly. So I’m happy to say that there are absolutely smart, capable people working in government who are worried about those things. If that helps you and the listeners sleep easily at night.
Rob Wiblin: Well, I’m sure they’ve got it covered. I guess my perception is that if you’d gone into the White House two years ago and raised alignment problems with artificial intelligence — that AI systems could develop their own agency and their own goals that conflict with humanity’s, and might want to disempower humanity in order to achieve those goals — people maybe would have laughed. Maybe they wouldn’t, but they would have thought that this was a very quirky concern and not something that really needed to be addressed by the national security community. What would the reaction be if you raised that today?
Tantum Collins: Overall, clearly we have seen a shift in the Overton window: there are all kinds of things that seemed outlandish and sort of sci-fi, tinfoil-hat concerns a couple of years ago that are now increasingly mainstream. I still think it’s the case that in most government settings, much of this is just semantics, but if someone frames these things in ways that echo these sci-fi tropes, they will probably be taken on the margin less seriously than if they ground it in the specific types of concerns that governments have taken seriously for a long time.
So for instance, and this is totally hypothetical: Could a US adversary or a terrorist group harness specific capabilities to generate specific harms that could kill a huge number of people? Obviously, post-COVID there is increased awareness of pandemic potential, so that’s something that is top of mind for people. The same goes for anything that fits into the types of weaponisation concerns that people have had, in many cases, for decades. Think about biological and chemical warfare agents and stuff like that: the US national security community has had a longstanding interest in tracking which foreign entities have the capacity to produce that kind of stuff, and the interest in doing so. Obviously, post-9/11 with the War on Terror, the US invested hugely in this apparatus for tracking not just nation-state actors but non-nation-state groups.
And some of that aligns quite nicely. It’s probably still the case that the best framing to get attention is not going to be the language that is used in other communities — including, let’s say, the EA and AI communities, broadly speaking. But yeah, that set of concerns, if you frame it the right way, I think that now people will listen to it.
Rob Wiblin: So it sounds like misuse is squarely inside the Overton window now. Whereas with misalignment or AI going rogue, perhaps people are open to it, but it’s not something they necessarily think they need to work on anytime soon. Is that kind of right? Is there any way of framing the misalignment thing that makes it more acceptable and feel more normal to people? Because in some ways I think it is just an extension of things that we’re extremely familiar with. I think people often overrate how weird it is in some ways.
Tantum Collins: Yeah, for sure. So at one level there are principal-agent problems — which are almost a perennial challenge of democracies and interstate relations. At the other end, you have the paperclipper — where I think if you said that to a policymaker who wasn’t really deeply enmeshed in this community, and you tried to explain things by way of this thing that’s going to kill everyone in order to produce paperclips, they would probably not take you very seriously.
But as you noted, the structure of the concern is something that people have encountered loads of times. And to tie it back tangentially to the things we were talking about before, there are all kinds of ways in which concerns about institutional function mirror these things. So the principal-agent problem: lots of people have discussed the firm and capitalist incentives writ large as having these similar problems resulting from optimising for a metric that doesn’t fully instantiate all of the things that we actually care about. So in some very abstract way it’s something that people are quite familiar with.
I’ll also say there are absolutely people who are concerned about alignment problems. And I would say there is a very large number of people who are concerned about specific problems that have already been instantiated or that are very plausible. For instance, you have trained something, and you didn’t want it to be racist, and it ended up being racist. Or the ways that, for instance, ChatGPT and other LLMs often don’t do exactly what you want them to do. So I think that the number of people who are sympathetic to those types of risks is many times larger than the subset of people who would be able to say in a sentence, “Alignment risk is a serious AI concern in a general way and we should prioritise it.” So there’s a lot of grey area there, if that makes sense.
Rob Wiblin: Yeah. I’ve been thinking this year about all kinds of different analogies you can have for misuse and misalignment, like other things that we can compare it to.
One that’s been salient to me recently is the comparison with a coup. So why is it that an AI system might want to disempower another group? It’s kind of similar to the reason a military group that has all the weapons, and could take over an organisation relatively straightforwardly, might decide, if it has goals that are not the same as those of the people who currently run that organisation, to just walk in with guns pointed at them and take the thing over wholesale. That doesn’t seem weird to us. We’re familiar with how humans would do that when they have different goals than the people who are currently leading a broad apparatus.
Imagine, just for the sake of the hypothetical, that there were an AI system as capable of taking over the government as the entire military would be if it were completely united around that goal — so it would really, in some ways, be quite straightforward, because it would basically have an overwhelming capacity for violence. Then it is kind of obvious why it might opt to do that, if it didn’t have some very strong value telling it that this was an extremely wrong thing to do.
Does that analogy stand out in your mind as reasonable?
Tantum Collins: Yeah, absolutely. I think in many ways Goodhart’s law shows up in pretty much all principal-agent challenges. At the macro level, we could think of a coup. At the micro level, we can think of the manager-managee relationship: say, you give someone incentives to write as many reports as they can, or to hit certain performance goals. And almost inevitably, in optimising for something that’s parsimonious, we end up choosing something that is not complete as a characterisation of the objectives that we actually want this person or this team or this entity or this organisation to achieve.
And this shows up in government all the time, because governments are absolutely massive. And it shows up also not just within government, but in terms of governments attempting to set incentives for external actors like corporations and individuals and so on. In many ways, the whole regime of incentives created by legal systems to try to get individuals to behave in certain ways is a case study in this. And sometimes it works OK, and sometimes it backfires catastrophically.
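To make that concrete, here is a minimal toy sketch in Python. Everything in it is made up for illustration (the workers, the report qualities, the effort budget); it just shows how rewarding a parsimonious proxy (report count) can come apart from the objective we actually care about (useful work done).

```python
# Toy illustration of Goodhart's law in a manager-managee setting.
# All numbers are made up; the point is only the divergence between
# the proxy metric and the true objective.

def true_value(reports):
    """What we actually care about: total usefulness of the work."""
    return sum(r["quality"] for r in reports)

def proxy_score(reports):
    """The parsimonious metric the manager rewards: number of reports."""
    return len(reports)

def diligent_worker(effort_budget):
    # Spends effort on fewer, higher-quality reports (10 effort each).
    return [{"quality": 10.0} for _ in range(effort_budget // 10)]

def metric_gamer(effort_budget):
    # Spends the same effort churning out many near-worthless reports.
    return [{"quality": 0.2} for _ in range(effort_budget)]

if __name__ == "__main__":
    budget = 100
    for name, worker in [("diligent", diligent_worker), ("gamer", metric_gamer)]:
        reports = worker(budget)
        print(f"{name:8s}  report count = {proxy_score(reports):3d}   "
              f"true value = {true_value(reports):6.1f}")
    # The gamer dominates on the proxy (100 reports vs 10) while producing
    # far less true value: once optimised against, the metric stops
    # measuring what it was meant to stand in for.
```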
This is a bit of a tangent, but for a while, when I was in school, I did some intensive language study, and there is this cliché that language is essential as a window into culture and so on. And this is true, of course, not just between actually different languages, but between different professional vernaculars and so on. The way that people speak about something in SF and the way that they speak about it in DC will often be really different. And one thing that I’m very glad to see has changed in recent years — and that I was much more worried about five years ago — was that there were just very few people who could take something that had been formulated in, let’s say, SF speak and articulate it in a way that would be digestible to someone who is a native DC speaker.
So the kinds of stuff that you’re mentioning are exactly correct. I think that there are many, many, many of those parallels. And in some ways, that’s what has motivated my interest in this question of the design of governance institutions, because I think there are underexplored parallels there. But I won’t torpedo this too much by getting back into that.
Rob Wiblin: Yeah, it’s hard to track all of the news related to this topic at the moment. I guess in London, I hear more about UK government things organically. So the UK is planning to have this AI safety summit in November, where I think extinction and other risks are going to be on the table. And they also earmarked £100 million in funding for some sort of AI safety research, although I think it was a little bit hard to find out exactly what the details were. It sounds like you’re aware of both of these things. Do you have any opinions? Or do you have any idea of what the specifics are?
Tantum Collins: I definitely don’t know anything about the specifics beyond what’s been mentioned publicly. My general position is, I guess, the weak prior would be I think all of these are directionally good. I think that mainstreaming concerns about safety and also about a whole other set of AI ethics issues — the kinds of things, for instance, brought up in the Blueprint for an AI Bill of Rights: I think that those are unequivocally good actions.
In terms of the actual efficacy of a given programme: because AI is so general purpose, and research is so unpredictable, and different institutions conduct it in very different ways, it’s very execution dependent. So I think it’s very hard to say. I mean, £100 million sounds like a lot of money. There’s lots of good stuff that that could do. It’s also possible that very smart people will allocate it in ways that make sense a priori as research bets and end up not paying off. So we’ll have to wait and see what the actual impact is.
But I would say that from an optics perspective, it is certainly good as a reflection of the fact that government is taking these things more seriously — and hopefully that also reflects the fact that the general public is taking these things more seriously.
Rob Wiblin: Yeah. Something that occurred to me in the shower last night, thinking about this programme: if you’re a small country — or a relatively small country; say, Sweden or something — and in general the government there is worried about what effect AI might have on the world, Sweden has relatively little regulatory power to shape what OpenAI does. But it could fund, for a tiny fraction of the government budget, a whole lot of research into ways to direct AI more safely, trying to detect ways that things might go wrong and heading them off early. Sweden, at least if it’s willing to fund people actually doing the work overseas, is in roughly as good a position to do that as almost any other group. So I guess it’ll be interesting to see whether any other countries get in on this activity.
Tantum Collins: Absolutely. I think another thing that remains very underdetermined at present is what types of roles countries that are not currently major AI hubs will adopt. I think that a possible failure mode, which happens a lot in lots of areas of national policymaking, is that everyone wants to sort of have it all. But the clustering effects are such that, especially in something that requires as much collaboration as AI, realistically, it would be very difficult for Sweden to build a world-class competitive AI ecosystem in Stockholm. Whereas, as you say, they absolutely could decide to funnel their money to a specific subset of research. If they’re open to funding foreign researchers, or being part of some collaboration, let’s say some EU-wide thing, then there are lots of possibilities there.
There was this phase in 2017 to 2019 where seemingly every government released some national AI strategy, which varied hugely in terms of the amount of resourcing that came with them and the amount of specificity, and almost always involved sort of echoing the same very hand-wavy principles. I think that now we’re seeing this phase where the rubber is hitting the road, and governments are making real decisions about how to resource things.
And just from the outside, it’s a fascinating question for any of these countries, because in almost none of them has the government itself played a major role in AI research recently. So they all have this question of what tools can they use to, via indirect means, influence the direction that things go in? And especially for entities that are fairly wealthy, but wouldn’t be considered top-five AI players, how do they decide to do that? It will be interesting to see.
Rob Wiblin: Yeah. I think it was six months ago or something that the UK government seemed really interested in creating a British version of ChatGPT.
Tantum Collins: Was it BritGPT? I remember something, I don’t think they…
Rob Wiblin: I don’t think they called it BritGPT; that was what the press called it. So people were making fun of it on that basis. I haven’t heard as much about that recently, so I wonder whether…
Tantum Collins: I mean, soft power, that’s one big thing the UK has going for it, right? I’m sure they could produce a more charming version.
Rob Wiblin: Yeah, I think I saw a satire where basically the point of BritGPT was to have GPT, but with British spelling. £100 million well spent!
While we’re on recent announcements, a few days ago, OpenAI said that they would spend 20% of their compute budget on not just alignment research, but alignment research related to superintelligent systems specifically. So in the announcement, it sounded like they thought we have some decent leads on how to align systems — like ChatGPT as it exists right now — but if you had something that just had substantially more capabilities, had broader capabilities than a human by far, and was just much better at lots of different things than us, then it seems like it’s going to get substantially more challenging.
And they feel like they’re a bit out of their depth right now. It sounded like they were happy to concede that, and so they wanted to put a lot of resources into that before having those systems, which I think is a smart move. Did you have any reaction to that announcement?
Tantum Collins: My reaction, I think, would be similar to the UK funding allocation decision, with similar but slightly different caveats. At the high level, I think it is a very good sign that major labs take these risks seriously, seem to be fairly honest about the stuff that they can and cannot do, and are willing to back that up from a resourcing perspective.
Maybe the caveats I’ll throw in are: first, it’s very hard to say, in an ideal world, how much of an OpenAI budget should be dedicated to what types of risks. So it’s entirely possible that maybe an outside audit would conclude that in fact they should put more of it towards near-term risks versus superintelligent risks and so on. I have no really strong views on that, but broadly speaking, I think directionally it’s very good to reserve resourcing for safety and ethics work.
The other thing I’ll say is that obviously there are many circumstances where the market creates adverse incentives, and where companies feel some tension between their PR priorities and their profit incentives. And in many historical instances, this has resulted in companies making pledges that seem great — and then when you dig into them and see how they unfold, actually it’s much more complicated.
I have no reason to think that’s the case here. I also know Jan [Leike] quite well, who leads safety at OpenAI. I think he’s fantastic. I think he does excellent work. Everything I’ve seen from OpenAI suggests that these concerns are genuine. But of course it’s worth flagging that, as with the sort of government allocation of money, the proof is in the pudding at some level.
Rob Wiblin: That’s pure Marxism, Teddy. I can’t believe it. You’ve really gone off the deep end on the left there…
I think my concern with the government funding is that, because the government is relatively far away from the field, or doesn’t necessarily have the AI expertise, it might end up funding stuff that’s really quite daft from a scientific point of view. And then I suppose with OpenAI, you might worry that it’s going to get diverted into things that are more commercial, that are more profit motivated one way or another. Or at least into things that only make sense from a view that’s somewhat biased towards the activities that OpenAI is already engaged in.
Tantum Collins: Exactly. That would be, I’d say, sort of the template of concern that I would bring to any private-sector commitments like this. That’s not to say that they’re not good. The commitment is better than the absence of a commitment if you’re going to choose one. And so I think that this is all very good. I’m just putting in the footnote the fact that…
Rob Wiblin: The devil will be in the details.
Tantum Collins: Yeah, the devil will be in the details.
China and AI [00:32:12]
Rob Wiblin: A topic that’s got a lot of airplay this year, of course, is China and AI. And there’s a couple of different angles there.
One is the arms race idea: that maybe the US should, even if it’s very nervous about all of this, just go full steam ahead with AI capabilities because otherwise they’ll lose their lead to China and that would be so bad. A second fear is the one that you raised earlier about how China might use AI to monitor its population and control Chinese people in a way that we would find repugnant. And a third one is how China might use AI in military applications to gain a strategic advantage.
So those are the three big categories that I’ve heard. I’m curious to hear potentially about what you might have to say about all of those, but the first one in particular, because I understand this is something that you’ve spent a bit of time looking into over the years. I think you were looking into China-related things specifically as part of your non-resident fellowship at CSET. Is that right?
Tantum Collins: Yeah. So I did actually in all three of my AI-related positions — which is to say, when I was at DeepMind, when I was a non-resident fellow at CSET, and also in government. Parts of all of those roles involved researching Chinese capabilities and intentions and legislation and so on, related to AI.
Rob Wiblin: To put the question simply, would you say the US is in an arms race with China over AI?
Tantum Collins: So I hate to sound like an academic — which, for the record, I’m not — but to some extent, this does boil down to a semantic distinction. There are some ways in which it resembles an arms race and some ways in which it doesn’t.
The ways in which it does are the obvious ones: The US and China, by most metrics, are, by a significant margin, the leading producers of AI-related research and products in the world. And in both places, AI capabilities are advancing quite rapidly. And in both places, there is some level of rhetoric around how do relative capabilities stack up, and we shouldn’t slow down — because if we slow down, then we’ll lose to the adversary.
There are also a number of ways in which it does not resemble a traditional arms race dynamic. The first is that, because very little of this is being done by governments — it’s mostly being done by private labs and, to some extent, universities — it’s quite distributed, and it’s more open than typical arms race development is. And most Western labs are more concerned, on average, about competition with other Western labs than they are about competition with Chinese labs. Obviously, that could change, but there’s relatively little coordination between these labs, and at the moment, there’s relatively little coordination between labs and government. Obviously, that is at some level increasing, but I would say the ecosystem is much more distributed than what we typically think of as an arms race setup.
Another thing that’s worth noting is that the US and China remain — by a significant margin — each other’s leading collaborators on AI research.
Rob Wiblin: That’s true even now?
Tantum Collins: Yeah. And in fact, the US–China pairing is, maybe predictably, by far the dyad that produces the most collaborative AI research. And not only that, but US–China AI collaborations have increased proportionally since 2010 more than those of almost any other pairing. In part, that’s just because both have taken AI very seriously, and China in particular during that period has advanced quite rapidly from a STEM perspective.
But all of that is quite telling. And especially in the national security community, obviously some people have concerns about proliferation of knowledge and so on. In other ways, it’s very hopeful, because it suggests that maybe there is a foundation for, for instance, collaboration on safety stuff: setting standards and so on. So that would be my “on the one hand, on the other” characterisation here.
Rob Wiblin: One argument that I’ve heard for why, even though there’s a bit of the structure of a race here, it’s not a very hot race right now, is basically that the US, in terms of capabilities, is a decent number of years ahead, and that China is probably going to fall behind for, I suppose, two reasons. One is they’re finding it more difficult to get cutting-edge chips — so they might fall behind in terms of the availability of compute — and perhaps also they just don’t have access to the same level of human capital as some of the best labs in the West have now.
And maybe also that China has this year been announcing or contemplating quite serious regulation of AI domestically — such that, again, we wouldn’t expect China to realistically catch up to or overtake the US anytime soon. With those regulations, it’s kind of accepting that it’s going to remain in second place, and it’s comfortable with that, so the race isn’t very intense, at least. What do you think of those arguments?
Tantum Collins: I’m more sympathetic to some of those than others. In general, I would say I’m wary of depictions that the US is far ahead in AI, because A, things are so unpredictable, and B, the AI landscape is so broad and varied.
To give a couple of concrete things to back that up: From a research volume perspective, Chinese AI output has been greater than US AI output for several years, in terms of total number of papers and patents and so on. And I should say, by the way, there’s a lot of great research on this. In particular, I would recommend Jeff Ding’s work, also a former 80K interviewee I believe, and Jordan Schneider who does a lot of China analysis, and recently has focused increasingly on AI. I think both of them produce really good stuff. Also CSET has a lot of good, very data-informed takes on how to size up something as broad as the research output of two countries in an enormous field.
But at a high level, the volume of Chinese output is very high; it’s bigger than anywhere else. If you include Chinese-language research, then I think it’s actually something like four or five times the number of papers that get produced in the US. People will often qualify that: the average quality of a Chinese paper, as measured by citations, is lower than that of a US paper. So people will often subset things by asking: within the top 10%, the top 5%, the top 1% of papers as measured by citations, who is producing more of them? China has now surpassed the US in all of those; it recently surpassed the US in the top 1%. Obviously, if current macro conditions in China hold, it’s possible that this will shift back in the coming years, but that has been the direction of the trend so far.
There are several subdomains of AI where China is unequivocally more sophisticated. The most obvious of these is computer vision. It is the case that in terms of the currently dominant or most hype-laden paradigm — which is to say LLMs, and more broadly, big transformer-based architectures — the labs that jumped into the lead initially, and retain that lead now, are Western labs.
But I would say two things on that. The first is that, again, largely because of the openness of AI research compared to other domains of dual-use significance, the potential to catch up is significant. There are other factors pulling the other way — I think there are a lot of questions about recursive self-improvement at the lab level — but it’s certainly feasible to see leading Chinese labs catching up quite quickly.
Also, this has to do with the general composition of Chinese labs versus US labs: when you look at big Chinese AI research institutes, headcount-wise they tend to be more engineering-heavy relative to scientists than Western labs. And the turn that research has taken over the past three or four years has been a little bit more in the engineering-y direction. It’s less about, let’s say, creating AlphaZero — which, if you don’t have the 10 best people in the world at the same whiteboard, maybe you never create — and a bit more about these engineering considerations: How can you parallelise one of these huge models across a large number of chips, and so on.
I know this is a bit of a ramble, but the second thing I’ll say on foundation models in particular is that I think whenever we’re in the midst of one of these research paradigms, it’s easy to underestimate how quickly the dominant paradigm can shift. And so not only is it totally feasible that Chinese labs could catch up on the LLM and foundation model front, it’s also totally feasible that in two years’ time the new paradigm will be something totally different, and maybe something that plays to Chinese capabilities more.
Rob Wiblin: What about the idea that the US will have an increasing lead in terms of compute availability? Does that seem plausible?
Tantum Collins: It definitely seems plausible. I think it’s too early to say what the long-run effect of the various measures will be, including, obviously, the CHIPS Act, which is significant domestic investment on the US side, as well as the export controls that try to limit Chinese access to not just the chips themselves, but also the whole stack of materials and machines that produce them. So it’s certainly possible that that will be an impediment.
China has invested a tonne in indigenisation. One area where I personally have updated: a few years ago, I would have thought that that lead would be very difficult to defend, just because, in general, Chinese manufacturing capacity is significant: it’s an incredibly sophisticated ecosystem; they have the capability to make lots of stuff. You know, I’m not a hardware person by training, but the more that I’ve learned about this space, the more I’ve appreciated how difficult it is to machine this stuff at the level of precision necessary, and the degree to which most of those components come from a small number of countries that are pretty tightly allied with the US. So it is undoubtedly difficult. China has invested a tonne in trying to build out its own production capabilities, with only middling results. There’s good reason to be sceptical that marginal investment will make a big difference, especially given that the macroeconomic situation in China at the moment is sort of challenging.
That said, it’s completely possible that they manage to catch up. In some ways this resembles the LLM thing: A, they could catch up on our terms, as it were, by building comparable machines; or B, in semiconductor production, as in AI algorithmic development, the paradigms shift relatively frequently. Not quite as frequently, because hardware just takes longer to gestate, but it’s totally possible that some of these more off-the-wall bets — neuromorphic computing, optical computing, something like that — take off, and in five years’ time we’re talking about a totally different supply chain. Maybe one that the US and its allies don’t control; maybe one where China has a significant advantage.
And the final thing I’ll say is that the main cost imposed by not having the latest and greatest chips is not so much an absolute threshold on what you can do; it’s more just that the energy expense goes up, so you’re getting fewer FLOPS per watt.
Rob Wiblin: So you’re just throwing more of the ordinary chips at it for longer and spending more money on the electricity.
Tantum Collins: Exactly. And in fact, there’s an interesting precedent here: the US restricted Chinese access to Xeon chips made by Intel to try and inhibit Chinese production of supercomputers, and China responded by saying, “Fine. We’re going to make a supercomputer that’s just as good, and we’re going to do it with homegrown chips, just to prove a point.” So obviously that raises the cost. And on the margin, raising the cost is, in expectation, a good way of dissuading an adversary, because it makes it harder for them to do it. But it doesn’t make it impossible.
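As a rough back-of-the-envelope sketch of that “fewer FLOPS per watt” point: the chip specs and training-run size below are hypothetical placeholders, not real figures, but they show the shape of the tradeoff, in which a restricted buyer can still reach the same total compute by burning more chips, time, and electricity.

```python
# Back-of-the-envelope sketch of the "fewer FLOPS per watt" point.
# All figures are hypothetical placeholders; the shape of the tradeoff
# is the point: export controls raise the energy and time bill rather
# than imposing a hard ceiling on total compute.

TARGET_FLOP = 1e24  # total compute for a hypothetical training run

chips = {
    # name: (peak FLOP/s per chip, watts per chip) -- invented numbers
    "cutting-edge accelerator": (1.0e15, 700),
    "last-generation chip":     (2.5e14, 500),
}

for name, (flops_per_s, watts) in chips.items():
    chip_seconds = TARGET_FLOP / flops_per_s      # chip-seconds of work needed
    energy_kwh = chip_seconds * watts / 3.6e6     # joules -> kilowatt-hours
    print(f"{name:26s}  FLOPS/W = {flops_per_s / watts:.2e}   "
          f"energy ~ {energy_kwh:,.0f} kWh")

# Same total FLOP either way -- with the weaker chip you just burn
# roughly three times the energy (and more chip-hours) to get there.
```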
Rob Wiblin: Well, the thing is, the cost of these supercomputers as a fraction of national GDP is just not very high. So if you’re willing to spend 10 times as much as you were because this is just the number one thing for you, then maybe it’s not a true impediment.
Tantum Collins: Exactly. Two quick reflections there. The first is that indeed the budgets for AI-related stuff have gone up a tonne recently relative to where they were before, and have reached a level that is very significant even for large companies. But the spending capabilities of major governments are typically much, much bigger. I have no idea where we’ll end up, but one can imagine a world where the amount of funding that gets poured into building these facilities and/or doing the biggest training runs continues to increase quite significantly. I think that there is a sense right now that these numbers have gotten astronomical — and in a relative sense they have — but that doesn’t mean that we’re close to the ceiling, depending on which actors decide to really jump into it.
The second point I’ll make — and I’m not a historian; I’m always wary of making sort of broad inferences — but there are lots of examples from Chinese history, both modern and prior, of a willingness to throw a huge amount of resourcing at massive projects, and an ability to do coordination at a scale that almost any other government would find really challenging. So that’s just a long way of saying that it’s certainly not the case that raising the cost makes it impossible.
Rob Wiblin: I guess there’s backyard steel smelting, which I think managed to kill a few million people.
Tantum Collins: Sometimes with greater economic efficiency than other times. But even more recently, there are a series of sort of science and engineering megaprojects that have been remarkably impressive.
Rob Wiblin: Yeah, totally. We’ll stick up a link to the iron-smelting project for people who want to get that historical reference. It didn’t work out. But no, that’s a cheap shot, because I’m sure more recently they’ve managed to accomplish some megaprojects that are really very impressive. I mean, we absolutely know they have.
Tantum Collins: Yes. And also, historically, there are all kinds of infrastructure projects that at their respective times in history would have been unfathomable to any other civilisation, whether that is building canals, building the Great Wall of China, all kinds of absolutely massive endeavours. And loads of stuff more recently in the STEM space.
Rob Wiblin: What about the other point, about China’s domestic AI regulation? Do you know very much about what they have decided to do just yet?
Tantum Collins: Yeah, I have followed this. Again, I’ve been out of government this year, so I know only what’s been published publicly. And again, it’s always difficult to predict what the impact of this stuff will be. And in the China case in particular, it’s often hard to get a sense of the relative motivations for different stuff.
So for instance, in the case of LLMs, it seems clearly to be the case that to some extent, these restrictions are grounded in fears about these systems perhaps spreading messaging that is contrary to the messaging that the Chinese Communist Party would like to see. It seems as though, to some extent, it’s also grounded in the kinds of concerns that are shared in the Western world, which is to say: these systems could be dangerous, they’re somewhat erratic, and we should eventually get to a better place than we are now in terms of having some guarantees that users can rely on about how and when to trust these models to do different things.
Yeah, it’s quite early to say. Again, I would say Jeff Ding has done some very cool work, some cool translations of documents in this space. So just a couple days ago, he released a translation of something that had been put together, I think by Alibaba Research, together with this basically government think tank that is comparing Chinese approaches to regulation to Western approaches to regulation. And in fact, it’s advocating for something that looks a bit more like the approach that is taken in the US, which they generally describe as being slightly heavier on sort of “soft law” — which is to say, norms that are maybe enforced by third parties and that involve consultation with the companies producing these things and so on — as opposed to what they describe as a slightly more traditional “hard law” approach that’s been taken in China.
But yeah, the short version is China is absolutely doing AI regulation. It’s probably too early to say how effective it will be or what they actually want to get out of it. But in broad brushstrokes, my inner optimist wants to say that is maybe a sign that, A, there is perhaps a foundation for some kind of international coordination, though that’s complicated for a whole host of reasons; and B, at the very least, even if there isn’t formal coordination, it’s more feasible to avoid race conditions if you have multiple entities that say, “We will make the decision to impose some regulation.” Again, the devil’s in the details, so I don’t want to sound too naive, but it could be good news.
Rob Wiblin: Is there anything else you wish more listeners knew about AI in China?
Tantum Collins: There are a whole bunch of things, and in a funny way, I think that a lot of news coverage of China tends to fluctuate between extremes. And this is true in AI as well as more broadly, where there will be a hype cycle that is, “The rise of China is inevitable,” and then there will be a hype cycle that says, “A centralised system was never going to work. Why did we ever take this threat seriously?” And of course, almost always the reality is somewhere in the middle.
I think that when China began making investments in AI, in particular with the 2017 release of this report from the State Council that was AI focused, I think that that, for a lot of people, was the first time that they began paying attention to what might China-related AI things look like. And in some critical ways, China is, of course, different than the US and other Western countries. I think that initially there was a moment of not fully appreciating how different the research culture, and in particular the relationship between companies and the government, is in China, relative to Western countries.
But now I worry that things have gone too far, and that people have these caricatured versions in their minds of what things look like in China. There are many of these, but to list a few: there is a meme that I think is inaccurate and unhelpful, that China can copy but not innovate. And obviously, historically, China has been responsible for a huge amount of scientific innovation. And also recently, again, the number of papers coming out of China on AI has increased significantly; it’s more than from any other country. I mean, there are loads of examples of very impressive, innovative scientific achievements coming out of China.
In particular, to cite Jeff again, he has written an interesting paper that describes the difference between innovation and diffusion of technologies in terms of the effect it has on national power. And so there are some countries that historically played a big role in early industrial technology, but didn’t manage to diffuse it across the country effectively enough to, for instance, make the most of it from a military perspective or economic growth. And there are others that did the inverse. So in the 1800s, there was a lot of commentary that the US was terrible at creating things, but was pretty good at copying and diffusing.
Jeff has a take — he has some interesting ways of measuring this — that today China is actually, in a proportional sense, better at innovation and worse at diffusion. And that one big strength that the United States has from a competitive angle is that it’s actually much better at diffusion. I think that will strike a lot of people as being the inverse of what they think, based on having read some stories about high-speed rail and solar cell production and so on. They have this meme that China can copy these things and diffuse them across the country and reduce the cost of production, but isn’t able to create things.
I think that’s an area where I can see where that stereotype came from, based on the specific economic investments that China made in the 1990s and 2000s, but I think it is inaccurate — and people risk misunderstanding how things look if they buy into that.
Rob Wiblin: A thing that’s tricky about that is if you’re behind the technological frontier, then it makes sense to try to copy in order to catch up, regardless of whether you could do your own innovation or not, because that’s just the fastest route to getting there.
Tantum Collins: Completely. I mean, obviously, economic growth in China over the past couple decades has been quite remarkable. And I think people often mistake a decision to do thing A with the inability to do thing B. And in this case, I think it’s exactly what you said. Maybe I’ll also just list one or two other China misconceptions. There are loads out there.
Another one is this idea that all of these companies are completely aligned and they take all of their marching orders from the government. Again, it’s easy to see where this came from. Almost all major companies do have a party committee within them. There is, in many ways, a tighter relationship between the tech community and the government — definitely more than in the US, where, for a whole host of reasons, there’s been historical mistrust — but there is still intensely fierce competition between companies.
And the society writ large, I think, is much more pluralistic than it is often depicted as being in the US. So we’ll get these sensationalised stories about, for instance, the social credit system and so on. And the coverage often ends up being grounded in something real; it’s not woven out of whole cloth, but it’s miscontextualised to a very worrying degree.
Rob Wiblin: I’ll stick up a link to something about the social credit system. As far as I can tell, Western reporting on that has been basically complete bollocks.
Tantum Collins: Almost total nonsense. And the problem is, it just kind of ties into this theme, which is there are pieces that are not wrong. Like, China does have a huge surveillance state. And to somewhat contradict myself now, one fact that people often don’t appreciate is that the Chinese internal security budget for the past several years has been higher than its defence budget. So people worry about that. They compare the Chinese defence budget to the US defence budget, and it’s often easy to lose track of the fact that even more money is being spent on domestic stuff, a lot of which is very repressive. So there’s plenty of reason to be concerned. But often the bits and pieces that get pulled out in news coverage are inaccurate, and the social credit system would be one of those.
And I think that leads to a simplified representation of China as this sort of unitary actor — and being an autocracy is not really the same thing. Everyone has principal-agent problems, to bring it back to what we were talking about before, and there’s plenty of internal competition within the government, between companies; lots of differences in terms of the goals that companies would like to strive for versus what the government wants them to do.
One way of interpreting the tech crackdown that Xi Jinping has led is that those incentives clearly were not aligned to the satisfaction of the CCP, or at least the Politburo Standing Committee. So that would be another misconception. Also, the idea that there’s no AI safety concern in China. There are people who are worried about this. It’s probably a smaller community, relatively speaking, than in Western countries. But I think often we end up with this sort of cartoon impression of the country.
Rob Wiblin: Yeah. Another thought that I’ve felt myself coming back to repeatedly this year, in terms of US–China racing over AI, is just that, given that cybersecurity is as bad as it is, trying to win the race by training a model in your country rather than the other country seems incredibly perilous. Because it seems very likely that if it’s widely known that this model you’ve trained could potentially provide a decisive strategic advantage, or has enormous geopolitical implications, then it’s probably just going to get nicked by the other country, because offensive hacking in general dominates defence at the moment. Does that seem like a legitimate worry?
Tantum Collins: Yes, absolutely. I think this is a huge concern. I think that in general, yes. Yes.
Rob Wiblin: Cool. All right, good. I guess overall it sounds like you think that there is a decent amount of a race dynamic between the US and China, or that at least we can’t set that aside completely and say there’s no worry here. So this is a consideration that needs to be weighed against everything else.
Tantum Collins: Yeah, definitely. In general, there are a host of questions that listeners have probably come across previously, having to do with the tension between the benefits that come from openness and the security risks that that brings. One of the things that historically has been really nice about the AI world has been that it has instantiated a lot of values that in general are great — around openness, and collaboration, and open sourcing code, sharing models, publishing results, opening things up to peer feedback, et cetera.
Rob Wiblin: Yes. I have found it very ironic that the scientific discipline that I am most nervous about, maybe with the exception of gain-of-function research, seems to be the only one that is extremely healthy in all of these respects.
Tantum Collins: Yes, it’s true. And part of that, I think, is down to a whole host of cultural reasons. There are also, of course, some innate reasons — namely that for anything that’s digital, the marginal cost of sharing is just much lower than with physical stuff. Recently, of course, as the cost of training these models has gone up, that’s become harder. But inference cost is still pretty low. So the other day I was running the biggest version of Whisper just on my MacBook, and it works fine. And of course, now there are a bunch of generative models that you can run locally as well. So yeah, I think that it helps openness, and in some ways harms security, that these tools are so easily and readily shared.
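For concreteness, running the open-source Whisper model locally really is only a few lines. A minimal sketch, assuming the openai-whisper Python package is installed (plus ffmpeg on the system), and where “interview.mp3” is just a placeholder for whatever audio file you have:

```python
# Minimal sketch of running OpenAI's open-source Whisper model locally.
# Assumes `pip install openai-whisper` and ffmpeg installed on the system;
# "interview.mp3" is a placeholder filename.
import whisper

model = whisper.load_model("large")          # downloads the weights on first use
result = model.transcribe("interview.mp3")   # runs inference on CPU or GPU
print(result["text"])
```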
And that creates lots of tricky questions. I do not think that there is a magic bullet here. I think that there are a lot of difficult decisions that governments and labs will have to make around where on that frontier between sort of openness and security they want to live.
This is one of the things that was the most visceral to me when I transitioned from DeepMind to government: at DeepMind, and in the AI community in general, there is this major emphasis on trying to make sure that your work is shareable and publicly visible. And it’s reflected in office layouts and trying to make it easy for people to collaborate, and it also exists at a more macro level: the types of achievement that people care about are getting papers published at conferences. It’s an innately public undertaking. And in government, this is a bit of a cliché, but it’s exactly the inverse, insofar as the cooler your work is, the less you can tell people about it. And so everyone wants to be in the innermost compartments and go through 10 locked doors in order to get to their office.
This is painting things in broad brushstrokes, but clearly there’s a big cultural difference there, and it will be very interesting to see how the AI community evolves in that respect.
The most promising regulatory approaches [00:57:51]
Rob Wiblin: Yeah. OK, let’s talk about what we might hope governments will do on AI regulation over the next few years, dealing with that incredibly difficult tradeoff, among many other things. Do you have a view on which regulatory approaches are most promising, all things considered, for governments to try to pursue over the next couple of years?
Tantum Collins: I think there are a few like level-zero, foundational things that make sense to do. One is, at the moment, especially in the West, there are major disconnects between just the way that people in government use language and think about things relative to the way that people in the tech world do. So one important step zero is just increasing communication between labs and governments, and I think recently there’s been a lot of positive movement in that direction.
A second and related thing, and this somewhat ties back to these democratisation questions, is that even under the most competent technocracy, I would be worried about a process that doesn’t involve a significant amount of public consultation, given how general purpose these systems are, and how pervasive the effects that they could have on our lives will be. And so I think that government has a lot of work to do — both in terms of reaching out to and engaging the AI community, and also in terms of engaging the general public.
There’s been a lot of cool work in this direction recently. I’d highlight what The Collective Intelligence Project has undertaken. They’ve led a series of what they call “alignment assemblies”: essentially exercises designed to engage large, ideally representative subsets of the population with questions about what kinds of AI things worry them the most.
Also, recently from labs there’s been some interest in this stuff. OpenAI has this democratic input for AI grant that people have just applied for. And then also there are several labs that are working on projects in the vein of, in particular, LLMs: How can we use these to facilitate larger-scale deliberative processes than before? And one of the projects I worked on when I was at DeepMind — and I’m actually still collaborating with some of my former colleagues at DeepMind on — is something in this direction.
So those would be some very basic steps, before even landing on policy, that I think are important. Beyond that, I think that there are some areas that are relatively uncontroversially good. So to the extent that we think that AI will, at some level, be a public good, and that private market incentives will not sufficiently incentivise the kind of safety and ethics research that we want to happen, I think that allocating some public funding for that stuff is a good idea. And that spans the full gamut — x-risk alignment things, more present-day prosaic ethics and impact considerations, interpretability research: the full list.
And a final thing that I think is as close to a no-brainer as you can get is that clearly some kind of clearer benchmarking and standards regime is important — because right now it’s sort of the Wild West, and these things are just out there. And not only is it difficult to measure what these things can and cannot do, but there is almost nothing in the way of widely known trusted intermediary certifications that a nonexpert user can engage with to get a feel for how and when they should use a given system.
So there are a whole bunch of different proposals — some involve the government itself setting up regulatory standards; some involve some kind of third-party verification — but the point is to have something. And that could be model cards, or it could be the equivalent of nutritional labels. There’s a whole range of options there. But at the moment I think a lot of people are sort of flying blind.
Rob Wiblin: Yeah. Could you say more about that last one? Is this the evaluations and monitoring thing, or certification?
Tantum Collins: Yeah, I’m kind of lumping together a couple of different things here. Basically the idea is that, in many other areas when we have technology that will be in some way consequential, there are either hard-and-fast regulatory requirements that systems have to clear — so before a car is road-worthy or a drug can be used legally — or there are third-party bodies that have some kind of standard-setting, so that a user who doesn’t understand the field in detail will be able to look at something and say, “Yes, I can use this, without huge risk to my health,” let’s say.
And there are all kinds of different ways that this is interpreted in different industries. But at the moment, largely just because the widespread use of powerful machine learning systems is so recent, we don’t really have an equivalent thing for AI.
Rob Wiblin: So the idea is that potentially it might be premature — because we just don’t know what sort of regulatory regime we want — for the government to have strict requirements about exactly what sort of specifications LLMs need to meet before consumers can use them. Maybe we want to get to that at some point. But the thing that we could do earlier is just build an infrastructure, potentially a private infrastructure, of people who are testing these models, and understanding their strengths and weaknesses, and basically providing a health warning to users so that they understand what they’re getting into when they use these models. And potentially then, once we have a much better idea about what we actually should require, we can go ahead and do that. But that might be down the road.
Tantum Collins: Yeah, exactly. I actually don’t really have strong thoughts from a sequencing perspective. Maybe we do want to begin with something that is binding government regulation. But I think at the very least there should be something that users can use as a foothold to get a sense of when it’s safe and responsible to use tools in certain contexts.
And often there are sort of middle categories. So for instance, a lot of the guidance issued by NIST, which is the National Institute of [Standards] and Technology in the US, is not legally binding, but it is an agreed-upon standard. For instance, NIST released this AI Risk Management Framework, which a lot of companies have found very helpful, because they have wanted to be able to say, “We checked all of the boxes on this thing before using AI in a certain way.” Again, it’s not legally required that they do this. But I think a lot of entities were really concerned that they wanted to do something good, and when they googled it, they found like 1,000 different papers about AI ethics — and some of them had to do with user responsibility and some of them had to do with the paperclipper. And to someone who is doing compliance at a mid-sized company, that’s just a lot to have to digest and deal with.
So whether it is something that is like binding regulation, soft guidance from the government, or some sort of third party — obviously there, you’d have to make sure that it wasn’t industry capture — I think that there needs to be something that the average user can interact with, without reading a treatise on how to interpret the activations in the hidden layers of neurons in these systems, so that they can have confidence when they interact with these things. Just like we do with all the other sophisticated technology that we use all the time: planes and cars and computers and phones and so on.
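To make the “model cards or nutritional labels” idea slightly more concrete, here is a minimal sketch of what such a label might look like as a structured record. The field names and values are purely illustrative; they are not drawn from NIST guidance or any existing certification scheme.

```python
# Purely illustrative sketch of a "nutritional label" for an AI model:
# a short, standardised summary a nonexpert could check before relying
# on a system. Fields and values are hypothetical, not an existing standard.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class ModelCard:
    name: str
    intended_uses: List[str]
    known_failure_modes: List[str]
    eval_results: Dict[str, float]        # benchmark name -> score
    certified_by: Optional[str] = None    # hypothetical third-party certifier

card = ModelCard(
    name="ExampleChat-1",
    intended_uses=["drafting text", "summarisation"],
    known_failure_modes=["hallucinated citations", "uneven non-English quality"],
    eval_results={"factual QA benchmark": 0.81, "toxic output rate": 0.02},
    certified_by=None,  # not yet reviewed by any independent body
)
print(card)
```

The particular fields don’t matter; the point is that a user could read something this short instead of a treatise on the activations in a model’s hidden layers.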
Transforming the world without the world agreeing [01:04:44]
Rob Wiblin: OK, turning back to the first thing you mentioned, which is democratic consultation or getting public feedback. There is something that is quite bananas about the situation that we’re in right now. So there’s this community of about 10,000 scientists, many of whom believe that they are on the verge of transforming the world completely, and potentially bringing about fully automated luxury gay space communism within our lifetimes — and no one’s really been asked whether they want this. I mean, I guess this work is mostly happening in the US and UK, but as far as I can tell, voters haven’t really been told about this plan, or asked how they feel about it in those countries — let alone anywhere else.
So what if you’re a subsistence farmer in Mozambique, and now your world might be completely turned upside down because it might not be possible for humans to make money anymore? Or at least science might advance at such a massive rate that the world will become incomprehensible, potentially, within your lifetime. And at what point are you going to be asked whether this is what you want for your species?
Do you have any more thoughts on what we ought…? It’s not necessarily a criticism of OpenAI, or that research community, because I suppose this is a very weird circumstance. Mostly scientists don’t have to do this, because what they’re doing is more straightforwardly good — and if you’re doing research to come up with better treatments for cancer, in a sense the public does know that you’re doing this and kind of has been consulted. We’ve just barreled forward into quite a different situation with AI, and it’s much less clear what consequences are going to result, and things haven’t kept up at all.
Tantum Collins: Yes. So I completely agree with that characterisation. I would say there are many factors that make this unusual. One is that things progressed so quickly. So if you were a machine learning researcher in 2016, maybe you were just doing the nerdy thing that you wanted to do because you’d always enjoyed programming. And now all of a sudden, you think that you are potentially part of a small community that’s on the brink of building this potentially world-changing thing. And the community is not representative, and it doesn’t really know what most people want, and so on. So all of the problems that you identified are accurate.
Another thing that’s challenging is that, from a government perspective, because AI is so general, it doesn’t tidily map to the taxonomy of issue areas that government has grown around. So it’s hard to say. There’s no clear agency —
Rob Wiblin: No agency of “Do we want to completely transform society in the next 20 years?”
Tantum Collins: Exactly. And this will affect all of these areas of products, and science, and daily life, and so on. It’s almost too broad and deep to grapple with using the tools that we’ve traditionally used. The punchline that I find most interesting is this idea that perhaps AI itself can help us build some of the tools that are needed to have these conversations.
But I’ll say maybe one other thing from a framing perspective, which is that in some ways this problem is just a more extreme manifestation of things that are actually quite pervasive now. As the world has become more sophisticated and more specialised and government remit has grown, all of us use things all the time that we basically have no idea how they work. And that creates a whole bunch of questions —
Rob Wiblin: Like magnets, for example.
Tantum Collins: Yeah, magnets: What’s going on there? [laughs] And so AI is a particularly extreme example here — but I would say it is a difference in degree, rather than a difference in kind. And there are many ways that potentially AI could help us have deep and meaningful conversations at scale about a host of topics — including things like how AI should be developed, and what use cases should be prioritised, and what dangers are the most worrisome, and so on. And I’m happy to get into detail on any of that if it’s useful.
Rob Wiblin: I suppose it’d be great if AI could help us with this challenge, because there’s a lot of work to be done, potentially, and not very long to do it in. But it seems like we should maybe also have a track of doing it the normal way — of finding out what people think — in case the AI thing is maybe a bit of a red herring.
Tantum Collins: Yes.
Rob Wiblin: I mean, people could push back and say that actually, this is better left to the scientists, because they have a better sense of what the tradeoffs are here. And just involving ordinary people — who have no particular expertise in AI and haven’t thought about this at all — maybe it could just produce worse decisions. You could always make an argument that this is very specialised work. I can see that.
I also think, given how much people are going to be affected, it doesn’t feel legitimate to me to do these things without consulting with people — even if that might slow things down substantially, or even if there is some risk that they might make mistakes, even by their own lights after the fact. Is there anything more prosaic that you think should be done in terms of public consultation?
Tantum Collins: Yeah, absolutely. Maybe I jumped too quickly to my techno-utopian vision. And I should say, for the record, that part of why I’m interested in working on that stuff is because I think there are about a million ways that well-intentioned techno-utopian approaches to this can go wrong, and even the best ones raise some tricky questions.
So absolutely, there are loads of more prosaic things that can be done: for instance, these alignment assemblies that The Collective Intelligence Project has run so far. I mean, they’ve used an online platform, but mostly they’re held the way that citizens’ assemblies have been held for a very long time. There’s kind of been this resurgence in interest in citizens’ assemblies. Hélène Landemore has done a bunch of cool work there. I’d also point to Danielle Allen, who’s done some really cool things.
I think that you previously interviewed Audrey Tang, who has led a bunch of projects in Taiwan — some of which are more sort of tech-forward and some of which are less tech-forward — that involve sort of broadening the avenues for public participation in decision making, including decisions about complicated technical topics like disease spread. There was a big consultation about exactly how to regulate ride-sharing services like Uber. All of these things are in some ways more complicated than the ways that we in our daily life will interact with these topics.
I think a lot of this is execution-dependent: there are unproductive ways to run conversations like this and there are productive ways to run conversations like this. There’s been a lot of cool research showing how to increase the likelihood of falling into camp two: How do you make it healthy? How do you set up an environment where people can engage with topics in which they don’t have expertise, but come out of it having expressed preferences that, on reflection, they would actually endorse?
And that does not need to involve any sort of fancy technology; there are lots of approaches. The promise of technology is maybe you can do this in perhaps a deeper and more scalable way. But again, those are differences of degree, not differences of kind. And so absolutely there are all kinds of ways that I think we can and should be doing more public consultations.
Rob Wiblin: Yeah, the mechanism here that’s always made the most intuitive sense to me is citizens’ juries. I guess the nightmare scenario that you might envisage for a big public discussion about AI is having all 330 million people in the United States spend five minutes thinking about AI and then offer their opinion in a comments thread.
The thing that does sound quite good to me, and probably would lead to useful insights, would be if we got 1,000 or possibly even 100 randomly selected people from across the US population. You’d have some sort of jury summons, basically, and you would pay those people a salary to think about this full time for months or years — to learn everything, and to hear from all of the different people and their different views about what should happen and why. And then talk among themselves in order to produce some idea of what a representative sample of people across the country [would want].
I mean, ideally, maybe you’ll do this across the whole world as well. You could sample 1,000 people from all countries across the world, in proportion to their population, and then find out what they think that the tradeoff we ought to accept should be, across safety versus risk versus delay versus speed up versus a potential reward. I have no idea what would come out of that, but I’m very curious.
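To make the proportional sampling idea above a bit more concrete, here is a minimal sketch of one standard way to apportion a fixed number of jury seats by population (largest-remainder, or "Hamilton", apportionment). The country names and population figures are placeholders for illustration only, and nothing here is part of any actual citizens' assembly design.

```python
# Toy sketch: allocate a 1,000-person global citizens' jury in proportion
# to population, using largest-remainder (Hamilton) apportionment.
# Populations below are hypothetical placeholders, not real figures.

def apportion(populations: dict, seats: int) -> dict:
    """Give each country floor(quota) seats, then hand leftover seats
    to the countries with the largest fractional remainders."""
    total = sum(populations.values())
    quotas = {c: p * seats / total for c, p in populations.items()}
    alloc = {c: int(q) for c, q in quotas.items()}
    leftover = seats - sum(alloc.values())
    for c in sorted(quotas, key=lambda c: quotas[c] - alloc[c], reverse=True)[:leftover]:
        alloc[c] += 1
    return alloc

# Hypothetical populations (millions), for illustration only.
pops = {"A": 1400, "B": 1380, "C": 330, "D": 67, "E": 5}
print(apportion(pops, 1000))  # e.g. {'A': 440, 'B': 434, 'C': 104, 'D': 21, 'E': 1}
```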
Tantum Collins: Yeah, absolutely. Maybe an abstract framing here — that I think is imperfect, but would be a starting point — is that people tend to have a pretty good sense, if you can paint a picture for them, of what possible future worlds look like. People will have relatively well-informed preferences on how much they would value each of those. The challenge comes in figuring out what policy and technical tooling will mediate the achievement of those worlds.
So this is where you have a lot of depressing results about the fact that people vote for policies that are not in their interests — not out of a sense of nobility, but because they mistakenly believe that they are in their interests. And this, I think, fuels a lot of historical antidemocratic arguments, many of which take the general form of, “Maybe once upon a time, democracy made sense, but now the world is too complicated and you simply cannot trust a randomly selected citizen or group of citizens to make decisions about something as complicated as AI policy, nuclear policy, regulating the economy,” et cetera, et cetera.
Personally, I’m not nearly that fatalistic. I think that there are lots of ways — some of which involve using fancy AI things and some of which don’t — that you can set the stage for conversations that enable people to do that mapping in a well-informed way — that is, to extract truly well-informed, reflective preferences from people, even about domains in which they don’t necessarily have preexisting subject matter expertise. Part of the promise of AI in these settings is the idea that it’s easy to imagine, for instance, non-hallucinating LLMs that are relatively unbiased and well-informed on the relevant areas of scientific expertise, helping individuals navigate this thicket of mediating levers, if that makes sense.
Rob Wiblin: Yeah. I think the argument that the world is too complicated technically for people to understand it is quite a compelling argument against direct democracy. I think it’s a much weaker argument against representative democracy, where you do then have people working full time in order to try to build enough expertise in these areas in order to make decisions. Of course there’s still lots of challenges there, and weaknesses, and mistakes are going to be made. But we have representative democracy in many countries now, and it seems to work reasonably well, and so we might hope that it could work reasonably well in this case as well.
And citizens’ juries are an alternative structure for representative democracy that I guess is aiming more to represent the values of ordinary people, and potentially also to bring the expertise that they do have to the table. You could imagine, if the concern is that a lack of technical understanding is going to hold this group back, what if you had those 100 people and they just summoned expert witnesses constantly? Like for years, Tuesdays, Wednesdays, Thursdays, they have people giving them briefs; they call in experts from all across the ML field. I think what they would come away with is that there’s an awful lot of disagreement here, and they can’t tell who has the right read on how great the risks are, and they would have to make a decision on that basis — which is the same situation that in fact, I think we are basically all in: that there’s an enormous amount of expert disagreement and that leaves us in a difficult situation. Nonetheless, we have to act.
Tantum Collins: Yeah, I think that’s right. So maybe to step back and give an abstract framing. Even in areas of high uncertainty, there are some approaches or institutions that seem to have better predictive track records than others.
Robin Hanson has this proposal for “futarchy,” where the tagline is “Vote on values, but bet on beliefs.” The idea is that you elect representatives, and what they’re in charge of doing is essentially creating the national objective function. And that’s going to be some combination of, let’s say, GDP and Gini coefficient and environmental wellbeing and the quality of education and all of this. And you elect people based on the values that they profess, and they sort of hash out how to integrate all of these things into a single metric. And then policies are proposed, and you defer to prediction markets to decide which policies are adopted. Essentially the question that you’re posing to the market is something along the lines of, “Will this increase this national objective measure better than the alternative?” — where the counterfactual is not implementing it.
And so in principle this is a nice idea, in that it separates this values component and this prediction component. In practice, I think that actually those two things are almost hopelessly intertwined, and untangling that requires a lot of very fine-grained work: To what extent do you want something to happen because it reflects a value of yours versus to what extent do you want it to happen because you think it’s the means to another end? I think that act of unravelling those factors is a very difficult one, and is one that productive conversations can achieve pretty well.
So this is where the value of something like citizens’ assemblies shows up — or conceivably, something where you’re interacting with a scalable automated system that also helps you digest which things you want and why, and then perhaps handles the mapping of that to expectation over a set of policies that may or may not produce those outcomes.
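For readers who want to see the “vote on values, bet on beliefs” split laid out mechanically, here is a minimal toy sketch of the futarchy decision rule Tantum describes. The metric names, weights, and market forecasts are all invented for illustration; no real futarchy system works from this code.

```python
# Toy sketch of Hanson's futarchy decision rule.
# All metrics, weights, and market forecasts are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class ObjectiveWeights:
    """Values chosen by elected representatives ('vote on values')."""
    gdp: float = 0.4
    gini_inverted: float = 0.2   # lower inequality scores higher
    environment: float = 0.2
    education: float = 0.2

def national_objective(metrics: dict, w: ObjectiveWeights) -> float:
    """Collapse several already-normalised metrics into one score."""
    return (w.gdp * metrics["gdp"]
            + w.gini_inverted * metrics["gini_inverted"]
            + w.environment * metrics["environment"]
            + w.education * metrics["education"])

def adopt_policy(forecast_with: dict, forecast_without: dict,
                 w: ObjectiveWeights) -> bool:
    """'Bet on beliefs': adopt the policy iff prediction markets expect it
    to raise the national objective relative to the status-quo counterfactual."""
    return national_objective(forecast_with, w) > national_objective(forecast_without, w)

# Example: market forecasts of normalised outcomes with and without the policy.
with_policy = {"gdp": 0.72, "gini_inverted": 0.55, "environment": 0.60, "education": 0.65}
without_policy = {"gdp": 0.70, "gini_inverted": 0.56, "environment": 0.58, "education": 0.65}

print(adopt_policy(with_policy, without_policy, ObjectiveWeights()))  # True in this toy case
```

The sketch also makes Tantum’s objection visible: everything interesting is hidden in how the weights get chosen and how the market forecasts are produced, and in practice those two steps are hard to keep separate.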
AI Bill of Rights [01:17:32]
Rob Wiblin: Coming back to the here and now, do you have any suggestions for how the AI Bill of Rights proposal in the US or the EU AI Act might be improved, or areas where you’d like to see them changed?
Tantum Collins: I very much liked the Bill of Rights. The challenge there, of course, is that there is no component of it that is legally binding. And the reason why it’s titled the Blueprint for an AI Bill of Rights is because it is in some ways an aspirational document that outlines the kinds of things that people should be able to expect from these systems. But no producers of AI products are currently in any way bound by this stuff. And so in the long run, I suspect we will want to, in a legal or regulatory way, instantiate those aims more fully.
Another thing I’ll say about both the Blueprint for an AI Bill of Rights and the EU AI Act is, as is the case with almost any of these documents, they are not comprehensive solutions to all AI-related harms. So both of those focus overwhelmingly on questions of: With capabilities that exist today, or will exist in the very near future, what are the kinds of day-to-day challenges that people in society have or will have in using them? As opposed to questions about, let’s say, alignment of superintelligence — that’s almost entirely absent from these frameworks. So I think that they are addressing a subset of problems.
I think that the Bill of Rights does it quite well, with the caveat that it isn’t super binding. When I was in the White House, I did do a fair amount of coordination work with the EU on AI issues, but the AI Act is still under construction, and I think that just this past month they either proposed or approved a new slate of amendments — and in total, the number that’s being proposed is, I think, several thousand — so I cannot claim expertise in having read all of this stuff. But the big concern that some people have flagged with the AI Act is the classic regulatory concern of will this discourage innovation and so on. I haven’t really seen any definitive signs that that would be the case, but I don’t have huge confidence when speaking about the AI Act.
Rob Wiblin: Is the plan with the AI Bill of Rights that we’re starting with this kind of voluntary framework, or basically foreshadowing what the legal requirements might be in the future, with the goal of seeing how that goes? And then at some future time, once we have a better understanding of what’s technically possible, then that could be made binding?
Tantum Collins: I am not sure if there is an official and universally agreed-upon US government position on that. I know a lot of people, both within government and outside, hoped that was the trajectory it would take. I also know other people who are much more regulation-shy, who are much more in favour of something that’s very laissez-faire and just let the market figure things out and so on. Personally, I’m a little bit more in the former camp than the latter, but I am not sure what the latest consolidated government position is, if there is one.
Who’s ultimately responsible for the consequences of AI? [01:20:39]
Rob Wiblin: It’s very unclear to me how responsibility for the consequences of AI is split across various different parts of the US government. It feels a bit like there’s no identifiable actor who really has to think about this holistically. Is that right?
Tantum Collins: Yes, this is true. And in part this gets back to this issue of AI is, A, new; B, so general that it challenges the taxonomy of government stuff; and C, something that government has not until recently engaged with meaningfully in its current form. So various government research projects throughout time have used some kind of AI, but government was not really in any meaningful way driving the past decade of machine learning progress. And all of this means that there are a tonne of open questions about how government thinks about AI-related responsibilities and where those sit.
Rob Wiblin: Who are the different players though, who at least are responsible for some aspect of this?
Tantum Collins: So within the White House, the sort of main groups would be the Office of Science and Technology Policy, where I worked before. That, within it, has a number of different teams, several of which are quite interested in AI. There is one small group that is explicitly dedicated to AI; there is a national security team, that was where I sat, that handles a lot of AI-related things; and then there is the science and society team, that was the team that produced the Blueprint for an AI Bill of Rights. These groups work together a fair bit, and each one has a slightly different outlook and set of priorities related to AI.
Then you have the National Security Council, which has a tech team within it that also handles a fair amount of AI stuff. At the highest level, OSTP historically has been a bit more long-run conceptual research-y, putting together big plans for what the government’s position should be on the approach to funding cures for a given disease, let’s say. And the NSC has traditionally been a bit closer to the decision making of senior leaders. And that has the benefit of, in some immediate sense, being higher impact, but it also means being more reactive and less oriented towards long-run thinking. Again, these are huge generalisations, but those are sort of two of the groups within the White House that are especially concerned about AI.
These days, of course, because AI is on everyone’s mind, every single imaginable bit of the government has released some statement that references AI. But in terms of the groups that have large responsibility for it, then of course there is the whole world of departments and agencies, all of which have different AI-related equities.
So there’s NIST, which I mentioned earlier, which does regulatory stuff. There’s the National Science Foundation, which of course funds a fair amount of AI-related research. There’s the Department of Energy, which runs the national labs. And the name is slightly misleading because they don’t just do energy stuff.
Rob Wiblin: They mostly do nuclear, right?
Tantum Collins: Exactly. It’s sort of funny because this is actually something I didn’t really appreciate until I came into government. My understanding of the US bureaucracy was pretty limited, I think in part because I didn’t do all of my schooling in the US. So the basic civic education stuff — a lot of it that people learn when they’re like eight — I didn’t learn until I was living it in this job. And one of these things is that the Department of Energy is actually this incredibly powerful and really, really big organisation. In my mind, I thought they do wind farms and things. But it turns out that, because they’re in charge of a lot of nuclear development and security, they actually, especially in the national security space, have quite a lot of authority and a very large budget.
Of course, in addition to all the stuff in the executive, then there’s Congress — which has at various times thrown various AI-related provisions into these absolutely massive bills. So far, I believe both the House and the Senate have AI-focused committees or groups of some kind. I’m not super clear on what they’re doing, but obviously there is also the potential for AI-related legislation.
Anyway, the list goes on, as you can imagine. Obviously the Department of Defense and the intelligence community also do various AI-related projects. But yeah, at the moment there isn’t a clear coordinating entity. There have been a number of proposals. One that’s been in the news is Sam Altman suggested during his testimony that there should be a new agency created to focus specifically on AI. I think it remains to be determined whether that happens and what that looks like.
Rob Wiblin: I think I asked this question of Holden a couple of interviews back: Imagine that there was some kind of warning shot, there was some disaster in which AI were involved — it could be a misalignment issue where an AI goes rogue on a medium-sized scale, or potentially a misuse case where AI is used to help someone to do a lot of damage or kill a lot of people. And then it was very broadly agreed that we really needed the government to, say, get into the labs that were producing this very dangerous software, basically, and lock it down or change it in some way. Basically, action had to be taken because we’d realised that it was substantially more dangerous than we had realised the previous week.
Who in government has the authority or the responsibility to do that? Does anyone have that ability?
Tantum Collins: I should say this isn’t an area that I did any direct work on, so don’t read too much into my responses. I’ve read a few public analyses of what authorities exist under various doctrines. So, for instance, there’s the Defense Production Act: it’s not quite nationalisation, but it enables the president to direct existing entities to work on a specific thing. So this was used, if I remember correctly, during World War II, where a bunch of car manufacturers were told to make military kit. And Trump actually invoked it during COVID to get a couple of companies to make ventilators.
I don’t remember all of the details, but I think there are some questions about could the DPA be used to, in some forceful way, direct AI companies to do certain types of research or not do certain types of research? If I remember correctly, the paper that I read said probably not. When some people say that the government could just “nationalise” the AI companies, it seems that the DPA authorities are a little bit limited.
Rob Wiblin: Could the government direct everyone at one of the research labs to just play solitaire all day? It’s an interesting legal case.
Tantum Collins: Exactly. This sounds like a fun one for the Supreme Court to deal with.
But yeah, jumping back to the Holden question, I think to a significant degree, given the current taxonomy of issue areas within government, it would depend on what the nature of the incident was. So is it an incident that first and foremost is a cybersecurity vulnerability? Is it a lab that is doing something that is interacting with the real world in ways that cause physical harm — which might be the remit of anything from the local fire department to the FBI or the National Guard or something?
So in part, I don’t think that the answer is there is no capability to respond, or no set of authorities. But I think that at the moment, the relevant authorities are totally fragmented, because they are not built around a foundation of thinking about AI-related risks. So I am relatively optimistic that if something really bad happened, or seemed like it was going to happen, there is something that could be done. But under current infrastructure, there will probably be a fair amount of confusion in the beginning about which authorities to use, and how and why, and what kind of coordination would be needed.
Rob Wiblin: How would it be different if a lab was enthusiastic to have government involvement? Then maybe this becomes a lot easier, because you can just have conversations and then things get done without you having to necessarily have legal authority to force something to happen. And you might imagine that in this situation, people might be very happy to volunteer. They might think that, actually, we’re a little bit more out of our depth than what we appreciated, so we want to get more people into the meeting here.
Tantum Collins: Exactly. I think one big question at the moment is, in terms of these questions about how can you keep models safe, there are a number of industries — including areas that are sort of direct, let’s say, defence contractors, but also just other domains of research that are understood to be sensitive — where the government in some way assists with security to prevent the kind of thing we were talking about earlier. So theft of potentially dual-use IP by foreign governments, for instance, or for-profit hackers, or what have you. I think there’s a whole spectrum of ways that government and industry could collaborate on things like that, and I’m not especially well versed in what the precedents are.
Policy ideas that could appeal to many different groups [01:29:08]
Rob Wiblin: Within this broader ecosystem of thinking about ways that AI might change the world, worrying about ways that could go wrong, and trying to push us towards better directions, there’s many different interest groups or many different specific concerns that partially overlap with one another but are different. Of course there’s the people who are worried about extinction, there’s people who are worried about misalignment, there’s people worried about misuse, there’s people who are worried about consumer protection now. There’s people who are worried about ways that AI could be deployed that would change society in subtle ways that would be bad and add up over time — I guess analogous perhaps to ways that social media might have gradually worsened things. There’s the national security folks, there’s people concerned about cybersecurity.
And I think all of them have some legitimate points, legitimate concerns. One doesn’t have to pick one of these and say, “This is the issue and the others aren’t.” There could be many different potential problems at different points. It would be really nice to find policy ideas that are appealing to a whole lot of these different groups at once. To some extent, we might have an issue where there are so many different potential problems that the ecosystem is incredibly fragmented, all trying to address different legitimate worries, but never quite coalescing enough around any particular proposals to get them over the line.
So potentially you need a bit of coalition building here, saying, “Here’s a new policy that would help with three out of these 10 groups, so let’s talk about this for a bit. And this is good by all of our lights.” And also maybe the other seven don’t object to it; they all think that it’s, at worst, neutral. And then you can try to push that over the line politically and then move on to something that can address the other seven together.
One that came up in my interview with Ezra Klein was when I asked what’s something that could be put into the AI Bill of Rights, or some piece of legislation about consumer protection from AI right now, that could also be relevant to extinction risk. And the one that seems most plausible is interpretability: we would both really like to understand what these models are thinking now, in order to make sure that they don’t fail in bad ways and harm users or companies today, but improving the technology of doing that will probably help us with all kinds of other issues down the road.
Are there any others that stand out now as potentially being useful in a cross-cutting way now, or any other proposals down the line that might seem like they address multiple different concerns simultaneously?
Tantum Collins: I definitely agree on interpretability. I think you’re right, both that there is maybe tension between some of these proposals and communities at the moment that need not be as fierce as it is, but also that at some level there may be tradeoffs if you have a pot of money and it’s X number of dollars, and that needs to be split across a couple of different priorities. Obviously, at some level, a dollar that goes to one thing will come at the cost of another. I think that obviously one approach is just to expand the pie in some way. And so I think that increased public funding for the full gamut of alignment issues, ethics research, better benchmarking and so on is kind of a no-brainer. Obviously there’s such a thing as spending too much money on that, but I think we’re just so far away from that at the moment that you could go very far without it becoming a problem.
I think everyone can agree on the need to expand the pie. And then the question of how do you slice it up, that is an area where — to invoke the democratic principle — you might be able to agree on something procedurally, even if you don’t agree on it at the object level. So I think that’s an area where there is probably more legitimacy attached with saying everyone agreed that we wanted to expand the pie, and then we made allocation decisions based not just on some set of government technocrats, or on some monied interests that were lobbying, or on some very visible PR campaign — but on a process widely seen as legitimate. Maybe that’s something that involves a citizens’ assembly, like we were talking about earlier, that tries to gather public perceptions about which things they’re actually the most worried about.
Rob Wiblin: Yeah. One other suggestion I’ve heard for something that should be viewed as good by the lights of many of these different interest groups is cybersecurity. Cybersecurity looks really good from a national security point of view, because you really don’t want dangerous technology leaking out to terrorists, or to adversary states potentially. It’s really good from an extinction risk point of view. I would guess that most people either think that that’s really good or at worst neutral. So that seems like one where you could potentially build quite a broad coalition around requiring extremely high cybersecurity for frontier AI models potentially.
Tantum Collins: Yeah, I think that’s totally right, actually. Yeah, I agree with that. I guess there’s a long list of AI application areas that almost nobody would say are bad. So for instance, using AI to help develop cures for really dangerous diseases, using AI to improve cyberdefences, stuff like that. But I think that’s maybe a slightly different level of abstraction than the category of policies that we’re talking about.
Rob Wiblin: Well, I guess improving the methods that we have for keeping models on track, in terms of what goals they’re pursuing and why, as they become substantially more capable. That is kind of one of the main thrusts of the extinction-focused work, which I think at least NatSec people would also think is probably good, and I would think that AI ethics people would also think is probably good. I guess I don’t know of anyone who really objects to that line of research. At worst, they might think it’s a bit of a waste of resources or something, if they don’t think it’s promising.
Tantum Collins: Yeah, I think that’s right. Maybe another one that would be interesting is improvements to privacy-preserving machine learning infrastructure and techniques. These are techniques like differential privacy, homomorphic encryption, secure multiparty computation, and federated learning, which give you some level of privacy guarantee: you can train a model on data in ways that don’t have to give the model owner repeatable, interpretable access to the data, nor give the data owner repeatable, interpretable access to the model, but you still get the benefits of model improvement at the end of the day.
And historically, these have usually had a big additional cost, computationally. There are some interesting groups, most notably OpenMined, which is an open source community that is developing a library called PySyft that is trying to reduce the computational cost of those systems. There are a lot of areas where we would like to have the public benefits of stuff that is trained on sensitive data, but we don’t want to compromise individuals’ personal information.
Some of the most obvious ones here are medical use cases: it would be great to have systems that are trained on the distributed, let’s say, mammography data in order to improve tumour detection — but, completely understandably, people don’t want to hand their sensitive healthcare information over to for-profit companies. There are also a whole bunch of other applications, including actually model governance itself: How can you deploy systems that will assess the capabilities of certain models, or the size of training runs, or what have you, without getting any other creepy surveillance data from that stuff?
So there is a whole world of possibilities in this direction that could be helpful for developing models that would have to be trained on sensitive data and/or handling governance responsibilities that might otherwise bring surveillance implications that make us uncomfortable. I think that is a technical stack that is under development now, but where lots more work could be done. And the overwhelming majority of the use cases that it enables, at least the ones that I have come across, are quite positive. So that might be another area that there would be shared interest in developing.
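Since federated learning comes up here, a minimal sketch may help show the basic shape of the idea: each data holder (say, a hospital) trains locally and only shares parameter updates, which a central server averages. This is a toy illustration with made-up data and a plain linear model — it is not the PySyft API or any production privacy stack, and on its own it provides no formal privacy guarantee; techniques like differential privacy or secure aggregation would be layered on top.

```python
# Minimal sketch of federated averaging (FedAvg): clients train locally and
# share only model parameters, so raw data never leaves each site.
# Illustrative toy only, not a production or PySyft implementation.

import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One client's local training: a few gradient steps of linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(global_w: np.ndarray, client_data: list) -> np.ndarray:
    """Server step: average client updates, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())

# Example: three "hospitals", each holding private data drawn from y = 2x + noise.
rng = np.random.default_rng(0)
clients = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 1))
    y = 2 * X[:, 0] + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

w = np.zeros(1)
for _ in range(20):               # 20 communication rounds
    w = federated_average(w, clients)
print(w)  # approaches [2.0] without any client sharing its raw data
```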
Rob Wiblin: Maybe another possible option is something around immigration reform? I suppose not everyone would be in favour of this, but it’s pretty crazy if there are people who can do very useful work who are stuck in India and can’t get a visa to come to the United States to collaborate with the people they need to collaborate with.
Tantum Collins: Absolutely. I think that I put high-skilled immigration reform as one of the sort of no-brainer policies. And this is an area that I don’t claim any expertise in. I knew basically nothing about this before starting this job at OSTP, and then ended up working on a few issues related to this. So I know just enough to be dangerous.
But overall, it’s just astounding the degree to which the United States does not make use of this latent superpower, which is the fact that a huge number of people want to move there. And unfortunately, the immigration system is so baroque and limited and confusing that a very high proportion of people who would in so many ways contribute to the United States economically, culturally, and so on, are effectively prohibited from going there.
And this is something that, again, I can imagine conceivable reasons why someone would be opposed to it.
Rob Wiblin: It could speed things up.
Tantum Collins: Exactly. Will it accelerate capabilities research? So it’s not robust to every single worldview, but I think it is quite close to being a no-brainer policy. Not least because there’s this kind of meme of brain drain, but it turns out that the reality is actually quite complicated, and often goes the opposite way — which people will sometimes call “brain gain.” There are a whole bunch of different mechanisms, but you have knowledge remittances, actual remittances, all kinds of ongoing collaborations and so on that mean that quite often it’s the case that the country that people leave will often actually end up benefiting economically from those people’s departure because of the things that they subsequently bring back or send to people who are still in their home country.
Tension between those focused on x-risk and those focused on AI ethics [01:38:56]
Rob Wiblin: Let’s talk a bit about the relationship between people who are focused on extinction risk and people who are focused on AI ethics.
I brought this up in the interview with Ezra Klein, and I’ll just set it up a little bit again for people who didn’t hear that. That interview was quite brief, so we certainly didn’t get to explore it in full detail. But I said there that I actually have a rule that I don’t read people bickering online, almost especially not on Twitter, so I actually have not encountered any of the harsh words that I’ve heard have been exchanged between people who have these different focuses. But I don’t really understand why there would be much tension between these groups in principle. Because they’re working on different problems, but many people in the world are working on different problems — and people working on extinction risk or AI ethics or present-day AI concerns are not angry with people who are working to cure cancer because they’re working on a different problem; they’re just working on different things that all could be useful.
I haven’t looked into the AI ethics stuff in great detail, but my guess is that there’s just really legitimate worries there that make complete sense from any ordinary standpoint, and that government should be legislating in that area in order to try to address that. Or at least AI labs should be trying to address those concerns — just for all of the normal prosaic reasons that we try to make products better, and try to reduce bias and prejudice and so on.
The place where I could see tension between different groups is when group one has some policy proposal which group two thinks is actively harmful, or when there is some competition over a fixed pool of resources. I guess I’m not aware of cases of the former, where the extinction-focused community is pushing for some policy that the AI ethics community thinks is actively bad or vice versa. And you can imagine that now or in the future, there could be competition over grants and so on. But at the moment it seems like there’s just a lot of degrees of freedom on how much funding there might be — and if you can point to more different ways, more different problems related to AI, then you could just potentially get more funding to address all of these different concerns. If there was only one stream of concern, then I imagine that there would be less interest and less funding than there would otherwise be.
So I suppose I would beg people to ask this question at all points in time. This is particularly notable with these two groups right now, but also just with any other communities that you’re interfacing with: Are they proposing stuff that I think is merely useless at worst? In which case, it’s just like everything else that you’re not particularly passionate about. There’s no reason why you have to be talking with them about it, because presumably out in the world there’s many people doing stuff that you don’t love but you mostly just walk on by and don’t email them about it. Or are they proposing stuff or potentially are there policies that both of us would be in favour of? In which case they become allies, even if you have different empirical views or different views about what would be optimal.
Do you have a take on this overall situation?
Tantum Collins: I have a few low-confidence thoughts. One is that there are some areas where there is, I think, the perception of some finite resource — and maybe that’s money or maybe it’s attention. And I think there is an understandable concern on the AI ethics side that there is sometimes a totalising quality to the way that some people worry about existential risks. At its best, I think that x-risk concern is expressed in ways that are appropriately caveated and so on, and at its worst it can imply that nothing else matters because of running some set of hypothetical numbers. Personally, I’m a bit of a pluralist, and so I don’t think that everything comes down to utils. I think that the outlook of, “If you reduce existential risk by X percent then this so dwarfs every other concern” is something that I can see why that rubs people the wrong way.
A second thing that I think sometimes brings some of these views or these communities into conflict is the idea that there are some types of behaviour — whether that’s from labs or proposed policies — that could help. I’m in particular thinking of things that would have some security benefits that people who are concerned about x-risk value very highly, but that might come at the cost of other things that we value in a pluralistic society — for instance, openness and competition.
A lot of the policies we haven’t talked about yet — because so far we’ve been focusing on the no-brainers that almost everyone should get behind — are very tricky ones that you can see a case for and a case against, and often they pit these values against one another. If you’re really, really, really worried about existential risk, then it’s better to have fewer entities that are coordinating stuff, and to have those be fairly consolidated and work very closely with the government.
If you don’t take existential risk that seriously — and if instead, you are comparatively more worried about having a flourishing and open scientific ecosystem, making sure that small players can have access to cutting-edge models and capabilities and so on; and a lot of these things historically have correlated with the health of open and distributed societies — then those policies look really different.
I think that the question of how we grapple with these competing interests is a really difficult one. And I worry that, at its worst, the x-risk community — which broadly, I should say, I think does lots of excellent work, and has put its finger on very real concerns — but at its worst, there can be this sort of totalising attitude that maybe refuses to grapple with a different set of frameworks for assessing these issues. And I think that’s sometimes exacerbated by the fact that it is on average not a super-representative community, geographically or ethnically and what have you. I think that means that it’s easy to be blind to some of the things that other people, for good reason, are worried about.
That would be my very high-level framing of it. But the bottom line is that I very much agree with your sentiment that most of the conflict between these groups is counterproductive. And if we’re talking about the difference between pie splitting and pie expansion, there’s a huge amount of pie expansion and a whole bunch of policies that should be in the collective interest. And especially since I think the listenership here is probably a little bit more EA-skewed, I’d very much encourage people to engage with — this sounds so trite — but really to listen to some of the claims from the non-x-risk AI ethics community, because there is a lot of very valuable stuff there, and it’s just a different perspective on some of these issues.
Rob Wiblin: Yeah, on the non-representativeness, all of these groups seem just phenomenally unrepresentative of the global population as a whole. I mean, to start with, basically everyone has a degree or a higher degree. Everyone speaks English.
Tantum Collins: Yes, exactly.
Rob Wiblin: It’s actually extraordinary, once you think about it, that these things have such global implications, and yet among the people making them, the differences are so small. The differences between them, which seem large to them, are tiny on a global scale.
Tantum Collins: Yes, absolutely. But as is so often the case, those are the tyranny of small differences or whatever it is. I always found that the people that get the most on my nerves are the people that I’m worried I am. I think that it is often that there is some uncanny valley of people who are doing something that seems very similar to what you’re doing, but maybe it’s slightly different. Or you’re worried that’s how you yourself come across or something like that.
Rob Wiblin: On that point, there does just seem to be potentially quite a tough tradeoff between security and openness — a real tradeoff that’s going to make it very hard to satisfy groups who prioritise each of those things at once.
I mean, there’s going to be some win-wins. I’m sure there’ll be some cases, potentially cybersecurity would be one, where one group thinks that’s good and the other group thinks that’s fine or maybe good as well, for instance. But there will be some cases where what’s better on one view is worse on the other pretty often. It seems very hard, in deciding which way to go on that, to put aside the technical questions of how likely is misalignment, how likely is extinction on our current path? If you think that the odds of extinction from the status quo are 1 in 10,000, then it makes sense to really prioritise openness, and going quickly, and making sure that no individual group becomes too powerful. If you think it’s 1 in 2, then that seems crazy, and instead you should be treating this with extreme caution, and these other issues around competition law seem minor by comparison to 4 billion expected deaths.
I’m not sure. I just don’t know that there’s going to be any way of settling this, other than empirically investigating the question of how likely are we all to die, and getting at least to the same order of magnitude, roughly, of understanding how likely that is.
Tantum Collins: Yeah, I think that this is a kind of thing that crops up in lots of policy areas: things will be framed in some discrete way, when in reality they’re continuous and a matter of degree.
And so we can look at, for instance, a big debate in the US has to do with gun rights. It’s often framed as this question of liberty versus security. In reality, it’s clearly a spectrum: almost nobody thinks a private citizen should be able to buy a hydrogen bomb, and almost nobody thinks that nobody should be able to buy a kitchen knife. And somewhere in between that, when you press on it, these things do, like you were just describing, become empirical issues, right? How much harm is done by this thing versus how much… I mean, freedom is a little bit harder to quantify, but you can give people different scenarios in terms of how they’d feel about having access to this, or how much worse it would make their life if they couldn’t buy thing X. So almost inevitably, these are tradeoffs that exist along some kind of spectrum.
And I think there are similar questions here. The challenge here, as opposed to the issue around gun rights, is that there are reams of data on how many people guns kill — whereas when it comes to AI risk, there are all of these very difficult epistemic questions. Like what does it even mean when we put a probability on something that hasn’t happened yet? And that’s not to say that there is no rigour that can be brought to it, but obviously that makes it more difficult, and it makes it easier for people to never converge in their estimates.
Rob Wiblin: Yeah. For everyone who gets very het up about these issues: I’ve been using this technique where, if I read something that I find very frustrating, I think, “Would this make sense if I thought the risk of extinction due to AI was 1 in 10,000?” And then I go, “OK, yes, then it would make sense.” And the reason I find it very frustrating is that I think the risk is many orders of magnitude larger than that, and so it sounds absolutely batshit — absolutely insane.
And basically I imagine the exact same is true on the reverse, where people read something and think this is so oppressive, or, “This would involve changes to the law that are completely unacceptable to me.” Then they think, “Well, what if I thought the risk of me and my family being killed by this thing was 1 in 2? Then maybe I would be willing to accept that tradeoff.”
And I don’t think this is going to lead people to agree necessarily, but would at least lead people to understand roughly where other people are coming from.
Tantum Collins: Yeah, I think that’s right. I do like the idea of the Ideological Turing Test: before relegating an idea to the dustbin, one should be able to come up with “What is the strongest justification of this?” And I think often the difference does come down to the estimates that you put on different possible scenarios.
Maybe one other thing here is that there is this question: Holden has written this stuff on how likely we should think it is that this is the most important century. On the one hand, we have these fundamental reasons to think, yeah, there’s a lot of crazy stuff happening. On the other hand, the prior should be incredibly low, because loads of people historically thought they were in the most important century, and most of the time we look back and we think that thing actually really didn’t matter that much.
And I think similarly here, there are these sort of competing claims: On the one hand, we look at these curves of capability and it seems like there is crazy stuff on the horizon, and there are so many ways that it can go wrong — and for machine learning in particular, compared to lots of other technologies, we have so few of the types of guarantees that we want to have about performance. But on the other hand, if every time that people thought they were inventing the most important thing ever, we gave them free rein and said, “OK, we can put these pluralistic considerations aside because this matters so much,” then we’d probably be in a much worse world today than we are.
So yeah, it’s a really tricky tension, and I don’t think that there will be a magic bullet. I think it is a question of continuing to update based on evidence from the research frontier and really making ourselves stare these difficult tradeoffs in the face, and trying as much as possible to bring people together who have wildly different perspectives on the relative likelihood and importance of these issues.
Rob Wiblin: I have some bad news on that point. I’m not sure whether you saw it, but there’s some research that came out recently. I think it was one of Philip Tetlock’s groups; I actually can’t remember the name of the organisation. They brought together superforecasters — so people who have a good track record of forecasting things — and experts in artificial intelligence and extinction risk, and got them to talk for a long time to try to get on the same page about the risk of extinction. There was no convergence.
Tantum Collins: Really? Oh yeah, that’s not good.
Rob Wiblin: Yeah. I mean, something strange is going on, because I think that’s not typical. I think in most areas you do get convergence through conversation and sharing of information. It suggests to me that most of the work is being done by priors, and there is not shared empirical information that forces people to update in some particular direction — and that will just be a millstone around our necks until we can get sufficient empirical information that people with different initial positions can converge at least somewhat on their concerns.
Tantum Collins: I wonder if, in some ways, this may get back to some of these questions around standards and benchmarks. Obviously in the research community, there are, in fact, loads of well-developed benchmarks. But in particular, I don’t think that we yet have a widely agreed-upon set of safety and alignment benchmarks that we think are anywhere close to comprehensive. I know there are groups like ARC that are looking into improving the way that we think about accreditation in that space.
Actually, one of the people that I worked with on my team at DeepMind is a certified superforecaster, and one of the projects we worked on was building an internal prediction competition. I enjoyed learning from him quite a bit about how these things usually work. And it seems like they place quite a lot of emphasis on having very interpretable, agreed-upon metrics to measure whatever the thing is in question. And I’m not by any means convinced that this would resolve the lack of convergence in this case, but I can imagine that getting a better set of safety benchmarks — ones you can use all the way from toy problems up to really big and powerful systems, that everyone more or less agrees on and understands how they sit on this spectrum — would maybe help resolve some of this. Maybe reduce the deference to priors a bit.
Communicating with policymakers [01:54:22]
Rob Wiblin: Yeah. What’s one thing policymakers should understand about current AI technology that many of them don’t?
Tantum Collins: One is how quickly research paradigms at the frontier change. A few years ago, almost everything was focused on deep RL in simulated environments, and that had worked really well on a set of problems. And now it’s all about LLMs or multimodal foundation models that are more or less just like souped-up versions of a transformer — and this happens to be the paradigm that has radically expanded public awareness of AI.
I think as a result, for a lot of people in the general public, including policymakers, that’s all there is. Like, AI equals machine learning, equals foundation models, equals LLMs. Obviously, in fact, that’s a concentric set of things. And moreover, the track record is that this changes very rapidly. I worry that at the moment, if you have policies that assume that a foundation model is synonymous with all of AI, you can end up on the back foot if you pass something — and then in two years’ time, we’re in a different research paradigm; even a different research paradigm within machine learning, akin to the transition from deep RL to LLMs.
So that would be one thing: becoming just a bit more familiar with the historical trends in this space.
Rob Wiblin: Do you have any advice for technical experts on how to communicate better with policymakers? If you’re someone with technical expertise in AI, and you’re called by some group to come to DC and help them understand things and come up with ideas, how should you be thinking about how to communicate?
Tantum Collins: This is a great question. It’s actually one area that I think LLMs could be very valuable, to go back to this parallel between translation across actual languages and translation across academic or professional vernaculars. I think that we could save a lot of time by fine-tuning systems to do better mappings of “explain this technical AI concept to someone who… is a trained lawyer.” And often then you can actually find that there are sort of these weird overlaps. Not necessarily full isomorphisms, but a lot of the conceptual tooling that people have in really different domains accomplishes similar things, and can be repurposed to explain something in an area that they’re not too familiar with. So this is an area where I think that there is a lot of cool AI-driven work that can be done.
In terms of practical advice to people trying to explain things, this is tricky, because there are many ways in which you want to frame things differently. I’m trying to think of a set of principles that capture these, because a lot of it is just very specific word choice.
Maybe a few off the top of my head would be: One, just read political news and some policy documents to get a feel for how things are typically described; that should be a decent start. Two, in policy space you obviously want to reduce the use of technical language, but also the sort of philosophical abstraction that can be helpful in a lot of other domains. The more that things can be grounded in concrete concepts, and in incentives that will be familiar to people, the better. In the policy space, a lot of that has to do with thinking about which domestic and foreign policy considerations are relevant to this.
I mean, obviously it depends on the group — like, is it a group of senators or people at OSTP or something — but broadly speaking, if you read global news, you’ll get a sense of what people care about. A lot of people are really worried about competition with China, for better or worse. So to ground this, one example here would be: to the extent that the framing of China competition is inevitable, one can harness that to make the case that, for instance, leading in AI safety is an area that could be excellent for the scientific prestige of a country, right? And it could improve the brand of a place where things are done safely and reliably, and where you can trust services and so on. You can take something that otherwise a policymaker might dismiss as heavy techno-utopianism, and if you are willing to cheapen yourself a little bit in terms of how to sell it, you can get more attention.
Obviously this is a sliding scale, and you don’t want to take it too far. But I think a lot can be accomplished by thinking about what the local political incentives are that people have.
Rob Wiblin: Yeah, I suppose we might have an AI alignment race. And the UK, where there’s £100 million of funding, it’s going to win the AI alignment race, unless…
Tantum Collins: Win the alignment race, exactly.
Is AI going to transform the labour market in the next few years? [01:58:51]
Rob Wiblin: Is AI going to transform the labour market in the next few years? Are lots of people going to lose their jobs?
I find myself going back and forth on this. Yesterday I was reading about this new extension to GPT-4 that allows it to do coding and data analysis. You just say, “I’d like to identify the causal effect of this on that.” And you upload literally a dataset in a comma-separated values file and it can do the stuff. I’m just like, how can people not lose their jobs because of this? It seems amazing.
And then at least I haven’t heard about lots of people losing their jobs this year so far. It’s been less than one might have thought when GPT-3 or when ChatGPT first came out. And you just think about all the bottlenecks in your own organisation to using new technology and replacing people with bots and how hard that might be, and you could think, well, this could actually drag out quite a long time. Models might be capable of doing things long before they’re actually implemented in providing services directly.
Did you have any take on this?
Tantum Collins: I feel like 25% of what I’ve said in this interview has been disclaimers. But I’ll throw in another disclaimer before saying this, which is that the more time I’ve spent in the AI space, the more I’ve learned not to trust my own predictions — or really, almost anyone’s predictions — about where things go. I mean, at some level, yeah, I will sort of defer to the superforecasters, but it’s just so hard to say how quickly things will progress. And anecdotally, it seems to me as though, on average, experts in the field are not significantly better at predicting these things than a randomly selected person. So of course I have some very broad distribution of instincts here, but I won’t with confidence make any prediction about what unemployment will look like induced by AI in five years’ time or something.
But one thing I will note — I’m sure you’ve probably read this series of papers — is that Erik Brynjolfsson has written several papers that in turn build on this famous paper from the ’90s, “The Dynamo and the Computer.” It makes this very interesting observation, grounded in the reality that often you’ll have a capability, and it’ll take a long time before it “shows up in the productivity statistics.”
And the reason for this, the authors argue, has historically been the role of what they call “complementary intangibles.” That is, it’s not just about the thing itself: much as AI doesn’t neatly slot into the taxonomy of government things, it also doesn’t neatly slot into the taxonomy of well-formed work tasks and processes that we use at the moment. And as a result, you have to redesign the whole physical and human infrastructure within which things get done.
The classic example here that’s the most interpretable is around electrification: in order to make the most of electrical power, it wasn’t just a question of swapping that in for a preexisting component, but you actually ended up totally redesigning factories. And most of the gains came from the fact, for instance, that you could now modularise production so you didn’t have to shut everything down when one thing broke: you could build them flat, you could have skylights because you didn’t have all this equipment on the ceilings, and that reduced the rate of accidents and so on.
But all of that requires experimentation and capital expenditure. And so it can often take many decades between the invention of the thing itself — whether that’s the steam engine or the dynamo or personal computing — and the rise in productivity that results from it. Even if at the moment the thing is invented, people can, in an abstract way, identify the fact that in principle, this can do all of this stuff that we find useful, and maybe that we right now are paid to do ourselves. But that doesn’t guarantee that we’ll figure out how to integrate it in a way that is reliable and easy to use. Often, whether it’s designing an interface or restructuring a whole workflow, that can take a long time.
Rob Wiblin: Yeah, I was commenting on this on Slack the other day, and I think I added the bold prediction that I didn’t think we’d see increases in unemployment in the next 12 months — but I didn’t know after that, because I do feel so at sea, it’s so hard to say.
Tantum Collins: I try to bound things. And I’ve made and lost so many bets where I’ve had misplaced confidence in either how quickly or how slowly something will happen. I’ve been waiting for self-driving cars now for so long. So long. I don’t have a driver’s licence. This wasn’t a principled thing; it was just I never got around to it. And then it became this principled thing where I said I’m waiting for self-driving cars. And I felt so smug about it in the late 2010s, because when I began saying it, people were like, “That’s crazy.” And then there was this period of peak hype from like 2015 to 2018 where everyone’s like, “I think you were right about the self-driving cars.” And now I look like an idiot again. So anyway, one of the many things I’ve been wrong about, about the rate at which technology will progress.
And of course that highlights another thing, which is that things might be technically capable, but it still takes a long time for us to trust them. So even in the self-driving car case, even if you had something tomorrow that you could show, across a representative range of environments, had a lower fatality rate than a human driver, most people would not be comfortable with it until that was reduced by an order of magnitude or something like that. So that’s another factor as well.
Rob Wiblin: Yeah. I’ve always been very bullish on the idea that the march of technology, if it continues, will ultimately result in humans not being able to do useful work, because we’ll invent a cheaper, more reliable way of performing every task that we would like to be performed. And so in the very long run, I have reasonable confidence about what the outcome would be. And some people do disagree with that long-term outcome, but I think they’re just wrong on the merits of it. I think if we just continued with the rate of technological progress that we have now forever, then there’s just no way that in one million AD, humans would still, in their current form, be doing anything very much in the economy.
Tantum Collins: Or you have some Baumol effect thing, where people get paid a totally preposterous amount of money to do something that you 100% care about just because it’s being done by a human. The classic Baumol effect example, I think, is the string quartet, where a string quartet is very expensive relative to what things looked like 200 years ago, because everything else has gotten cheaper. And so the opportunity cost of people’s time is way higher. So maybe we’ll have things where we just feel good about the fact that a human is doing that, and so we’ll pay them a billion dollars because everything else is free and produced by AI.
Rob Wiblin: I think I’m sceptical of that one as well, in the very long run. But that’s a trickier argument. But anyway, having this belief about the long-run outcome says nothing about what’s going to happen in 2024 or 2025, really, because it could just turn out that in most of the industries where this stuff can get rolled out, in the short term, the complementarities outweigh the substitutability. And so, in fact, maybe you just get an expansion of the amount of output: maybe the cost goes down, people buy more of it, and then just more people go into it. It’s completely imaginable.
Tantum Collins: Totally. I think in so many of these things it gets so thorny, because so many of these phenomena are interdependent. You can be right about 99% of those relationships, and if you get one thing sequentially wrong, your predictions may end up being 100% incorrect because of exactly that kind of phenomenon, where these things are tethered in ways that are just really difficult to predict.
Rob Wiblin: Yeah, I’ll be very curious to see. I suppose the groups that have seemed like they’re most on the chopping block in the near term were potentially journalists doing low-quality reporting or stuff that doesn’t require on-the-ground —
Tantum Collins: Synthesis of readily available stuff.
Rob Wiblin: Stuff that’s on the internet, basically. Yeah. Then there were artists or people doing stuff that can now be produced by Midjourney. Although it wouldn’t shock me there. You could imagine people might just consume an awful lot more imagery than they did before because it now costs one-hundredth as much, and you just have lots of experts producing tonnes of images on Midjourney and being extremely good at doing that.
Tantum Collins: Also, art is an interesting one from a historical perspective. I’ve always been fascinated by this, because my mother is an art historian, and the sums of money that people will pay for the original of something compared to a visually indiscernible copy are sometimes many orders of magnitude [higher]. Someone will pay $500 million for a painting, where for an exact replica they would pay $5,000. I’m sure that for many types of image production what you’ve described will happen, but I would not be surprised if in some ways the art market is quite resilient.
Rob Wiblin: Yeah, another group was kind of low-skilled legal work, so legal adjuncts or legal assistants. But even there, it wouldn’t shock me if they’re not replaced anytime soon. I suppose if there’s any one group that I trust to stop the march of AI in their industry, it’s lawyers.
Tantum Collins: Exactly. There is some amount of sort of capture here, right? It’s not just about what is technically feasible, but how much sway do different groups have with legislators? Lawyers have quite a lot.
Rob Wiblin: Yeah. I suppose there’s coders as well, because it seems like software engineering, maybe a lot of that stuff can be automated now. But it also wouldn’t shock me if we decide to make a lot more software and a lot more bespoke software than we did before.
Tantum Collins: A whole bunch of no-code generation, right? Where any time you want to do something that previously you would take two hours to do yourself, now you can, without writing a single line, spin up an app that does it for you, like courtesy of an AlphaCode-type system.
Rob Wiblin: Exactly. Because we’ve been living in a world where we just assume that writing code in order to accomplish a task faster is just never going to be sensible unless you’re doing it at a big scale, whereas now it could be the reverse. I guess I’ll be very interested to see the labour statistics on that over the next 24 months.
Is AI policy going to become a partisan political issue? [02:08:10]
Rob Wiblin: An audience member sent in this question for you: Do you think that AI policy is going to become a partisan political issue in the US, and potentially also the UK?
My gut intuition says yes, because everything gets polarised. But the interesting thing is I have no idea in which direction, because it’s so cross-cutting — which speaks to the absurdity of the idea of becoming polarised. But what do you reckon?
Tantum Collins: I would love to say no, because intrinsically there is no reason why it seems like it should be. Unfortunately, the historical track record is… I mean, who would have thought that vaccines would become a partisan issue? But they did in the US, at least.
There is maybe one reason, a somewhat depressing reason actually, to think that it won’t, which is that one of the only issues that enjoys bipartisan consensus in the US at the moment is anything involving competition with China. And so to the extent that people see this stuff as being a way to win, I would expect that to be motivating. And again, maybe there is a way to sort of spin things that we wouldn’t necessarily at first blush think of as adversarial, as being a competition in a positive way. Like you were saying earlier, “How can we win the safety race?” or something like that. So yeah, that would be one reason to think that it will not become partisan.
Rob Wiblin: Yeah, I’ve heard that many issues are presented in a partisan way in the media and by politicians on social media, but that behind the scenes in Congress there’s a lot of very normal negotiation about them, and actually a surprising amount of stuff gets done — that a lot of the partisan stuff is just for the cameras, basically. Is that right?
Tantum Collins: You know, the funny thing is I actually know almost nothing outside of news coverage about Congress. I didn’t really know what to expect when I took this White House job, like how much we’d work with the legislature. And there are definitely some teams, like Legislative Affairs teams and so on at the White House, that do work quite closely with Congress. But I didn’t do any congressional briefings. I don’t really have any insight into what things look like.
One issue area that is interesting through this light is high-skilled immigration — where my sense is that high-skilled immigration itself is an area where there is bipartisan consensus. But the problem is that both sides, because they know the other side also wants it, try to use it as a bargaining chip to get what they want, at the other end of the immigration spectrum in particular. So people on the left say, “We’ll approve high-skilled immigration reform if we also let in more asylum seekers.” And people on the right say, “We’ll approve it if you build a wall” or something like that.
Rob Wiblin: Right, yeah. My guess is AI policy won’t become partisan in the UK, just because many things are not partisan.
Tantum Collins: The US is, I think, unusually polarised.
The value of political philosophy [02:10:53]
Rob Wiblin: It sounds like you’ve engaged a bunch with political philosophy — maybe more than I imagine most nuts-and-bolts policymakers have. Are you taking a very academic political philosophy approach to all of this, or is it more practical?
Tantum Collins: I would say definitely in terms of what my day to day was like when I was at OSTP, it was not deeply philosophical — it was very operational. The stuff that I’ve been thinking about since then has been a bit more abstract. In particular, I’ve really enjoyed working with this new centre at Harvard, GETTING-Plurality, which is led by Danielle Allen, who’s a political philosopher who’s done a bunch of very cool work.
Coming back to these questions around how we can pair increased state capacity with the use of technology to strengthen popular sovereignty over decision making: I think that this raises a series of philosophical questions, some of which are as old as time, but maybe it contextualises them in a new way.
And in a nutshell, the way I think about this is the design space is now much, much, much broader, conceivably. The types of institutions that we could build vary much more widely than the possibilities available to people in, let’s say, the late 1700s. And that, in some ways, removes a lot of constraints. That removal of those constraints means that we have to make, in a more conscious and deliberate way, decisions that we didn’t have to face before.
And some of these get to the core of some major issues in political philosophy. So just to list a few, one would be: What is the relationship between interpretability and performance? There’s this whole interesting and emerging space of research that says, if we take an information-theoretic approach to thinking about government systems, ballots are incredibly lossy and compressive, right? You get like one or two bits of information, basically, depending on how many choices you have on it. What if, instead, you could just articulate all of your political hopes and dreams to a system that could engage with you in natural language, help you paint these pictures of what life would look like if policy A was passed versus policy B, help you ground this in the things that you actually really understand deeply and care about?
In some ways, this sounds great, because it means that we can sort of have our cake and eat it too. Often I think of machine learning as being “scalable nuance.” We can take things that previously you could only do in a local way: in a community of 10 people, you can really hash things out, but as we build society up, we’ve historically had to accept way lossier regimes for aggregating information. And with ML, maybe you can have scale and nuance. Everyone can engage with this system, and that can really extract your preferences in a super fine-grained way, and then you can apply over that the social choice function of your choosing, or social welfare function. So you can say we want something that’s maximin, or maximises average utility or something.
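To make the “social choice function of your choosing” idea concrete, here is a minimal, purely illustrative sketch. It assumes, hypothetically, that some preference-extraction system has already turned each person’s stated preferences into a utility score per candidate policy; the numbers are invented, and the only point is that different welfare functions can pick different winners from the same data.

```python
# Purely illustrative: aggregate hypothetical extracted preferences with two
# different social welfare functions. All utility numbers are invented.
import numpy as np

# Rows = people, columns = candidate policies A and B. Assume (a big assumption)
# that an ML system has already produced these per-person utility scores.
utilities = np.array([
    [0.9, 0.5],
    [0.9, 0.5],
    [0.0, 0.5],
])
policies = ["A", "B"]

average = utilities.mean(axis=0)    # utilitarian: maximise the average utility
worst_off = utilities.min(axis=0)   # maximin / Rawlsian: maximise the worst-off

print("Average-utility winner:", policies[int(average.argmax())])    # -> A
print("Maximin winner:        ", policies[int(worst_off.argmax())])  # -> B
```

Here the utilitarian rule picks the policy most people love even though it leaves one person with nothing, while maximin picks the policy that protects the worst-off person; which aggregation rule to apply is exactly the kind of choice the expanded design space forces us to make explicitly.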
Even if you manage to avoid all of the predictable technical pitfalls here — What if these systems end up not being aligned? What if they end up being racist? What if they end up sort of actually optimising for the inverse of what you told them to because you made some error in the code? — even if it works perfectly, we face this question of how much do we care about the interpretability of the system versus the performance that it instantiates?
And on the interpretability side, ballots are incredibly straightforward: everyone can understand the idea that the person or the policy that wins has the most Xs next to it. And even something like ranked-choice voting — which is, in the scheme of things, relatively interpretable — can, understandably, throw people for a loop. If we had a blank slate and we were going to Mars, would we want something where you can just talk to it? In some ways, you could end up in equilibria that — utility-wise, or according to whatever function you choose — are way better. On the other hand, if people deny election results now, of course it’s way easier for them to say, “The system didn’t listen to me.”
Rob Wiblin: Well, who makes the system?
Tantum Collins: And who validates it, right?
Rob Wiblin: Because couldn’t someone just switch it with a machine that listens to you at great length and talks at great length, and then just always spits out outcome X no matter what? How would you know?
Tantum Collins: In some ways, of course, this is exactly what a lot of the conspiracy theories around the 2020 election said, right? “Oh sure, in this abstract way, we understand what the mechanism is of adding up votes — but how do we know that what was in this Dominion voting machine was actually doing that?”
Rob Wiblin: Which actually is a good reason to not have electronic voting machines either. I don’t necessarily trust them completely.
Tantum Collins: Sure, and Bruce Schneier has done a bunch of good writing on the reasons not to trust electronic voting. So again, I’m kind of assuming away all of these technical implementation things here — I’m sure that any cybersecurity researcher who’s listening to this is tearing their hair out. But even with manual counting, you can say we don’t know what happens in the room once we give all these things to these people. Maybe the room of counters has been captured by special interests in some way, maybe they burned a bunch of ballots, et cetera. At some level we’ll always have this. But this is certainly more extreme the less interpretable the system is.
And on the one hand, there’s reason to be pessimistic, because look at how much doubt is already cast on these things. On the other hand, there’s reason to be optimistic, because in all kinds of other domains we rely, all the time, on things that almost none of us has any idea how they work, and we don’t really question it. We get on planes, and we buy stuff where the supply chains are obscenely complicated, and we mostly trust social systems and legal systems — no one’s read the entire body of law, and so on and so forth. There are all kinds of questions about what it would mean to trust a system like this.
Maybe I’ll give one or two other of these, because I think that there are loads that are interesting. One would be, there’s this big question in political philosophy where, to a great extent, existing legal regimes implicitly end up honouring the wishes of the dead rather than the living — because the body of law is huge, and the throughput of a legislature is way too narrow to reconsider things, given the scope that we want the state to have. And so historically, you’ve had this tradeoff where either you can have a state with a remit that is representative of what people want it to cover — and you have to rely on really old stuff — or you can have some kind of minarchy where the state only has three laws and those three laws get repassed every year. And at various times there have been almost joking proposals, but some that are serious, to have laws expire. But again, the problem is that then you run into this throughput problem.
So now, let’s say instead that we have these magical AI systems that can take all of our preferences, and are tracking who’s alive and who’s dead, and who’s moved somewhere, and that regularly update all of these clauses. So it’s exactly representative of the current citizenry — which raises all kinds of other questions about who actually gets to vote. For instance, now do children get to vote, because they can communicate with this AI system and they don’t really need to know that much about policy? Or do we think it’s still a rite of passage that it’s only once you become 18? And so on, non-citizens, and there’s all kinds of ways that AI could broaden this stuff in general — animals, inanimate objects, future generations, and so on.
But one of many questions this raises is: If we could now, would we want this to be only representative of people who are breathing right now, for instance? You know, people have historically made arguments in favour of honouring the wishes of the dead for all kinds of reasons that don’t have to do with these practical considerations. But we’ve never really had to seriously take this into account, because it wasn’t actually feasible to build something that [updated laws in a real-time basis to represent the wishes of only currently living constituents]. Now, as the design space expands, it’s kind of like everything has to become deliberate. And that might force upon us all kinds of questions about what equality means, what representation means, in ways that we haven’t had to confront before.
Rob Wiblin: Yeah. I mean, there’s one view on politics which is very focused on power and who has control and who’s managing to get their way over others. I guess normally I’m somewhat more on the side of thinking that the main problem isn’t people being selfish or some people having power over others: it’s that we don’t know how to fix the problems that we have. Although, of course, both of these are issues. But talking about just building a machine that would talk to people and then spit out an answer of what the law should be, definitely alarms are going off in my head, saying, “It’s not just a technical problem!”
Tantum Collins: Completely. Yeah. And this is exactly why I find this stuff interesting. My inner techno-optimist wants to say that of course we can reduce democracy to this very elegant information-theoretic framing, and now you can have a less compressive system. And then my inner pragmatist — who’s seen things in the world, and been immensely frustrated, especially in terms of bridging the policy and technical spaces by people who take the software engineer mindset to all of these complex, knotty social problems — my inner pragmatist and like student of the humanities wants to say, “Oh my god, that’s horrific. You’re missing all of these things!”
And so the work that I was planning on doing this coming year was trying to reconcile exactly those two camps of things. If we start with this premise that there are some things that we could all agree would be very, very bad, where you get like autocratic lock-in; there are some ways that maybe we can create new methods through which we could instil some democratic virtues. But instead of just optimising for that blindly… There are all kinds of historical pitfalls, where techno-optimists have said, “This is great because it makes information sharing easier, and how could that be bad?” And then it turns out that the world is more complicated.
So the thing that I’m really interested in is: How can we take a look at these technical considerations and these philosophical considerations as well as practical social science and historical observations, and say, if you did remove these constraints, would it be good? If you could in some way optimise for this idea that we can capture people’s preferences more fully, is that actually what we want? Almost certainly not. Almost certainly that is a reductive and inaccurate depiction of democracy. But if we have that tool available, do we want to use it not at all? Or do we want to use it a bit and maybe do some experimentation and maybe supplement it with other things?
There’s a lot of interesting stuff here from aesthetics: you look at beautiful buildings where there’s an old building with a new addition — like the British Museum or the Reichstag or something like that. Maybe there’s something like that that we want to realise in terms of our approach to institutions, where there are some things that function the way that they’ve always functioned, because tradition is a good way to avoid having things completely collapse, but there are some new tools that we want to introduce.
And so there’s a thicket of really messy issues and ways that things could go wrong. And the good news is that historically there’s loads of great political philosophy work that has unpacked these. But there are now new questions around how we apply that in light of new and emerging technical capabilities. That’s the space of things that I find most interesting.
Tantum’s work at DeepMind [02:21:20]
Rob Wiblin: Let’s push on to talk a little bit about your own career history, and what advice you might have for listeners who want to go out there and try to do something useful in this general area. What’s something cool that you did while you were at DeepMind?
Tantum Collins: I had a real adventure when I was at DeepMind, because I actually showed up having almost no knowledge of anything about AI. And I cycled through a number of different roles when I was there, and by the time I left, I was very in the weeds because I took this research scientist position.
So there were loads of things that I enjoyed working on. One in particular was my final stint at DeepMind. I was the research lead for this team, Meta-Research, that thought about how we can use machine learning, among other tools, to generate insights that help guide managerial and strategic decision making. So a lot of that was doing research using bibliometric data and metadata to try to get a sense of: What are the subdomains of machine learning research that are rising and falling in importance? What are things that could be especially well suited to DeepMind writ large, or to specific individuals and teams within DeepMind? What are specific geographies where more talent seems to be graduating from universities? Which conferences seem to be producing stuff that’s in line with our interests? How do capabilities stack up across countries and across different labs?
And then there was also this internal managerial-facing component: How can we improve the matching of people with one another and with projects? Because once a lab exceeds 1,000 people, that’s very difficult to do in this high-dimensional, intuitive way that you can when you have a few dozen people. And so especially in an area like AI, where a huge number of papers are pushed to arXiv every week — way too many to navigate — tools like this can be very helpful.
For instance, one person on the team, Adam Liska — who’s now running a very cool startup called Glyphic — created this tool, this paper recommender, that essentially used data from an internal project management system, so that you could get a really good, fine-grained, up-to-date sense of what areas people were working on and interested in. And then essentially converted that to this embedding space and used that to find nearest neighbours based on recently released arXiv papers, to say which of these papers are relevant to specific people and specific projects. It was also really helpful for suggesting which workshops people should go to at conferences and so on. We had a lunch recommender tool that suggested people that you hadn’t yet worked with, whose interests were suitably close to yours that a conversation might produce new insights.
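As a rough sketch of the general pattern being described here — embed descriptions of what people are working on and new paper abstracts into a shared vector space, then recommend nearest neighbours — and not of DeepMind’s actual tool: the example below uses TF-IDF vectors as a stand-in for whatever learned embedding a real system would use, and all project descriptions and abstracts are made up.

```python
# Minimal sketch of an embedding-based paper recommender. TF-IDF is a stand-in
# for a learned embedding; the data below is entirely invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical project descriptions pulled from an internal tracking tool.
projects = {
    "alice": "reinforcement learning for robotic manipulation",
    "bob": "scaling laws for large language models",
}
# Hypothetical abstracts of newly released papers.
new_papers = [
    "We study sample-efficient reinforcement learning for dexterous robot hands.",
    "An empirical analysis of compute-optimal training for transformer language models.",
]

vectoriser = TfidfVectorizer().fit(list(projects.values()) + new_papers)
project_vecs = vectoriser.transform(projects.values())
paper_vecs = vectoriser.transform(new_papers)

similarity = cosine_similarity(project_vecs, paper_vecs)  # people x papers
for person, scores in zip(projects, similarity):
    best = scores.argmax()
    print(f"{person}: recommend paper {best} (similarity {scores[best]:.2f})")
```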
So we were interested in a lot of questions around how you can frame the function of organisations, which is also something I worked on in a previous life before I began doing AI stuff: How can you frame those things as machine learning problems? And it turns out a lot of them are very, very well suited to things that at this point are totally commoditised machine learning capabilities, like recommendation engines and so on. That was something I especially enjoyed.
Rob Wiblin: Yeah. Did it work? Intuitively, it sounds like a case where you might bring some very advanced technology to a problem and then find that it just doesn’t super help in trying to organise your work exactly.
Tantum Collins: The paper recommender got very good reviews. Well, I should step back. Obviously, as with deploying anything in the wild, it’s impossible to truly know the counterfactual. And unlike deploying a product that gets used by millions of people, of course we’re not going to be able to do A/B testing at the scale that would be required to say, in an ironclad way, 100% this improved efficiency by 6%, or something like that.
The paper recommender and the conference recommender tool people found very useful, because this problem of the information overload of stuff out there was very visceral, and anyone who’s in the AI research space will be familiar with this. So both anecdotally people said very positive things, and just the click-through rate of people subsequently reading the papers that were surfaced to them in this interface was very high.
The lunch pairing tool was something where it’s so hard to measure this stuff, because research is such a contingent activity. And this was also a program that we ran for a while, but then COVID happened, and so we took it down. So that was a tricky one to measure. Anecdotally, people said that they had a great time, but maybe that’s just because people like having lunch with each other. So yeah, difficult to say.
Rob Wiblin: You mentioned potentially using AI systems to help with task allocation. So you’re saying, as an organisation gets very big, that a manager who has some project idea doesn’t necessarily know who in the organisation is best positioned to do the thing? When you’re a team of three, then you really know all of your colleagues extremely well and you have a great sense of who’s available and who would do the best job out of the three of you. But once you’re a team of 1,000, not so much.
What’s the vision for how that would work in practice? I suppose maybe I’m really stuck in the LLM mindset, but I’m imagining an LLM that reads every doc that anyone in the organisation produces, and everything that they write on Slack, and everything in their email — and builds a model of each person and their interests and their availability and so on, possibly even of who they would work best with. And then you write in your thing, “Here’s the project brief, who do you think in the organisation should do it?” and it might be able to spit it back. Is that kind of the vision for how this would work?
Tantum Collins: Yeah. So there are lots of different ways that something like this could manifest. Some would be totally hellish, and bring all of the dehumanisation that I think we’ve seen in a lot of the gig economy stuff; and some of it would be really wonderful, and you could avoid the bureaucracy that people often find really oppressive.
So if we think at the highest level, a lot of what organisations are doing is indeed matching people with projects, information, resourcing, and one another. And in any small group, we tend to think that’s done pretty well from a precision-and-recall perspective. As groups grow bigger, precision and recall really drop off. So even organisations that we think of as being incredibly effective, if you talk to people, they’ll say, “I spend a tonne of time reading things and going to meetings and so on that are irrelevant to me. And also I feel like often I’ve missed the memo on things that I would like to be doing and that would be relevant to me.” So clearly we’re very far away from optimal matching.
Because now a lot of stuff is tracked, let’s say project management tools can often get quite a deep sense of the work that you’re doing. There are loads of ways that one could imagine improving that matching accuracy. To give a couple of specific examples, obviously this depends on the space in which one is working, but in research, and especially in tech companies, a lot of your work involves writing code and then there’s this whole code review process. So who is best suited to review your code or debug a specific problem that you’ve come across, or help you spitball new ideas for something? That’s exactly the kind of thing that you could represent pretty adequately with the kinds of tools that have been used all over the place, in everything from Spotify recommendations to dating apps and stuff like that.
There are all kinds of ways that this can go wrong and risks that people run into if they take too naive an approach to matching. So for instance, it’s not always the case that the best match is the closest match. If you think about what’s the best paper for you to read, there’s probably something that’s too close, because it’s almost synonymous with something that you yourself have produced, and then there’s something that’s so far away that it won’t make any sense. Then there’s some Goldilocks zone in this high-dimensional semantic space within which you will be pushed and inspired, but you’ll also kind of understand what’s going on. Likewise with collaborators and so on. I think oftentimes — as we’ve learned from, for instance, information bubbles and weird YouTube playlists and stuff like that — if these things are just maximising for click-through rate, you can end up getting sucked down a really unproductive and unfulfilling rabbit hole.
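A tiny sketch of that Goldilocks-zone idea, with invented thresholds and made-up embeddings: rather than returning the single nearest neighbour, you keep only candidates whose similarity to the query falls inside a band that excludes both near-duplicates and unrelated items.

```python
# Sketch of "Goldilocks zone" matching: keep candidates that are close enough
# to be relevant but not so close as to be redundant. Thresholds are invented.
import numpy as np

def goldilocks_matches(query_vec, candidate_vecs, low=0.3, high=0.85):
    """Return candidate indices whose cosine similarity to the query lies
    strictly between `low` and `high`, sorted from most to least similar."""
    q = query_vec / np.linalg.norm(query_vec)
    c = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    sims = c @ q
    keep = np.where((sims > low) & (sims < high))[0]
    return keep[np.argsort(-sims[keep])]

# Toy example with made-up 3-d embeddings.
query = np.array([1.0, 0.0, 0.0])
candidates = np.array([
    [1.0, 0.05, 0.0],   # near-duplicate: excluded as "too close"
    [0.7, 0.7, 0.0],    # in the zone
    [0.0, 0.0, 1.0],    # unrelated: excluded as "too far"
])
print(goldilocks_matches(query, candidates))  # -> [1]
```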
But you can imagine that if you represented this stuff sufficiently richly, and if you thought really hard about the metrics for which you actually want to optimise, and if you had a number of points at which users could provide feedback about how productive they were, how satisfied they felt and so on — in addition to looking at more objective standards, like how many lines of code are you writing, how well does your code run, what are your performance reviews like — that’s the kind of thing that you could imagine could inform… Let me put it this way: I would be shocked if in like 2035, we look at how organisations are functioning and the most successful organisations look anything like they do today — because I think the tooling we will have available will just radically expand the space of possibilities.
And surely there are some subsets of that space that are just net better. There are lots of pitfalls along the way, and there are lots of ways that this stuff could be dystopian. If it’s done carefully, I think it could be great.
Rob Wiblin: Yeah. What’s the hell vision of this? Everyone is the AI’s bitch now? Just “Do this random task”?
Tantum Collins: The hell vision is some combination of the dehumanisation of Taylorism combined with losing any sense of solidarity because you’re working with a different person every day.
To critique the argument I just made: someone could have painted a similarly rosy picture of gig work 10 or 15 years ago, saying, “At the moment you have this really slow bureaucracy that assigns people to things and it’s inefficient. And all of this middle management function isn’t accomplishing that much. Wouldn’t it be great if you had the flexibility of being able to say, ‘I’m going to drive people, deliver things, order things, get a ride, et cetera, whenever I want,’ and you can set your rates and so on?” And in some ways, of course, that did facilitate new forms of flexibility, but it also has had all kinds of second- and third-order consequences that have been quite harmful for the people involved, especially on the supplying side. So that manifests, I think, what some of these problems could look like.
Rob Wiblin: Yeah. In principle, it seems like if the firm had the right incentives, if they were motivated to be concerned about the wellbeing of staff and how much they enjoyed the work or how fulfilling they found the work, if only because they’re worried about staff retention, then it seems like this stuff should be quite straightforward to include in the model. You would learn that if you just place someone with a completely new person, then you tend to get much worse feedback from them afterwards on how good that matching was. But the question is: Do the incentives exist?
Tantum Collins: Exactly. So in principle, yes. The problem is, of course, some things are more easily measured than others. And I think this is responsible for a lot of the problems we’ve seen historically with recommender systems.
One of the terms actually that I learned from this colleague of mine who’s a superforecaster is the “rigour-relevance tradeoff.” And this is essentially just a restatement of a big piece of the alignment problem, which is that often the things that you actually care about are really hard to measure. And there are things that are easy to measure, that are like OK proxies, but Goodhart’s law comes into effect pretty quickly — so if you optimise just for that, you end up in a really weird place. So organisationally, something like “How many trips is someone running?” or “How many lines of code are they writing?” is really easily measured. Something like their overall social satisfaction and how that’s going to make them feel after they’ve been there for 18 months can be much harder to measure. But in principle, yes, one should in the limit be able to capture all of these things.
But as you note, the second big risk, in addition to the technical challenges of measuring this stuff, is: Do the incentives actually exist? There are some cases where unfortunately it will be in the firm’s interest not to have people be happy, because unemployment is high and you can just hire someone else, and maybe this is an industry where the interpersonal and tacit knowledge don’t matter a tonne, and so you do just want to keep getting warm bodies in there. So there are all kinds of areas where the incentives don’t necessarily align, but in the cases where they do, I think that there are ways that ML can sort of expand the space of solutions to things that are net positive.
CSET [02:32:48]
Rob Wiblin: What did you work on at CSET? So this is the Center for Security and Emerging Technology. Actually, maybe a broader question: Is there anything that CSET has done in the last few years that you’ve been impressed by, or think that people should go check out?
Tantum Collins: I mean, disclaimer that I’m biased because I had some affiliation, but I think CSET has done a huge amount of really excellent research. There are too many papers to list, but I’ll point to a few categories of work.
The first is that the CSET data team has done a lot of amazing stuff pulling together a whole bunch of largely bibliometric datasets that had previously not been unified, and looking at the relationship between, for instance, papers and patents in different countries, in different regions, the relationship between universities and AI labs, all kinds of things that this data is very naturally well suited to.
Anything in the bibliometric space has two great things going for it: One, you have a lot of rich, unstructured data — which is to say you have the text of the abstracts and sometimes the full papers — that enables you to get a sense of, in topic space, what is this stuff? Secondly, you have this nice graphical structure, because you have these consistent entities — which is to say, institutions and people, as well as journals and conferences and so on — that enable you to have sort of nodes and edges, and say, Who knows whom? Who’s worked with whom? and so on. That gives you two powerful and different ways of relating things in these topic or social spaces.
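As a hedged illustration of that second, graph-like structure (not CSET’s actual pipeline): the snippet below builds a toy co-authorship graph from made-up paper metadata, after which questions like “who has worked with whom, and how often?” reduce to standard graph queries.

```python
# Illustrative only: bibliometric metadata treated as a graph of people,
# where edges record co-authorship. All data is made up.
import itertools
import networkx as nx

papers = [
    {"title": "Paper 1", "authors": ["Ana", "Bo", "Chen"]},
    {"title": "Paper 2", "authors": ["Bo", "Dana"]},
]

G = nx.Graph()
for paper in papers:
    # Every pair of co-authors gets an edge; repeat collaborations add weight.
    for a, b in itertools.combinations(paper["authors"], 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

print(G.edges(data=True))          # who has worked with whom, and how often
print(nx.degree_centrality(G))     # who sits at the centre of the network
```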
And this is similar to some of the stuff I worked on at DeepMind. I think that the CSET data team has done a really good job of integrating a lot of messy datasets that together can generate a whole bunch of insights, and so almost everything that they’ve produced I found to be very useful.
A second category is they have done a lot of research that is fundamentally qualitative but is informed by deep subject matter expertise on emerging tech issues — which thankfully is becoming more common in DC, but a few years ago was actually quite rare, especially if it’s an area that wasn’t a technology of traditional government interest. So you’d be able to find loads of stuff on nuclear capabilities and missiles and so on, but most of the think tank output on AI was not grounded in, let’s say, a really deep understanding of the semiconductor supply chain. CSET did a bunch of great work on that supply chain in particular, as well as looking at other choke points relevant to competition between countries.
Career advice [02:35:21]
Rob Wiblin: What are some useful roles that people might be able to take in DC or London, if you know about London, that might not be immediately obvious?
Tantum Collins: I think in general there is a real dearth of people who have technical AI expertise and/or deep familiarity at the social level with the AI space who are interested in doing policy work. And that’s completely understandable for a whole host of reasons. But I would say if you have one or both of those bodies of knowledge, there is a tonne of value that you can add, given how thin the bench is at the moment within government and the policy community more broadly. I should say this may be especially acute in the United States, partially for geographic reasons: in the US, the centre of the tech community and the centre of the government community are thousands of miles apart. In London, conveniently, they’re a Tube ride apart.
Rob Wiblin: And maybe also socially disconnected in some way. And maybe politically disconnected a little bit as well.
Tantum Collins: Exactly. And I think that all of those things are closely related. In terms of this idea that these different communities speak different languages, I think that is more extreme in the US. Largely because you have these subcultures that have evolved to become, in their local environment, kind of all-consuming, right? DC is so heavily policy focused, and SF is so heavily tech focused, or the Bay Area writ large. And as a result, I think that had some benefits — you get the benefits of specialisation, and you have these funky subcultures that can do things they maybe could only do in isolation — but it also has the major drawback of increasing the barrier to coordination. So I think this is relevant all over the place, but in the US it is especially severe.
Rob Wiblin: I suppose by comparison, in the UK, there’s some people who go into politics and some people who go into more technical stuff, but they probably all went to Cambridge at some point. It is actually alarming how inbred I think some of the elite is in the UK. It has a lot of downsides.
Tantum Collins: It’s astounding. What is it, 20 prime ministers have gone to Eton or something like that? I mean, just absolutely shocking.
Rob Wiblin: Yeah. I think one doesn’t have to worry about elitism too much to think that in the UK it’s maybe a little bit extreme. Are there any factors that might not be super salient to people that help to determine how much influence they have in a policy role?
Tantum Collins: So one thing is, I think — maybe somewhat surprisingly for listeners who are familiar with James C. Scott’s work, where he talks about legibility and how important legibility historically has been to governments — governments themselves are remarkably illegible as institutions to someone coming in from the outside. This is a way bigger problem, I think, in government than it has been in my experience of industry.
And my experience of industry has been limited to the tech world. So when I began working at Google, there was a huge supply of explainer materials and code notebooks and videos and decks and docs that explained how everything worked — from physical infrastructure to what teams were responsible for what, and so on. Obviously there’s still some level of tacit knowledge that isn’t accessible, but you can onboard yourself pretty effectively.
Within government, so much of that knowledge is held within people’s brains. And that is partially just because it’s a more old school place and it’s not as online-forward, and so the idea of documenting something and putting it on a blog is just less intuitive. Also, I think partially it’s because in some sense information is power, and if you know “how the building works,” that’s something that you can wield to your advantage. And DC to some extent is a place that cares about that.
I should say that in general, I found the White House to be way less House of Cards-y than I was worried about going into it. But I do think it’s the case that a remarkably high share of information gets downloaded in, like, one-on-one coffee chats, where you need to have had the right intro beforehand. And a lot of the seemingly observable external structure — where you can look at an org chart and say, “This division has this name. Thus it’s responsible for issues X and Y” is not representative of the underlying realities: you have some groups that sound totally random and boring and are incredibly powerful, and vice versa.
So the first thing I’d say is really take that in stride, and — it’s an annoying thing to say, but networking does matter in this space — invest in trying to get interpersonal downloads of information. And I would say in terms of work that people can do that’s valuable: there are so many great explainer blog posts out there on all kinds of topics, especially in the ML space. I leaned on those hugely when I was learning ML. Someone who brings that energy to documenting how to get things done in DC would be invaluable. It’s not that no one has written things like that, but there are just way fewer of them. And something like that — especially something that is optimised for people who are trying to make this leap from the tech world to the policy world — would be really, really helpful.
Rob Wiblin: Do you think some of that might be explained by the adversarial relationship that the government has with some groups they’re trying to regulate, for instance? Or maybe journalists as well tend to be pretty harsh on government sometimes, so there’s an inclination to just hold as much information as close to your chest as you can as a defensive manoeuvre?
Tantum Collins: Absolutely there are some things that it makes sense to keep close hold, but that doesn’t need to include how to use the office printer. Even for things like that, it was a case of: in order to figure this out, this person will have to explain it to you, or someone else will dig up a PDF that someone emailed to them five years ago that’s now 50% out of date. Just in general, the level of documentation is stuck in another era. Some of that, a small proportion of it, I think is defensible — or at least, let’s say, explicable — on the basis that you want to reduce information leakage. But I think a lot of it is just the result of some combination of old institutions, not an especially tech-forward mindset, and parochial concerns about wanting to use information in a way that empowers you.
Rob Wiblin: Yeah, I know of a gym in an office building — not this one, but a different one that will remain anonymous — where, in order to use it, you have to get a safety briefing on how to use the gym, which only happens every six months.
Tantum Collins: Exactly. This is exactly the sort of thing. Where if you want to figure out how to do something as simple as file this form, you have to talk to Jim. And you can only catch Jim at 10:30, when he gets his sandwich —
Rob Wiblin: — on a full moon —
Tantum Collins: On a full moon. And he knows how things work in the northwest corner of the building. But if you’re in the southeast corner of the building, then he doesn’t know anything. Then you have to talk to Miranda. It’s incredible how tacit and parochial a lot of this knowledge is, and I think that is often a big impediment to some forms of efficiency.
Again, in some cases it makes sense. For stuff that’s actually classified, I get it, at least in principle. There are other debates about over- and under-classification and so on. And for some of this stuff, absolutely, I see the argument for keeping a close hold. But for a lot of it, there is a better way.
Rob Wiblin: Imagine the recursive self-improvement loop that might be set off if bureaucrats accessed printer technology. Troubling.
Tantum Collins: I mean, this is the kind of thing I think about going to sleep. Yeah.
Rob Wiblin: What are some other important cultural differences between the tech scene and government for people who are moving between them?
Tantum Collins: One that surprised me is that I think the tenure that people have in government is more bimodal than in industry. So you will have people who are lifers — like civil servants who joined, let’s say, the State Department right out of university and stay there until they retire. But for most roles at the White House, for instance, the typical tenure is 12 to 24 months. This is the result of a whole bunch of things, including electoral time cycles and so on.
It’s just a much higher rate of churn than, for instance, at DeepMind: people stay there forever, and I was there for five and a half years. I don’t know if that was average, but it definitely wasn’t way above average. Whereas in government, I was there for 12 months, and by the time I left, a significant proportion of the people who were around when I arrived had left. Often that’s because they’re cycling out to a different piece of government. So you’ll have someone who, for instance, works at a department or agency and then gets detailed to the White House for a fixed-length term and then goes back. But the churn, especially when you combine it with this information flow problem —
Rob Wiblin: It seems like this is going to produce extremely odd dynamics, where you have people who are there permanently and people who are visiting for 12 months, more or less. This reminds me of the Yes Minister stuff — where if you’re a lifer, then you’ve just got to wait them out 12 months, and if they don’t manage to get it done by then, hopefully the next person doesn’t want to do it.
Tantum Collins: And you can imagine all the miscommunication that happens as a result of this.
One other thing is, depending on the type of policy work one wants to do, if you want to do anything that is classified or just related to national security, having a security clearance really does make a difference. It’s not like a tonne of decisions need classified information in order to be made, but almost all of the most senior decision-makers, of course, have security clearances. They want to be as well informed as possible, and as a result, the meetings and briefing documents and so on that are the closest to those decisions will often, whether necessarily or not, end up incorporating the classified reporting on the topic at hand. And that means that if you want to be in the room where it happens, so to speak, it really helps.
And so to the extent that one can get a job that includes conferring a clearance as part of it, that’s a real help, because then that means that you can take subsequent roles that require a clearance without having to go through that whole process — and that’s very appealing to the employer.
Rob Wiblin: You grew up not in the United States, or you spent an enormous amount of your life not inside the United States. I would have thought that could potentially be a big impediment. Wouldn’t you potentially have to report having a conversation with me, because I’m an Australian and I might be trying to turn you into a double agent or something like that?
Tantum Collins: Yes. So it’s complicated, but the short version is yes: it is slightly difficult to get a clearance if you have foreign ties. You have to fill out a form that lists all of your foreign contacts. There’s a lot of fuzziness around what exactly counts as a “contact,” how much interaction you have to have had with someone. This is an area where it is definitely worth it to pay a lawyer for one hour to get their advice on this.
Rob Wiblin: I feel sorry for some poor soul who’s scrolling back over a very large Tinder profile or something.
Tantum Collins: Exactly. There are many ways of overcounting, which will create needless work, and there are many ways of undercounting, which will violate the law. So speaking to a lawyer is very worthwhile in this space. And then you also have to list all of your foreign travel. Then you will go through an interview process, where there are people who look into all of this stuff and then ask you questions about it.
It’s not clear to me what happens behind the veil. It’s a total black box when you go through the process. I don’t know exactly what criteria they’re applying to things, so it’s possible that it’s all very enlightened and rigorous. But yes, just based on stories I’ve heard, I am somewhat worried that a lot of it is a formality that — in keeping with one of the themes of this conversation — isn’t totally aligned with the stuff that actually matters, but is something that has been filtered down many times.
Rob Wiblin: It’s highly procedural, maybe?
Tantum Collins: And there may also be a false negatives / false positives imbalance. If you approve one person who ends up being a security risk, that probably looks really bad. Whereas if you don’t approve like 50 people who have state-of-the-art knowledge of Chinese politics, for instance, then…
Rob Wiblin: It never gets pinned on you.
Tantum Collins: Exactly. It never gets blamed on you. So I don’t know. I haven’t seen what these processes look like on the other side, but I can imagine that pathologies like that might exist.
Rob Wiblin: In the interview with Ezra, he was a bit dismayed by how many people who wanted to help with AI safety have gone into the labs rather than going into government or moving to DC. He thought that there was a real bias towards doing that. I guess maybe because people found it more interesting, people found it cooler.
Reflecting on that, I mean the first episode we ever did on this show was about AI policy desperately trying to redirect people basically into DC. So I basically agree with this. But I also worry that many people who would have flourished in some of these labs would have crashed and burned if they’d moved to DC. And I wonder whether there was also a kind of sensible degree of prudence and self-preservation among people who sensed that they were not a good fit for going and working in policy.
In addition to the issue that people may be inclined to just do the thing that the people they know are doing, that’s the path of least resistance, there’s also just a problem that the people who know the most about this technically often just aren’t a very good fit — either personality-wise, or in terms of the skills that they develop — for going and actually having any success in policy circles.
Any thoughts on that?
Tantum Collins: I think that’s completely correct. I think that the disposition required for roles in industry and government is really different, and the working style is really different. Maybe that’s another thing I should have flagged: if you’re considering making this jump, talk to people who work in policy, in as close to the area that you want to do as possible, about just what their day to day is like. And if you really value having theoretical discussions in computer science, and you really hate wrangling people in bureaucracies to hold big meetings and reach consensus, and if that’s what the job that you would go into in the policy space would involve, then yeah, it’s probably not for you. So I think that is indeed a concern, but I think that the filtering to date has been more extreme than is merited by that concern, if that makes sense.
Rob Wiblin: As in, you think that there were a reasonable number of people who could have gone either way?
Tantum Collins: Yeah, exactly.
Rob Wiblin: I see. I was going to say it feels like the phenomenon that is driving DC and tech groups apart is quite fundamental, and quite hard to shift, and is no one’s fault in particular. I guess you’re saying that there are some people who are on the fence, who could potentially do both, and it’s just up to them which path they go down.
But many people, it seems like temperamentally, they already know by the time they’re 18 that they are a much better fit for one of these than the other, because the cultures are already quite different. The personality type that might flourish studying computer science and then going into a technical role is potentially quite different than someone who would work well as a bureaucrat or as a politician.
And these places are geographically separated. People are streamed already at the age of 18 or 19, where they’re probably going, and studying quite different things, and then they probably don’t socialise all that much, and the background knowledge they have is completely different.
Tantum Collins: And it becomes self-reinforcing because, of course, if initially you feel a little bit like group A is your group as opposed to group B, and then you spend 10 years only hanging out with people in group A, it’s going to get harder to talk to people in group B.
Rob Wiblin: Right, yeah. The stereotype is that science and technology policy is an area where government really struggles, and it’s often kind of on the back foot. Looking at it that way, it’s almost remarkable that things aren’t worse. Things could be worse than they are.
Tantum Collins: Honestly, I’m astounded all the time that things aren’t worse. Just across so many domains, there are so many ways that things could go terribly off the rails all the time. And the fact that the world is still spinning and we haven’t had nuclear armageddon is sort of astounding.
Rob Wiblin: Yeah. Do you have any advice for people who are more on the technical side now, but are interested in making a transition, who feel like they could potentially succeed in DC? How should they go about it? What first steps could they take?
Tantum Collins: As is often the case, maybe the best first step is to connect with someone who has a similar background to you, who’s doing the types of things that you’re interested in doing. And I think that for AI in particular, there is still a huge shortage of people, but the situation is less bad now than it was a few years ago.
There are several institutions that have taken quite an active role in bringing people with that kind of expertise to DC. So a couple that I’ll list, and this isn’t an exhaustive set, but CSET, as I mentioned before, I think has done a bunch of good work in this space. There’s the Horizon Fellowship, run by Remco Zwetsloot and Joan Gass: that is specifically focused on bringing in people who have mainly STEM backgrounds and helping them make that move. And the programme doesn’t just provide funding: it actually includes this whole extensive orientation to familiarise people with the space of opportunities in DC, and then does matchmaking to help place people both in bits of the government itself and at think tanks. Then there are government programmes like the AAAS Fellowship, and things that have explicitly targeted people with science backgrounds. But yeah, I would say look for gateway individuals or institutions.
Rob Wiblin: What about people going the other way? Do you have any advice for people who are currently policy experts, and would you think that some of them should be going into working at DeepMind or OpenAI or Anthropic or another group?
Tantum Collins: Yes, I think that is good. In general, more cross-pollination is better. I do think that there is a shortage of people with policy expertise at these labs. I think it’s less acute than the imbalance going the other way, because these labs mostly do have very competent, well-staffed policy teams. But I still think that more is better for people who are interested in doing that stuff.
I think my advice would be similar: Speak to people who are in policy roles at these organisations now, to get a sense for how different life will be. And in particular, something that maybe people aren’t always prepared for is if you come from policy land, then you’re going to be used to policy considerations being paramount. And if you enter a lab, they increasingly do take policy stuff very seriously, but this is one of many considerations — and the cultures at these places tend to be, first and foremost, quite technical and research-y.
So if you’re in this category of a policy person who doesn’t have a technical background, who’s interested in going to a lab, one of the first questions is: How much do you care about learning to speak the local language, as it were? Which is to say, becoming proficient with the things that matter in machine learning? That doesn’t mean you have to get a PhD, but there’s a big difference between being able to give an Economist-article-level explanation of what ChatGPT is, and being able to, for instance, read the abstracts of papers at NeurIPS and understand why they matter and how significant a given advance is. Because some stuff will be very important, some stuff will be trivial. That’s really, really useful knowledge to have. It’s also quite difficult if you’re coming from not a STEM-focused background.
There are some people who really take to that like a fish to water, and they get to a lab and they dive in the deep end and learn loads of stuff. There are other people who do a great job in policy roles without really investing hugely in that. And I think that one thing that people who are considering making that transition should ask themselves is: do they care about doing that? And if so, do they really care about doing it? Almost everyone initially says that they want to “learn to code,” right?
Rob Wiblin: I want to learn Japanese, but I’m not going to learn Japanese.
Tantum Collins: How many times have I started Hindi on Duolingo? I relearn the alphabet like every two years, then I forget it.
Rob Wiblin: I think this is going to be another driver’s licence situation, where you’ll speak Hindi as soon as ChatGPT speaks Hindi.
Tantum Collins: Thankfully, we now have good neural machine translation, even though we don’t have self-driving cars.
So a question is: If you're interested in this kind of role, do you want to learn this stuff? And do you really want to learn it enough to put in a tonne of work? Because that will affect which paths are plausible for you, and maybe also the degree to which you feel at home in these places. I think it can be alienating. It was to me when I first arrived at DeepMind and had no idea what anyone was talking about. It can be alienating when the bread and butter of what a company does, and even just the conversations people have at lunch and the things they present at meetings, are oftentimes literally Greek to you. So learning it requires a big investment, and that's something that I think people should price in.
Panpsychism [02:55:24]
Rob Wiblin: All right, we’ve been going for quite a while, so we should wrap up and let you get back home to continue moving out of your house and back to America.
It sounds like you have extremely broad knowledge of many different fields, or at least an impressively broad range of interests for a policymaker. Do you have any unusual philosophical views? On this show we're constantly talking about thought experiments and things like that. Do you have any views that would maybe raise eyebrows in the White House?
Tantum Collins: Let’s see. Well, again, the White House is in many ways a rigorous place, but probably not the most philosophical place. So I doubt that this stuff would come up that much. But to give one: I have over time really been swayed by arguments in favour of panpsychism.
I'm probably not going to give the most pristine and succinct philosophical summary, but panpsychism is the idea that consciousness exists along a spectrum, and almost all things are maybe to some extent conscious. So perhaps a human is more conscious than a pig, which is more conscious than an ant, which is more conscious than a tree. But then if it really is continuous, then maybe a rock has some infinitesimal level of…
Rob Wiblin: It’s not zero.
Tantum Collins: Exactly. And this one’s on my mind because I saw a friend for dinner the other day who had sent me this paper many years ago, that was the paper that first panpsychist-pilled me.
It lays out the case for thinking that you can imagine organisms across a whole spectrum of distribution and complexity: you have what we think of now as a single animal; you have something like an ant colony; and you have these big emergent things like, for instance, the United States as an entity. Is the United States itself conscious? In many ways it fulfils the criteria of things that we think of as being conscious, even though of course in day-to-day life we don't really think about it that way, despite often describing it that way in the news, right? "The United States wants to do X, has taken steps to accomplish Y."
And it feels very counterintuitive to say that inanimate objects have some level of consciousness, but I haven’t found a better theory.
Rob Wiblin: Is there a particular line of argument that especially influences you? I guess people who are interested to learn more about this can go back and listen to the episode with David Chalmers on consciousness, where we spend a bunch of time on panpsychism, because it's having a bit of a renaissance.
Tantum Collins: Yeah, I think it is. Essentially, it's almost a process-of-elimination thing: I have not been swayed by the arguments I've seen in favour of either some specific determinant of consciousness (say, that only carbon-based life forms can be conscious) or a discrete cutoff, where things transition suddenly from being not at all conscious to being fully conscious.
And intuitively, it seems natural to put things on a spectrum like this. Like, does an ant have the level of consciousness of a human? Most people would probably say no. Does an ant have zero consciousness? I think most people would also say no, or at least they would for a slightly larger animal, like a bird or something. And once you allow that, it's difficult to get away from the idea that consciousness exists along a spectrum. And then it's pretty easy to be swayed by the notion that, like all kinds of other properties (mass and electric charge and so on), perhaps you could imagine it being instantiated to some degree in almost everything, even if for many things it's so negligible as to be practically zero.
Rob Wiblin: What do you think of LLMs? It seems like people with this view should be more likely to think that there might be some level of awareness or consciousness in something like GPT-4.
Tantum Collins: I’m certainly amenable to the idea that… I think we should assume that you can have non-carbon-based systems that are conscious until proven otherwise. Just intuitively.
Rob Wiblin: That feels like an easy one.
Tantum Collins: Yeah. So starting with that as the premise, and then asking what we think about existing LLMs: it's tough to say, because at some level it does feel kind of like a rock. Obviously it's more sophisticated than a rock, but it does feel like all kinds of machines that we've made before that accomplish specific tasks. And at some level, if you feel like you sort of understand how something works, there's an intuition that says that takes out the magic that would explain consciousness in some way, right?
Rob Wiblin: Well, I've literally seen this argument made, more or less: that an LLM can't have goals or can't be conscious because it's just a bunch of maths, just a series of equations. But obviously human beings are also just a bunch of maths, or just a bunch of chemical interactions. So that argument doesn't work.
Tantum Collins: No, completely. To go back to this premise: It seems intuitive to think that all kinds of non-carbon-based systems could be conscious. When it comes to cashing it out in some kind of measure, different people have tried things. There’s integrated information theory and these various approaches. And I actually don’t know how LLMs fare on the IIT metric.
But I think that we have well-formed intuitions in some domains about where we'd put these things on the spectrum. Like, if you polled a bunch of people, gave them this spectrum of 0 to 1 consciousness, and gave them humans and chimpanzees and lobsters and dogs, they would almost certainly rank them in the same ordinal sequence, and I wouldn't be surprised if they landed cardinally in the same regions too. Whereas I think our intuitions when it comes to anything that isn't carbon-based are probably much broader. I wouldn't even really know where to begin. I appreciate the abstract argument that if the information-processing infrastructure is the same, you can have digital systems that are conscious, but would an LLM be closer to a rock or an ant or…?
Now, there are different ways of measuring this. How many neurons does it have? But then there are all these questions about how much information is actually instantiated in a physical neuron versus a neural net's neuron. So yeah, this is all a long way of saying I have no well-formed intuitions on this.
Rob Wiblin: I think saying that ChatGPT is conscious is an idiosyncratic position now, but I think it's not that crazy. I think many people at least worry that it might be, to some extent. And I feel like getting on board with that view now is buying low, in the hope… Because I think we have a pretty good idea of which direction people's views are going to move on this over the next 10 years.
Tantum Collins: Yeah, I think that’s right. There’s a good case for that as a sort of speculative moral investment.
Rob Wiblin: Yeah. We had an interview with Rob Long on consciousness of AI systems, which is very good.
Tantum Collins: Yeah, right, which I listened to the other day.
Rob Wiblin: I saw something on Twitter that I'm now not going to be able to find again, which was a survey asking people how conscious they would rank different things as being: humans, various different animals, and I think rocks were on there as well. The surprising ones, from memory: I think people ranked dogs above babies.
Tantum Collins: Wow. Really?
Rob Wiblin: That is interesting. I don't think it's necessarily wrong, but I was surprised.
Tantum Collins: But it’s surprising, emotionally, that people would.
Rob Wiblin: Exactly. And I think people ranked trees above insects.
Tantum Collins: Oh, really?
Rob Wiblin: Which, again, I'm not sure is wrong, but I think I probably would go the other way.
Tantum Collins: Both of those surprise me. Yeah, that’s very interesting.
Rob Wiblin: But it was very clear that there was just a gradation: people, by and large, do think that it’s just a continuum.
Tantum Collins: And that seems intuitive, but it leads you to all kinds of weird places. I’m not married to any of those conclusions, but they seem difficult to avoid.
Rob Wiblin: Well, hopefully one day we’ll design an AI that can tell us.
Tantum Collins: Can figure out the problem of consciousness. Exactly.
Rob Wiblin: My guest today has been Teddy Collins. Thanks so much for coming on The 80,000 Hours Podcast, Teddy.
Tantum Collins: Thank you so much for having me.
Rob’s outro [03:03:47]
Rob Wiblin: Hey, so as I quickly mentioned in the intro, the foundation Open Philanthropy is hiring for 16 different roles on their Global Catastrophic Risks team, which was previously known as the Longtermism team. They’re currently the biggest non-government funders in the world for work that specifically addresses existential and near-existential risks from various threats.
So if you love the kind of conversations that we have about those topics here on this show then Open Philanthropy might be a place you’d love to be.
And I’ll repeat the COI disclaimer that Open Philanthropy is 80,000 Hours’ biggest donor, in part because we prioritise such similar problems and for similar reasons.
Those 16 roles are spread out across their various focus areas including AI governance and policy, AI technical safety, biosecurity and pandemic preparedness, capacity building to deal with global catastrophic risks, and prioritisation across different and maybe new global catastrophic risks.
Some of the roles are based in the SF Bay Area, others are in Washington DC, and I counted four that can be done remotely. Applications for all these roles are expected to close on Thursday, November 9 at 11:59pm PST.
The role types are pretty wide-ranging, including:
- Program Associate
- Senior Program Associate
- Security Associate
- Operations Associate
- Executive Assistant
- Research Associate
- Chief of Staff
- Program Operations
- Research Fellow
- Strategy Fellow
The base salaries range from $83,000 for an Executive Assistant role to $190,000 for a Lead Researcher role, with most falling somewhere in the middle of that range. Folks who relocate to the SF Bay Area get an upwards cost-of-living adjustment.
There are lots of details about all of the specific positions on the Open Phil website, so I won't attempt to go through them all.
Given this was an episode primarily about AI governance, here's what Open Philanthropy's Senior Program Officer for AI governance and policy, Luke Muehlhauser, had to say about the roles on his team:
Open Philanthropy’s AI governance and policy team aims to give away more than $100 million a year, and we’re currently hiring for multiple grantmaking roles. One role is a generalist role that could eventually specialize in many directions, funding work related to US, UK, or EU policy, or international coalitional AI governance, or frontier AI lab policy, or building talent pipelines, or various other things. Another role requires a technical background in computer science or similar, and would lead grantmaking on technical aspects of AI governance such as compute governance, protecting AI models from hackers, technical standards development, model evaluations, and more. These roles can be based anywhere in the world. A third role must be based in DC and is focused on funding US AI policy advocacy work in DC, including lobbying to help the best policy ideas from think tanks and elsewhere get implemented by Congress or the executive branch.
OK, if you want to learn more about those and all the other positions, which again, are likely to close in under a month on 9th November, then head to openphilanthropy.org/careers, then click through to “Multiple Positions on Our Global Catastrophic Risks Team.”
If you’re a great fit for one of them, then it could very easily be the way you could have the biggest positive impact with your career.
I will say that Open Phil is a pretty distinctive place to work, so it’s worth looking into whether it’s a good fit for you personally — it’s a fairly intense workplace where people really care about ideas and what they’re doing, for example, and as you’d expect the typical person working there is a highly analytical thinker. You can listen to interviews with past guests of the show who work there — like Ajeya Cotra or Tom Davidson or Alexander Berger — to help get a sense of whether it’s right for you.
That interview with Alexander Berger is from two years ago, so it’s a little bit older, but Alexander is now actually the CEO of Open Philanthropy, so it could be a particularly useful one to listen to.
If you’d like to discuss this or a similar career decision with someone on our team you can apply for personalised one-on-one career advice from our advisors at 80000hours.org/advising.
And you can find all of these jobs and other similar ones that might allow you to do a lot of good on our job board at 80000hours.org/jobs.
Of course again, Open Philanthropy is our biggest donor, which I don’t think is driving my view that it’s a very promising place to work for people who want to have a big positive impact, but you know, common sense suggests you shouldn’t completely rely on any one source in forming your view.
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
The audio engineering team is led by Ben Cordell, with mastering and technical editing by Milo McGuire and Simon Monsour.
Full transcripts and an extensive collection of links to learn more are available on our site, put together by Katy Moore.
Thanks for joining, talk to you again soon.
Related episodes