AI governance and coordination
As advancing AI capabilities gained widespread attention in late 2022 and 2023 — particularly after the release of OpenAI’s ChatGPT and Microsoft’s Bing chatbot — interest in governing and regulating these systems has grown. Discussion of the potential catastrophic risks of misaligned or uncontrollable AI also became more prominent, potentially opening up opportunities for policy that could mitigate the threats.
There’s still a lot of uncertainty about which strategies for AI governance and coordination would be best, though parts of the community working on this issue may be coalescing around some ideas. See, for example, a list of potential policy ideas from Luke Muehlhauser of Open Philanthropy1 and a survey of expert opinion on best practices in AI safety and governance.
But there’s no roadmap here. There’s plenty of room for debate about which policies and proposals are needed.
We may not have found the best ideas yet in this space, and many of the existing policy ideas haven’t yet been developed into concrete, public proposals that could actually be implemented. We hope to see more people enter this field to develop expertise and skills that will contribute to risk-reducing AI governance and coordination.
In a nutshell: Advanced AI systems could have massive impacts on humanity and potentially pose global catastrophic risks. Work on AI governance and coordination offers opportunities to shape how society responds to and prepares for the challenges posed by the technology.
Given the high stakes, pursuing this career path could be many people’s highest-impact option. But they should be very careful not to accidentally exacerbate the threats rather than mitigate them.
Recommended
If you are well suited to this career, it may be the best way for you to have a social impact.
Review status
Based on an in-depth investigation
Table of Contents
- Why this could be a high-impact career path
- What kinds of work might contribute to AI governance?
- How policy gets made
- Examples of people pursuing this path
- How to assess your fit and get started
- Where can this kind of work be done?
- How this career path can go wrong
- What the increased attention on AI means
- Read next
- Learn more
“What you’re doing has enormous potential and enormous danger.” — US President Joe Biden, to the leaders of the top AI labs
Why this could be a high-impact career path
Artificial intelligence has advanced rapidly. In 2022 and 2023, new language and image generation models gained widespread attention for abilities that blew past previous benchmarks for the technology.
And the applications of these models are still new; with more tweaking and integration into society, the existing AI systems may become easier to use and more ubiquitous in our lives.
We don’t know where all these developments will lead us. There’s reason to be optimistic that AI will eventually help us solve many of the world’s problems, raising living standards and helping us build a more flourishing society.
But there are also substantial risks. AI can be used for both good and ill. And we have concerns that the technology could, without the proper controls, accidentally lead to a major catastrophe — and perhaps even cause human extinction. We discuss the arguments that these risks exist in our in-depth problem profile.
Because of these risks, we encourage people to work on mitigating them through technical research and engineering.
But a range of strategies for risk reduction will likely be needed. Government policy and corporate governance interventions in particular may be necessary to ensure that AI is developed to be as broadly beneficial as possible and without unacceptable risk.
Governance generally refers to the processes, structures, and systems that carry out decision making for organisations and societies at a high level. In the case of AI, we expect the governance structures that matter most to be national governments and organisations developing AI — as well as some international organisations and perhaps subnational governments.
Some aims of AI governance work could include:
- Preventing the deployment of any AI systems that pose a significant and direct threat of catastrophe
- Mitigating the negative impact of AI technology on other catastrophic risks, such as nuclear weapons and biotechnology
- Guiding the integration of AI technology into our society and economy with limited harms and to the advantage of all
- Reducing the risk of an “AI arms race,” whether between nations or between companies, in which competition leads to technological advancement without the necessary safeguards and caution
- Ensuring that those creating the most advanced AI models are incentivised to be cooperative and concerned about safety
- Slowing down the development and deployment of new systems if the advancements are likely to outpace our ability to keep them safe and under control
We need a community of experts who understand the intersection of modern AI systems and policy, as well as the severe threats and potential solutions. This field is still young, and many of the paths within it aren’t clear and are not sure to pan out. But there are relevant professional routes that will provide you with valuable career capital for a variety of positions and roles.
The rest of this article explains what work in this area might involve, how you can develop career capital and test your fit, and where some promising places to work might be.
What kinds of work might contribute to AI governance?
What should governance-related work on AI actually involve? There are a variety of ways to pursue AI governance strategies, and as the field becomes more mature, the paths are likely to become clearer and more established.
We generally don’t think people early in their careers should be aiming for a specific job that they think would be high-impact. They should instead aim to develop skills, experience, knowledge, judgement, networks, and credentials — what we call career capital — that they can later draw on when an opportunity to have a positive impact arises.
This may involve following a pretty standard career trajectory, or it may involve bouncing around in different kinds of roles. Sometimes, you just have to apply to a bunch of different roles and test your fit for various types of work before you know what you’ll be good at. The main thing to keep in mind is that you should try to get excellent at something for which you have strong personal fit and that will let you contribute to solving pressing problems.
In the AI governance and coordination space, we see at least six large categories of work that we expect to be important:
- Government work
- Research on AI policy and strategy
- Industry work
- Advocacy and lobbying
- Third-party auditing and evaluation
- International work and coordination
Not all of these categories necessarily have open roles in AI governance right now, but they represent a range of sectors in which impactful work may be done in the coming years and decades. Thinking about the different skills and forms of career capital that will be useful for the categories of work you could see yourself doing in the future can help you figure out what your immediate next steps should be. (We discuss how to assess your fit and enter this field below.)
You may want to — and indeed it may be advantageous to — move between these different categories of work at different points in your career. You can also test out your fit for various roles by taking internships, fellowships, entry-level jobs, temporary placements, or even doing independent research, all of which can serve as career capital for a range of paths.
We have also reviewed career paths in AI technical safety research and engineering and information security, which may be crucial to reducing risks from AI, and which may play a significant role in an effective governance agenda. People serious about pursuing a career in AI governance should familiarise themselves with these fields as well.
Government work
Taking a role within government could lead to playing an important role in the development, enactment, and enforcement of AI policy.
Note that we generally expect that the US federal government will be the most significant player in AI governance for the foreseeable future. This is because of its global influence and its jurisdiction over much of the AI industry, including the top three AI labs training state-of-the-art, general-purpose models (Anthropic, OpenAI, and Google DeepMind) and key parts of the chip supply chain. Much of this article focuses on US policy and government.2
But other governments and international institutions may also end up having important roles to play in certain scenarios. For example, the UK government, the European Union, China, and potentially others, may all present opportunities for impactful AI governance work. Some US state-level governments, such as California, may also offer opportunities for impact and gaining career capital.
What would this work involve? Sections below discuss how to enter US policy work and which areas of the government you might aim for.
But at the broadest level, people interested in positively shaping AI policy should aim to gain the skills and experience to work in areas of government with some connection to AI or emerging technology policy.
This can include roles in legislative branches, domestic regulation, national security, diplomacy, appropriations and budgeting, and other policy areas.
If you can get a role out of the gate that is already working directly on this issue, such as a staff position with a lawmaker who is focused on AI, that could be a great opportunity.
Otherwise, you should seek to learn as much as you can about how policy works and which government roles might allow you to have the most impact, while establishing yourself as someone who’s knowledgeable about the AI policy landscape. Having almost any significant government role that touches on some aspect of AI, or having some impressive AI-related credential, may be enough to get you quite far.
One way to advance your career in government on a specific topic is what some call “getting visibility” — that is, using your position to learn about the landscape and connect with the actors and institutions that affect the policy area you care about. You’ll want to be invited to meetings with other officials and agencies, be asked for input on decisions, and engage socially with others who work in the policy area. If you can establish yourself as a well-regarded expert on an important but neglected aspect of the issue, you’ll have a better shot at being included in key discussions and events.
Career trajectories within government can be broken down roughly as follows:
- Standard government track: This involves entering government at a relatively low level and building up your career capital on the inside by climbing the seniority ladder. For the highest impact, you’d ideally end up reaching senior levels by sticking around, gaining skills and experience, and getting promoted. You may move between agencies, departments, or branches.
- Specialisation career capital: You can also move in and out of government throughout your career. People on this trajectory will also work at nonprofits, think tanks, industry labs, political parties, academia, and other organisations. But they will primarily focus on becoming an expert in a topic — such as AI. It can be harder to get seniority this way, but the value of expertise and experience can sometimes outweigh seniority.
- Direct-impact work: Some people move into government jobs without a longer plan to build career capital because they see an opportunity for direct, immediate impact. This might look like getting tapped to lead an important commission or providing valuable input on an urgent project. We don’t generally recommend planning on this kind of strategy for your career, but it’s good to be aware of it as an opportunity that might be worth taking at some point.
Research on AI policy and strategy
There’s still a lot of research to be done on the most important avenues for AI governance. While there are some promising proposals for regulatory and strategic steps that could help reduce the risk of an AI catastrophe, there aren’t many concrete, publicly available policy proposals ready for adoption.
The world needs more concrete proposals for AI policies that would really start to tackle the biggest threats; developing such policies, and deepening our understanding of the strategic needs of the AI governance space, should be high priorities.
Other relevant research could involve surveys of public opinion that could inform communication strategies, legal research about the feasibility of proposed policies, technical research on issues like compute governance, and even higher-level theoretical research into questions about the societal implications of advanced AI. Some research, such as that done by Epoch AI, focuses on forecasting the future course of AI developments, which can influence AI governance decisions.
However, several experts we’ve talked to warn that a lot of research on AI governance may prove to be useless, so it’s important to be reflective and seek input from others in the field — both from experienced policy practitioners and technical experts — about what kind of contribution you can make. We list several research organisations below that we think would be good to work at in order to pursue promising research on this topic.
One potentially useful approach for testing your fit for this work — especially when starting out in this research — is to write up analyses and responses to existing work on AI policy or investigate some questions in this area that haven’t been the subject of much attention. You can then share your work widely, send it out for feedback from people in the field, and evaluate how much you enjoy the work and whether you might productively contribute to this research longer term.
But it’s possible to spend too long testing your fit without making much progress, and some people find that they’re best able to contribute when they’re working on a team. So don’t overweight or over-invest in independent work, especially if there are few signs it’s working out especially well for you. This kind of project can make sense for maybe a month or a bit longer — but it’s unlikely to be a good idea to spend much more than that without meaningful funding or some really encouraging feedback from people working in the field.
If you have the experience to be hired as a researcher, work on AI governance can be done in academia, nonprofit organisations, and think tanks. Some government agencies and committees, too, perform valuable research.
Note that academia has its own priorities and incentives, which often aren’t aligned with producing the most impactful work. If you’re already an established researcher with tenure, it may be highly valuable to pivot into work on AI governance — this position may even give you a credible platform from which to advocate for important ideas.
But if you’re just starting out a research career and want to focus on this issue, you should carefully consider whether your work will be best supported inside or outside of academia. For example, if you know of a specific programme with particular mentors who will help you pursue answers to critical questions in this field, it might be worth doing. We’re less inclined to encourage people to pursue generic academic-track roles with the vague hope that one day they can do important research on this topic.
Advanced degrees in policy or relevant technical fields may well be valuable, though — see more discussion of this in the section on how to assess your fit and get started.
Industry work
While government policy is likely to play a key role in coordinating various actors interested in reducing the risks from advanced AI, internal policy and corporate governance at the largest AI labs themselves is also a powerful tool. We think people who care about reducing risk can potentially do valuable work internally at industry labs. (Read our career review of non-technical roles at AI labs.)
At the highest level, deciding who sits on corporate boards, what kind of influence those boards have, and to what extent the organisation is structured to seek profit and shareholder value as opposed to other aims can end up having a major impact on the direction a company takes. If you can get a leadership role at a company developing frontier AI models, such as a management position or a seat on the board, it could be a very impactful position.
If you’re able to join a policy team at a major lab, you can model threats and help develop, implement, and evaluate promising proposals internally to reduce risks. And you can build consensus around best practices, such as strong information security policies, using outside evaluators to find vulnerabilities and dangerous behaviours in AI systems (red teaming), and testing out the latest techniques from the field of AI safety.
And if, as we expect, AI labs face increasing government oversight, industry governance and policy work can ensure compliance with any relevant laws and regulations that get put in place. Interfacing with government actors and facilitating coordination over risk reduction approaches could be impactful work.
In general, the more cooperative AI labs are with each other3 and outside groups seeking to minimise catastrophic risks from AI, the better. And this doesn’t seem to be an outlandish hope — many industry leaders have expressed concern about extinction risks and have even called for regulation of the frontier technology they’re creating.
That said, we can expect this cooperation to take substantial work — it would be surprising if the best policies for reducing risks were totally uncontroversial in industry, since labs also face huge commercial incentives to build more powerful systems, which can carry more risk. The more everyone’s able to communicate and align their incentives, the better things seem likely to go.
Advocacy and lobbying
People outside of government or AI labs can influence the shape of public policy and corporate governance via advocacy and lobbying.
As of this writing, there has not yet been a large public movement in favour of regulating or otherwise trying to reduce risks from AI, so there aren’t many openings that we know about in this category. But we expect growing interest in this area to open up new opportunities to press for political action and policy changes at AI labs, and it could make sense to start building career capital and testing your fit now for different kinds of roles that would fall into this category down the line.
If you believe AI labs may be disposed to advocate for generally beneficial regulation, you might want to try to work for them, or become a lobbyist for the industry as a whole, to push the government to adopt specific policies. It’s plausible that AI labs will have by far the best understanding of the underlying technology, as well as the risks, failure modes, and safest paths forward.
On the other hand, it could be the case that AI labs have too much of a vested interest in the shape of regulations to reliably advocate for broadly beneficial policies. If that’s right, it may be better to join or create advocacy organisations unconnected from the industry — supported by donations or philanthropic foundations — that can take stances that are opposed to the labs’ commercial interests.
For example, it could be the case that the best approach from a totally impartial perspective would be at some point to deliberately slow down or halt the development of increasingly powerful AI models. Advocates could make this demand of the labs themselves, or call on the government to enforce a slowdown. It may be difficult to come to this conclusion or advocate for it if you have strong connections to the companies creating these systems.
It’s also possible that the best outcomes will be achieved with a balance of industry lobbyists and outside lobbyists and advocates making the case for their preferred policies — as both bring important perspectives.
We expect there will be increasing public interest in AI policy as the technological advancements have ripple effects in the economy and wider society. And if there’s increasing awareness of the impact of AI on people’s lives, the risks the technology poses may become more salient to the public, which will give policymakers strong incentives to take the problem seriously. It may also bring new allies into the cause of ensuring that the development of advanced AI goes well.
Advocacy can also:
- Highlight neglected but promising approaches to governance that have been uncovered in research
- Facilitate the work of policymakers by showcasing the public’s support for governance measures
- Build bridges between researchers, policymakers, the media, and the public by communicating complicated ideas in an accessible way to many audiences
- Pressure corporations themselves to proceed more cautiously
- Change public sentiment around AI and discourage irresponsible behaviour by individual actors, such as the spreading of powerful open-source models
However, note that advocacy can sometimes backfire. Predicting how information will be received is far from straightforward. Drawing attention to a cause area can sometimes trigger a backlash; presenting problems with certain styles of rhetoric can alienate people or polarise public opinion; and spreading misleading or mistaken messages can discredit you and your fellow advocates. It’s important that you are aware of the risks, consult with others (particularly people you respect but who might disagree with you tactically), and commit to educating yourself deeply about the topic before expounding on it in public.
You can read more in the section about doing harm below. We also recommend reading our article on ways people trying to do good accidentally make things worse and how to avoid them.
Case study: the Future of Life Institute open letter
In March 2023, the Future of Life Institute published an open letter calling for a pause of at least six months on training any new models more “powerful” than OpenAI’s GPT-4 — which had been released about a week earlier. GPT-4 is a state-of-the-art language model that can be used through ChatGPT to produce novel and impressive text responses to a wide range of prompts.
The letter attracted a lot of attention, perhaps in part because it was signed by prominent figures such as Elon Musk. While it didn’t immediately achieve its explicit aims — the labs didn’t commit to a pause — it drew a lot of attention and fostered public conversations about the risks of AI and the potential benefits of slowing down. (An earlier article titled “Let’s think about slowing down AI” — by Katja Grace of the research organisation AI Impacts — aimed to have a similar effect.)
There’s no clear consensus on whether the FLI letter was on the right track. Some critics of the letter, for example, said that its advice would actually lead to worse outcomes overall if followed, because it would slow down AI safety research while many of the innovations that drive AI capabilities progress, such as chip development, would continue to race forward. Proponents of the letter pushed back on these claims.4 It does seem clear that the letter changed the public discourse around AI safety in a way that few other efforts have achieved, which is a proof of concept for what impactful advocacy can accomplish.
Third-party auditing and evaluation
If regulatory measures are put in place to reduce the risks of advanced AI, some agencies and organisations — within government or outside — will need to audit companies and systems to make sure that regulations are being followed.
One nonprofit, the Alignment Research Center, has been at the forefront of this kind of work.5 In addition to its research work, it has launched a program to evaluate the capabilities of advanced AI models. In early 2023, the organisation partnered with two leading AI labs, OpenAI and Anthropic, to evaluate the latest versions of their chatbot models prior to release, seeking to determine in a controlled environment whether the models had any potentially dangerous capabilities.
The labs voluntarily cooperated with ARC for this project, but at some point in the future, these evaluations may be legally required.
Governments often rely on third-party auditors as crucial players in regulation, because the government may lack the expertise (or the capacity to pay for the expertise) that the private sector has. As of this writing, we don’t know of many openings for this type of role, but such roles may end up playing a critical part in an effective AI governance framework.
Other types of auditing and evaluation may be required as well. ARC has said it intends to develop methods to determine which models are appropriately aligned — that is, that they will behave as their users intend them to behave — prior to release.
Governments may also want to employ auditors to evaluate the amount of compute that AI developers have access to, their information security practices, the uses of models, the data used to train models, and more.
Acquiring the technical skills and knowledge to perform these types of evaluations, and joining organisations that will be tasked to perform them, could be the foundation of a highly impactful career. This kind of work will also likely have to be facilitated by people who can manage complex relationships across industry and government. Someone with experience in both sectors could have a lot to contribute.
Some of these types of roles may have some overlap with work in AI technical safety research.
One potential advantage of doing AI governance work in the private sector is that you may be significantly better paid than you would be in government.
International work and coordination
US-China
For someone with the right fit, cooperation and coordination with China on the safe development of AI could be a particularly impactful approach within the broad AI governance career path.
The Chinese government has been a major funder of AI research, and the country has giant tech companies that could potentially drive forward advances.
Given tensions between the US and China, and the risks posed by advanced AI, there’s a lot to be gained from increasing trust, understanding, and coordination between the two countries. The world will likely be much better off if we can avoid a major conflict between great powers and if the most significant players in emerging technology can avoid exacerbating any global risks.
We have a separate career review that goes into more depth on China-related AI safety and governance paths.
Other governments and international organisations
As we’ve said, we focus most on US policy and government roles. This is largely because we anticipate that the US is now and will likely continue to be the most pivotal actor when it comes to regulating AI, with a major caveat being China, as discussed in the previous section.
But many people interested in working on this issue can’t or don’t want to work in US policy — perhaps because they live in another country and don’t intend to move.
Much of the advice above still applies to these people, because roles in AI governance research and advocacy can be done outside of the United States.6 And while we don’t think it’s generally as impactful in expectation as US government work, opportunities in other governments and international organisations can be complementary to the work to be done in the US.
The United Kingdom, for instance, may present another strong opportunity for AI policy work that would complement US work. Top UK officials have expressed interest in developing policy around AI, perhaps even a new international agency, and reducing extreme risks. And the UK government announced in 2023 the creation of a new AI Foundation Model Taskforce, with the expressed intention to drive forward safety research.
It’s possible that by taking significant steps to understand and regulate AI, the UK will encourage or inspire US officials to take similar steps by showing how it can work.
And any relatively wealthy country could use portions of its budget to fund AI safety research. While a lot of the most important work likely needs to be done in the US, by leading researchers at labs with access to large amounts of compute, some lines of research may be productive even without these resources. Any significant advances in AI safety research, if communicated properly, could be used by researchers working on the most powerful models.
Other countries might also develop liability standards for the creators of AI systems that could incentivise corporations to proceed more cautiously and judiciously before releasing models.
The European Union has shown that its data protection standards — the General Data Protection Regulation (GDPR) — affect corporate behaviour well beyond its geographical boundaries. EU officials have also pushed forward on regulating AI, and some research has explored the hypothesis that the impact of the union’s AI regulations will extend far beyond the continent — the so-called “Brussels effect.”
And at some point, we do expect there will be AI treaties and international regulations, just as the international community has created the International Atomic Energy Agency, the Biological Weapons Convention, and the Intergovernmental Panel on Climate Change to coordinate around and mitigate other global catastrophic threats.
Efforts to coordinate governments around the world to understand and share information about threats posed by AI may end up being extremely important in some future scenarios.
The Organisation for Economic Co-operation and Development is one place where such work might occur. So far, it has been the most prominent international actor working on AI policy and has created the AI Policy Observatory.
Third-party countries may also be able to facilitate cooperation and reduce tensions between the United States and China, whether around AI or other potential flashpoints, should such an intervention become necessary.
How policy gets made
What does it actually take to make policy?
In this section, we’ll discuss three phases of policy making: agenda setting, policy creation and development, and implementation. We’ll generally discuss these as aspects of making government policy, but they could also be applied to organisational policy. The following section will discuss the types of work that you could do to positively contribute to the broad field of AI governance.
Agenda setting
To enact and implement a programme of government policies that have a positive impact, you have to first ensure that the subject of potential legislation and regulation is on the agenda for policymakers.
Agenda setting for policy involves identifying and defining problems, drawing attention to the problems and raising their salience (at least to the relevant people), and promoting potential approaches to solving them.
For example, when politicians take office, they often enter on a platform of promises made to their constituents and supporters about which policy agendas they want to pursue. Those agendas are formed through public discussion, media narratives, internal party politics, deliberative debate, interest group advocacy, and other forms of input. The agenda can be, to varying degrees, problem-specific (a broad remit to “improve health care,” for example) or more solution-specific, such as aiming to create a single-payer health system.
Issues don’t necessarily have to be unusually salient to get on the agenda. Policymakers or officials at various levels of government can prioritise solving certain problems or enacting specific proposals that aren’t the subject of national debate. In fact, sometimes making issues too salient, framing them in divisive ways, or allowing partisanship and political polarisation to shape the discussion, can make it harder to successfully put solutions on the agenda.
What’s key for agenda setting as an approach to AI governance is that people with the relevant authority have to buy into prioritising the issue if they’re going to spend their resources and political capital on it.
Policy creation and development
While there does appear to be growing enthusiasm for a set or sets of policy proposals that could start to reduce the risk of an AI-related catastrophe, there’s still a lack of concrete policies that are ready to get off the ground.
This is what the policy creation and development process is for. Researchers, advocates, civil servants, lawmakers and their staff, and others all can play a role in shaping the actual legislation and regulation that the government eventually enforces. In the corporate context, internal policy creation can serve similar functions, though it may be less enforceable unless backed up with contracts.
Policy creation involves crafting solutions for the problem at hand with the policy tools available, usually requiring input from technical experts, legal experts, stakeholders, and the public. In countries with strong judicial review like the United States, special attention often has to be paid to make sure laws and regulations will hold up under the scrutiny of judges.
Once concrete policy options are on the table, they must be put through the relevant decision-making process and negotiations. If the policy in question is a law that’s going to be passed, rather than a regulation, it needs to be crafted so that it will have enough support from lawmakers and other key decision makers to be enacted. This can happen in a variety of ways; it might be rolled into a larger piece of legislation that has wide support, or it may be rallied around and brought forward as its own package to be voted on individually.
Policy creation can also be an iterative process, as policies are enacted, implemented, monitored, evaluated, and revised.
For more details on the complex work of policy creation, we recommend Thomas Kalil’s article “Policy Entrepreneurship in the White House: Getting Things Done in Large Organisations.”
Implementation
Fundamentally, a policy is only an idea. For an idea to have an impact, someone actually has to carry it out. Any of the proposals for AI-related government policy — including standards and evaluations, licensing, and compute governance — will demand complex management and implementation.
Policy implementation on this scale requires extensive planning, coordination in and out of government, communication, resource allocation, training, and more — and every step in this process can be fraught with challenges. To rise to the occasion, any government implementing an AI policy regime will need talented individuals working at a high standard.
The policy creation phase is critical and is probably the highest-priority work. But good ideas can be carried out badly, which is why policy implementation is also a key part of the AI governance agenda.
Examples of people pursuing this path
How to assess your fit and get started
If you’re early on in your career, you should focus first on getting skills and other career capital to successfully contribute to the beneficial governance and regulation of AI.
You can gain career capital for roles in many ways, and the best options will vary based on your route to impact. But broadly speaking, working in or studying fields such as politics, law, international relations, communications, and economics can all be beneficial for going into policy work.
And expertise in AI itself, gained by studying and working in machine learning and technical AI safety, or potentially related fields such as computer hardware or information security, should also give you a big advantage.
Testing your fit
One general piece of career advice we give is to find relatively “cheap” tests to assess your fit for different paths. This could mean, for example, taking a policy internship, applying for a fellowship, doing a short bout of independent research as discussed above, or taking classes or courses on technical machine learning or computer engineering.
It can also just involve talking to people currently doing a job you might want, and finding out what the day-to-day experience of the work is like and which skills are needed.
All of these factors can be difficult to predict in advance. While we grouped “government work” into a single category above, that label covers a wide range of positions and types of occupations in many different departments and agencies. Finding the right fit within a broad category like “government work” can take a while, and it can depend on a lot of factors out of your control, such as the colleagues you happen to work closely with. That’s one reason it can be useful to build broadly valuable career capital, so you have the option to move around to find the right role for you.
And don’t underestimate the value at some point of just applying to many relevant openings in the field and sector you’re aiming for and seeing what happens. You’ll likely face a lot of rejection with this strategy, but if you take enough chances, seeing how far you get in the process will help you assess your qualifications for different kinds of roles. This can give you a lot more information than just guessing about whether you have the right experience.
It can be useful to rule out certain types of work if you gather evidence that you’re not a strong fit for the role. For example, if you invest a lot of time and effort trying to get into reputable universities or nonprofit institutions to do AI governance research, but you get no promising offers and receive little encouragement even after applying widely, this might be a significant signal that you’re unlikely to thrive in that particular path.
That wouldn’t mean you have nothing to contribute, but your comparative advantage may lie elsewhere.
Read the section of our career guide on finding a job that fits you.
Types of career capital
For a field like AI governance, a mix of people with technical and policy expertise — and some people with both — is needed.
While anyone involved in this field should work to maintain an evolving understanding of both the technical and policy details, you’ll probably start out focusing on either policy or technical skills to gain career capital.
This section covers:
- Generally useful career capital
- Policy-related career capital
- Technical career capital
- Other specific forms of career capital
Much of this advice is geared toward roles in the US, though it may be relevant in other contexts.
Generally useful career capital
The chapter of the 80,000 Hours career guide on career capital lists five key components that will be useful in any path: skills and knowledge, connections, credentials, character, and runway.
For most jobs touching on policy, social skills, networking, and — for lack of a better word — political skill will be huge assets. These can probably be learned to some extent, but some people may find they don’t have these kinds of skills and can’t or don’t want to acquire them. That’s OK — there are many other routes to having a fulfilling and impactful career, and some roles within this path may demand these skills to a much lesser extent. That’s why testing your fit is important.
Read the full section of the career guide on career capital.
Policy-related career capital
To gain skills in policy, you can pursue education in many relevant fields, such as political science, economics, and law.
Many master’s programmes offer specific coursework on public policy, science and society, security studies, international relations, and other topics; having a graduate degree or law degree will give you a leg up for many positions.
In the US, a master’s, a law degree, or a PhD is particularly useful if you want to climb the federal bureaucracy. Our article on US policy master’s degrees provides detailed information about how to assess the many options.
Internships in DC are a promising route to evaluate your aptitude for policy work and to establish early career capital. Many academic institutions now offer a strategic “Semester in DC” programme, which can let you explore placements of choice in Congress, federal agencies, or think tanks. The Virtual Student Federal Service (VSFS) also offers part-time, remote government internships, which students can take on alongside their academic commitments during the school year.
Once you have a suitable background, you can take entry-level positions within parts of the government where you can build a professional network and develop your skills. In the US, you can become a congressional staffer, or take a position at a relevant federal department, such as the Department of Commerce, Department of Energy, or the Department of State. Alternatively, you can gain experience in think tanks — a particularly promising option if you have a strong aptitude for research — and government contractors, private sector companies providing services to the government.
The culture in Washington, DC is distinctive. There’s a big focus on networking, and plenty of internal bureaucratic politics to navigate. We’ve also been told that while merit matters to a degree in US government work, it is not the primary determinant of who is most successful. People who think they wouldn’t feel able or comfortable in this kind of environment for the long term should consider whether other paths would suit them better.
If you find you can enjoy government and political work, impress your colleagues, and advance in your career, though, that’s a strong signal that you have the potential to make a real impact. Just being able to thrive in government work can be an extremely valuable comparative advantage.
US citizenship
Your citizenship may affect which opportunities are available to you. Many of the most important AI governance roles within the US — particularly in the executive branch and Congress — are only open to, or will at least heavily favour, American citizens. All key national security roles that might be especially important will be restricted to those with US citizenship, which is required to obtain a security clearance.
Those who lack US citizenship may therefore want to avoid pursuing roles that require it. Alternatively, they could plan to move to the US and pursue the long process of becoming a citizen. For more details on immigration pathways and types of policy work available to non-citizens, see this blog post on working in US policy as a foreign national. Consider also entering the annual diversity visa lottery if you’re from an eligible country, as this is low effort and gives you a chance of winning a US green card.
Technical career capital
Technical experience in machine learning, AI hardware, and related fields can be a valuable asset for an AI governance career. So it will be very helpful if you’ve studied a relevant subject area for an undergraduate or graduate degree, or a particularly productive course of independent study.
We have a guide to technical AI safety careers, which explains how to learn the basics of machine learning.
The following resources may be particularly useful for familiarising yourself with the field of AI safety:
- Redwood Research’s MLAB bootcamp curriculum
- BlueDot Impact’s AI Safety Fundamentals
- Center for AI Safety’s Introduction to ML Safety course
Working at an AI lab in technical roles, or other companies that use advanced AI systems and hardware, may also provide significant career capital in AI policy paths. (Read our career review discussing the pros and cons of working at a top AI lab.)
We also have a separate career review on how becoming an expert in AI hardware could be very valuable in governance work.
Many politicians and policymakers are generalists, as their roles require them to work in many different subject areas and on different types of problems. This means they’ll need to rely on expert knowledge when crafting and implementing policy on AI technology that they don’t fully understand. So if you can provide them this information, especially if you’re skilled at communicating it clearly, you can potentially fill influential roles.
Some people who were initially interested in pursuing a technical AI safety career, but who have either lost interest in that path or found more promising policy opportunities, may also decide that they can effectively pivot into a policy-oriented career.
It is common for people with STEM backgrounds to enter and succeed in US policy careers. People with technical credentials that they may regard as fairly modest — such as computer science bachelor’s degrees or a master’s in machine learning — often find their knowledge is highly valued in Washington, DC.
Most DC jobs don’t have specific degree requirements, so you don’t need a policy degree to work in DC. Roles specifically addressing science and technology policy are particularly well suited to people with technical backgrounds, and people hiring for these roles will value higher credentials like a master’s or, even better, a terminal degree like a PhD or MD.
There are many fellowship programmes specifically aiming to support people with STEM backgrounds to enter policy careers; some are listed below.
This won’t be right for everybody — many people with technical skills may not have the disposition or skills necessary for engaging in policy. People in policy-related paths often benefit from strong writing and social skills, as well as comfort navigating bureaucracies and working with people who hold very different motivations and worldviews.
Other specific forms of career capital
There are other ways to gain useful career capital that could be applied in this career path.
- If you have or gain great communication skills as, say, a journalist or an activist, these skills could be very useful in advocacy and lobbying around AI governance.
- Especially since advocacy around AI issues is still in its early stages, it will likely need people with experience advocating in other important cause areas to share their knowledge and skills.
- Academics with relevant skill sets are sometimes brought into government for limited stints to serve as advisors in agencies such as the US Office of Science and Technology Policy. This isn’t necessarily the foundation of a longer career in government, though it can be, and it can give an academic deeper insight into policy and politics than they might otherwise gain.
- You can work at an AI lab in non-technical roles, gaining a deeper familiarity with the technology, the business, and the culture. (Read our career review discussing the pros and cons of working at a top AI lab.)
- You could work on political campaigns and get involved in party politics. This is one way to get involved in legislation, learn about policy, and help impactful lawmakers, and you can also potentially help shape the discourse around AI governance. Note, though, the previously mentioned risk of polarising public opinion around AI policy. Entering party politics may also limit your potential for impact whenever the party you’ve joined doesn’t hold power.
- You could even try to become an elected official yourself, though it’s obviously competitive. If you take this route, make sure you find trustworthy and highly informed advisors to rely on to build expertise in AI, since politicians have many other responsibilities and won’t be able to focus as much on any particular issue.
- You can focus on developing specific skill sets that might be valuable in AI governance, such as information security, intelligence work, diplomacy with China, etc.
- Other skills: Organisational, entrepreneurial, management, diplomatic, and bureaucratic skills will also likely prove highly valuable in this career path. There may be new auditing agencies to set up or policy regimes to implement. Someone who has worked at high levels in other high-stakes industries, started an influential company, or coordinated complicated negotiations between various groups, would bring important skills to the table.
Want one-on-one advice on pursuing this path?
Because this is one of our priority paths, if you think this path might be a great option for you, we’d be especially excited to advise you on next steps, one-on-one. We can help you consider your options, make connections with others working in the same field, and possibly even help you find jobs or funding opportunities.
Where can this kind of work be done?
Since successful AI governance will require work from governments, industry, and other parties, there will be many potential jobs and places to work for people in this path. The landscape will likely shift over time, so if you’re just starting out on this path, the places that seem most important might be different by the time you’re pivoting to using your career capital to make progress on the issue.
Within the US government, for instance, it’s not clear which bodies will be most impactful when it comes to AI policy in five years. It will likely depend on choices that are made in the meantime.
That said, to help orient you, it seems useful to share our understanding of which parts of the government are generally influential in technology governance and most involved right now. Gaining AI-related experience in government should still serve you well if you end up wanting to move into a more impactful AI-related role down the line, when the highest-impact areas to work in are clearer.
We’ll also give our current sense of important actors outside government where you might be able to build career capital and potentially have a big impact.
Note that this list has by far the most detail about places to work within the US government. We would like to expand it to include more options as we learn more. You can use this form to suggest additional options for us to include. (And the fact that an option isn’t on this list shouldn’t be taken to mean we recommend against it or even that it would necessarily be less impactful than the places listed.)
We have more detail on other options in separate (and older) career reviews, including the following:
- China-related AI safety and governance paths
- Party politics (UK-focused)
- Policy-oriented government jobs (UK-focused)
With that out of the way, here are some of the places where someone could do promising work or gain valuable career capital:
Congress
In Congress, you can either work directly for lawmakers themselves or as staff on a legislative committee. Staff roles on the committees are generally more influential on legislation and more prestigious, but for that reason, they’re more competitive. If you don’t have that much experience, you could start out in an entry-level job staffing a lawmaker and then later try to transition to staffing a committee.
Some people we’ve spoken to expect the following committees — and some of their subcommittees — in the House and Senate to be most impactful in the field of AI. You might aim to work on these committees or for lawmakers who have significant influence on these committees.
House of Representatives
- House Committee on Energy and Commerce
- House Judiciary Committee
- House Committee on Space, Science, and Technology
- House Committee on Appropriations
- House Armed Services Committee
- House Committee on Foreign Affairs
- House Permanent Select Committee on Intelligence
Senate
- Senate Committee on Commerce, Science, and Transportation
- Senate Judiciary Committee
- Senate Committee on Foreign Relations
- Senate Committee on Homeland Security and Government Affairs
- Senate Committee on Appropriations
- Senate Committee on Armed Services
- Senate Select Committee on Intelligence
- Senate Committee on Energy & Natural Resources
- Senate Committee on Banking, Housing, and Urban Affairs
The Congressional Research Service, a nonpartisan legislative agency, also offers opportunities to conduct research that can impact policy design across all subjects.
Executive branch
In general, we don’t recommend taking entry-level jobs within the executive branch for this path because it’s very difficult to progress your career through the bureaucracy at this level. It’s better to get a law degree or relevant master’s degree, which can give you the opportunity to start with more seniority.
The influence of different agencies over AI regulation may shift over time, and there may even be entirely new agencies set up to regulate AI at some point, which could become highly influential. Whichever agency may be most influential in the future, it will be useful to have accrued career capital working effectively in government, creating a professional network, learning about day-to-day policy work, and deepening your knowledge of all things AI.
We have a lot of uncertainty about this topic, but here are some of the agencies that may have significant influence on at least one key dimension of AI policy as of this writing:
- Executive Office of the President (EOP)
- Office of Management and Budget (OMB)
- National Security Council (NSC)
- Office of Science and Technology Policy (OSTP)
- Department of State
- Office of the Special Envoy for Critical and Emerging Technology (S/TECH)
- Bureau of Cyberspace and Digital Policy (CDP)
- Bureau of Arms Control, Verification and Compliance (AVC)
- Office of Emerging Security Challenges (ESC)
- Federal Trade Commission
- Department of Defense (DOD)
- Chief Digital and Artificial Intelligence Office (CDAO)
- Emerging Capabilities Policy Office
- Defense Advanced Research Projects Agency (DARPA)
- Defense Technology Security Administration (DTSA)
- Intelligence Community (IC)
- Intelligence Advanced Research Projects Activity (IARPA)
- National Security Agency (NSA)
- Science advisor roles within the various agencies that make up the intelligence community
- Department of Commerce (DOC)
- The Bureau of Industry and Security (BIS)
- The National Institute of Standards and Technology (NIST)
- CHIPS Program Office
- Department of Energy (DOE)
- Artificial Intelligence and Technology Office (AITO)
- Advanced Scientific Computing Research (ASCR) Program Office
- National Science Foundation (NSF)
- Directorate for Computer and Information Science and Engineering (CISE)
- Directorate for Technology, Innovation and Partnerships (TIP)
- Cybersecurity and Infrastructure Security Agency (CISA)
Readers can find listings for roles in these departments and agencies at the federal government’s job board, USAJOBS; a more curated list of openings for potentially high-impact roles and career capital is on the 80,000 Hours job board.
We do not currently recommend attempting to join the US government via the military if you are aiming for a career in AI policy. There are many levels of seniority to rise through and many people competing for places, and initially you have to spend all of your time doing work unrelated to AI. However, if you already have military experience, it can be valuable career capital for other important roles in government, particularly national security positions. This route is likely to be more promising for military personnel who have been to an elite military academy, such as West Point, or for commissioned officers at rank O-3 or above.
Policy fellowships are among the best entryways into policy work. They offer many benefits like first-hand policy experience, funding, training, mentoring, and networking. While many require an advanced degree, some are open to college graduates.
Think tanks and research organisations
- Center for Security and Emerging Technology (CSET)
- Center for a New American Security
- RAND Corporation
- The MITRE Corporation
- Brookings Institution
- Carnegie Endowment for International Peace
- Center for Strategic and International Studies (CSIS)
- Federation of American Scientists (FAS)
- Alignment Research Center
- Open Philanthropy1
- Institute for AI Policy and Strategy
- Epoch AI
- Centre for the Governance of AI (GovAI)
- Center for AI Safety (CAIS)
- Legal Priorities Project
- Apollo Research
- Centre for Long-Term Resilience
- AI Impacts
- Johns Hopkins Applied Physics Lab
Some top AI labs also have teams dedicated to safety and governance:
- Anthropic is an AI safety company working on building interpretable and safe AI systems. They focus on empirical AI safety research. Anthropic cofounders Daniela and Dario Amodei gave an interview about the lab on the Future of Life Institute podcast. On our podcast, we spoke to Chris Olah, who leads Anthropic’s research into interpretability, and Nova DasSarma, who works on systems infrastructure at Anthropic.
- Google DeepMind is probably the largest and most well-known research group developing general artificial intelligence, and is famous for its work creating AlphaGo, AlphaZero, and AlphaFold. It is not principally focused on safety, but it has two teams dedicated to AI safety: the Scalable Alignment Team, which focuses on aligning existing state-of-the-art systems, and the Alignment Team, which focuses on research bets for aligning future systems.
- OpenAI, founded in 2015, is a lab that is trying to build artificial general intelligence that is safe and benefits all of humanity. OpenAI is well known for its language models like GPT-4. Like DeepMind, it is not principally focused on safety, but has a safety team and a governance team. Jan Leike (head of the alignment team) has some blog posts on how he thinks about AI alignment.
- Ought is a machine learning lab building Elicit, an AI research assistant. Their aim is to align open-ended reasoning by learning human reasoning steps, and to direct AI progress towards helping people evaluate evidence and arguments.
(Read our career review discussing the pros and cons of working at a top AI lab.)
International organisations and multilateral bodies may also play an important role in AI governance:
- Organisation for Economic Co-operation and Development (OECD)
- International Atomic Energy Agency (IAEA)
- International Telecommunication Union (ITU)
- International Organization for Standardization (ISO)
- European Union institutions (e.g., European Commission)
- Simon Institute for Longterm Governance
Our job board features opportunities in AI safety and policy.
How this career path can go wrong
Doing harm
As we discuss in an article on accidental harm, there are many ways to set back a new field that you’re working in when you’re trying to do good, and this could mean your impact is negative rather than positive. (You may also want to read our article on harmful careers.)
It seems likely there’s a lot of potential to inadvertently cause harm in the emerging field of AI governance. We discussed some possibilities in the section on advocacy and lobbying. Some other possibilities include:
- Pushing for a given policy to the detriment of a superior policy
- Communicating about the risks of AI in a way that ratchets up geopolitical tensions
- Enacting a policy that has the opposite of its intended effect
- Setting policy precedents that could be exploited by dangerous actors down the line
- Funding projects in AI that turn out to be dangerous
- Sending the message, implicitly or explicitly, that the risks are being managed when they aren’t, or that they’re lower than they in fact are
- Suppressing technology that would actually be extremely beneficial for society
The trouble is that we have to act with incomplete information, so it may never be clear when, or whether, people in AI governance are falling into these traps. Being aware of these potential failure modes will help you stay alert to them, and you should remain open to changing course if you find evidence that your actions may be doing damage.
And we recommend keeping in mind the following pieces of general guidance from our article on accidental harm:
- Ideally, eliminate courses of action that might have a big negative impact.
- Don’t be a naive optimizer.
- Have a degree of humility.
- Develop expertise, get trained, build a network, and benefit from your field’s accumulated wisdom.
- Follow cooperative norms.
- Match your capabilities to your project and influence.
- Avoid hard-to-reverse actions.
Burning out
We think this work is exceptionally pressing and valuable, so we encourage our readers who might have a strong personal fit for governance work to test it out. But going into government, in particular, can be difficult. Some people we’ve advised have gone into policy roles with the hope of having an impact, only to burn out and move on.
At the same time, many policy practitioners find their work very meaningful, interesting, and varied.
Some roles in government may be especially challenging for the following reasons:
- Some roles can be very fast-paced, involving relatively high stress and long hours. This is particularly true of roles in Congress and senior executive branch positions, and much less so of think tank or junior agency roles.
- It can take a long time to get into positions with much autonomy or decision-making authority.
- Progress on the issues you care about can be slow, and you often have to work on other priorities. Congressional staffers in particular typically have very broad policy portfolios.
- Work within bureaucracies faces many limitations, which can be frustrating.
- It can be demotivating to work with people who don’t share your values. That said, policy work can select for altruistic people, even if they have different beliefs about how to do good.
- The work isn’t typically well paid relative to comparable positions outside of government.
So we recommend speaking to people in the kinds of positions you might aim to have in order to get a sense of whether the career path would be right for you. And if you do choose to pursue it, look out for signs that the work may be having a negative effect on you and seek support from people who understand what you care about.
If you end up wanting or needing to leave and transition into a new path, that’s not necessarily a loss or a reason for regret. You will likely make important connections and learn a lot of useful information and skills. This career capital can be useful as you transition into another role, perhaps pursuing a complementary approach to AI governance and coordination.
What the increased attention on AI means
We’ve been concerned about risks posed by AI for years. Based on the arguments that this technology could potentially cause a global catastrophe, and otherwise have a dramatic impact on future generations, we’ve advised many people to work to mitigate the risks.
The arguments for the risk aren’t completely conclusive, in our view. But they are worth taking seriously, and given that few others in the world seemed to be devoting much time to figuring out how big the threat was or how to mitigate it (while progress in making AI systems more powerful was accelerating), we concluded it was worth ranking among our top priorities.
Now that there’s increased attention on AI, some might conclude that it’s less neglected and thus less pressing to work on. However, the increased attention on AI also makes many interventions potentially more tractable than they had been previously, as policymakers and others are more open to the idea of crafting AI regulations.
And while more attention is now being paid to AI, it’s not clear it will be focused on the most important risks. So there’s likely still a lot of room for important and pressing work positively shaping the development of AI policy.
Read next
If you’re interested in this career path, we recommend checking out some of the following articles next.
These degrees are highly valuable for those hoping to take on important roles in the US federal government.
The US government is likely to be a key actor in how advanced AI is developed and used in society, whether directly or indirectly.
Working at a leading AI lab is an important career option to consider, but the impact of any given role is complex to assess.
Learn more
Top recommendations
- AI Governance Course – AGI Safety Fundamentals from BlueDot Impact
- Podcast: Tantum Collins on what he’s learned as an AI policy insider
- A list of AI policy resources to learn about the field and recent developments
Further recommendations
- Article: Working in US AI policy
- Podcast: Tom Kalil on how to have a big impact in government & huge organisations, based on 16 years’ experience in the White House
- Podcast: Holden Karnofsky on how AIs might take over even if they’re no smarter than humans, and his four-part playbook for AI risk
- Podcast: Lennart Heim on the compute governance era and what has to come after
- Podcasts: Nathan Labenz on the final push for AGI, understanding OpenAI’s leadership drama, and red-teaming frontier models, and on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps
- Career review: China-related AI safety and governance paths
- Podcast collection: The 80,000 Hours Podcast on Artificial Intelligence
- US policy career resources on the Effective Altruism Forum
- Jobs that can help with the most important century by Holden Karnofsky
- 12 tentative ideas for US AI policy by Luke Muehlhauser of Open Philanthropy
- Why and how governments should monitor AI development by Jess Whittlestone and Jack Clark
- AGI safety career advice by Richard Ngo of OpenAI
- The longtermist AI governance landscape: a basic overview on the Effective Altruism Forum
- Four Battlegrounds: Power in the Age of Artificial Intelligence by Paul Scharre
- The New Fire: War, Peace, and Democracy in the Age of AI by Ben Buchanan and Andrew Imbrie
- Think tank reports, such as from CSET, CNAS, CSIS
- Government strategies, such as the White House’s 2023 US National Artificial Intelligence R&D Strategic Plan, NIST’s 2023 AI Risk Management Framework, the DOD’s 2022 Responsible AI Strategy and Implementation Pathway, and the 2021 Final Report of the National Security Commission on AI
- Lessons from the Development of the Atomic Bomb by Toby Ord
- Collection of work on ‘Should you focus on the EU if you’re interested in AI governance for longtermist/x-risk reasons?’ on the Effective Altruism Forum
Read next: Learn about other high-impact careers
Want to consider more paths? See our list of the highest-impact career paths according to our research.
Notes and references
1. Open Philanthropy is 80,000 Hours’ largest funder.
2. If you are not a United States citizen but aim to work in US policy, we think this article offers solid advice.
3. There may be good reasons in favour of the labs cooperating to reduce risks, but there might also be legal obstacles to some forms of cooperation — such as anti-trust laws. Figuring out how labs can act responsibly while also complying with all relevant laws may be an impactful course of action.
4. Here’s one summary of arguments for and against the wisdom of the letter.
5. ARC is advised by Holden Karnofsky, a co-founder of Open Philanthropy, which is 80,000 Hours’ largest funder.
6. There are a few important caveats to this claim. A lot of important AI policy research and advocacy appears likely to happen in DC-based think tanks, and it can be difficult to do this work right if you lack the local context.