Experience with an emerging power (especially China) - 80,000 Hours, 31 Oct 2023 - https://80000hours.org/skills/emerging-power/

China will likely play an especially influential role in determining the outcome of many of the biggest challenges of the next century. India also seems very likely to be important over the next few decades, and many other non-western countries — for example, Russia — are also major players on the world stage.

A lack of understanding and coordination between all these countries and the West means we might not tackle those challenges as well as we can (and need to).

So it’s going to be very valuable to have more people gaining real experience with emerging powers, especially China, and then specialising in the intersection of emerging powers and pressing global problems.

In a nutshell: Many ways of solving the world’s most pressing problems will require international coordination. You could help with this by building specific experience of the culture, language, and policies of China or another emerging power. Once you have that expertise, you could consider working at an AI lab, at a think tank, in government, or in research roles.

Key facts on fit

You’ll need fantastic cross-cultural communication skills (and probably a knack for learning languages), a keen interest in international relations, strong networking abilities, and excellent judgement to be a good fit.

Why is experience with an emerging power (especially China) valuable?

China in particular plays a crucial role in many of the major global problems we highlight. For instance:

  • The Chinese government’s spending on artificial intelligence research and development is estimated to be on the same order of magnitude as that of the US government.1
  • As the largest trading partner of North Korea, China plays an important role in reducing the chance of conflict, especially nuclear conflict, on the Korean peninsula.
  • China is the largest emitter of CO2, accounting for 30% of the global total.2
  • China recently became the largest consumer of factory-farmed meat.3
  • China is one of the most important nuclear and military powers.
  • As home to nearly 20% of the world’s population,4 it will play a central role in mitigating pandemics.
  • China is increasingly a leader in developing new technologies; Beijing is widely seen as a serious competitor to Silicon Valley5 and is the majority source of non-US ‘unicorns.’6

As a result, it’s difficult to understand the scale and urgency of these pressing problems without understanding the situation in China. What’s more, it’ll be difficult to solve them without coordination between Western groups and their Chinese equivalents.

At the same time, China is not well understood in the West.

Interest in China has grown in the last decade, but it still lags behind interest in many other countries. For instance, in American colleges and universities, three times as many students study French as study Chinese,7 despite the much larger starting cultural difference.

All this suggests that having experience with China could be an extremely useful skill for improving collaboration between China and the West on many of the world’s most pressing problems, avoiding potentially dangerous conflicts and arms-race-like dynamics, and improving the actions and policy of governments and institutions in both China and the West.

Of course, a similar argument could be made for gaining expertise in other powerful nations, for example: India, Brazil, or Russia.

However, we see Russia as likely to be less important than China because it has a weaker technology industry, so it isn’t nearly as likely to play a leading role in AI or biotech development. Russia also has a much smaller economy and population, and hasn’t been growing at anywhere near China’s rate, so it seems less likely to become a central global power. Also, as a result of the Russia-Ukraine war, most Western citizens should probably avoid travelling to Russia.

For similar reasons, India and Brazil also seem less likely than China to play a leading role in shaping new technologies. And because India has many English speakers, more people are already able to fill the coordination gap, reducing the need for additional specialists.

Given this, we’ve spent most of our time researching China. As a result, we focus less on other emerging powers in this article, and most of our specific examples focus on China.

However, we do think that gaining experience with these other countries is likely to be valuable and is currently under-explored, especially given how important they could become in the next few decades. In fact, if you’re at the beginning of your career, it may even be valuable to think about which countries are most likely to be particularly influential in a few decades and focus on gaining expertise there. Becoming an expert in any emerging global power could be a very high-impact option and could be the best option for some people.

Safety when spending time abroad

Visiting some of these countries can be dangerous, and that danger can change depending on fast-moving events.

We’d always recommend reading up on your government’s travel advice for the country you’re planning to visit. Don’t travel if your government recommends against it (for example, as of September 2023, the UK and US governments recommend against travel to Russia).

The UK government’s foreign travel advice website is a helpful resource.

What does building and using experience with an emerging power involve?

Building this skill set involves working in roles that will give you real opportunities to learn about an emerging power, especially in the context of trying to solve particularly pressing problems.

Ideally, you’ll pick one emerging power, and try to gain experience specifically in and about that country. This might include working in policy, as a foreign journalist, in some parts of the private sector, in philanthropy, in academic research, or in any number of other roles from which you’ll learn about an emerging power (some of which we discuss in more detail below).

These roles overlap with ways you might build and use other impactful skills, like research or communicating ideas. That’s because, in order to have an impact with your experience of an emerging power, you’ll usually need to use other skills as well: for example, you might be doing research on AI safety in China (using research skills), developing or implementing US foreign policy (using policy and political skills), or writing as a journalist in India (using communication skills).

The distinguishing feature of this skill is that you’ll build deep cultural knowledge, a broad network, and real expertise about an emerging power, which will open up unique and high-impact ways to contribute.

Working with foreign organisations on any topic requires an awareness of their culture, history, and current affairs, as well as good intuitions about how each side will react to different messages and proposals. This involves understanding issues like:

  • What are attitudes like, in the emerging power you’re learning about, around doing good and social impact?
  • If you wanted to make connections with people in the emerging power interested in working on major global challenges, what messages should you focus on, and what pitfalls might you face? How does professional networking function in the emerging power in general?

We expect that fully understanding these topics will require deep familiarity with the country’s values, worldviews, history, customs, and so on — noting, of course, that these also vary substantially within large countries like China, India, and Russia.

Eventually, you’ll move from building the skill to a position where you can use this experience to help solve pressing global problems. To use this skill best, you might also need to combine it with knowledge of a relevant subject — some of which we discuss here. We discuss some ways to have an impact with this skill in the final section below.

Example people

How to evaluate your fit

How to predict your fit in advance

This is likely to be a great option for you if you are from one of these countries, if you have spent a substantial amount of time there, or if you’re really obsessively interested in a particular country. This is because the best paths to impact likely require deep understanding of the relevant cultures and institutions, as well as language fluency (e.g. at the level where you might be able to write a newspaper article about biotechnology in the language).

If you’re not sure, you could study in one of these countries for a month, or do some other kind of short visit or project, to see how interesting you find it. (Although recent tension between the US and China could mean that spending significant time in China could exclude you from certain government positions in the US or other countries — many of which could be very high-impact career options — so this is a risk.)

Other signs you might be a great fit:

  • Bilingualism or other cross-cultural communication skills. Experience living abroad or working in teams with highly diverse backgrounds could help build this.
  • Strong networking abilities and social skills.
  • Excellent judgement and prudence. This is important because there’s a real possibility of accidentally causing harm when interacting with emerging powers.

We think it’s also important that you’re interested in trying to help all people equally and identifying the most effective ways to help, aiming to have well-calibrated judgements that are justified with evidence and reason. We’ve found these attitudes are quite rare, especially in foreign policy, which is often focused on national interest.

How to tell if you’re on track

Only a few people we know have seriously tried to build this skill, so we’re not sure exactly what success looks like.

It’s worth asking “how strong is my performance in my job?” for whatever you are doing to build this skill. Don’t just ask yourself — you’ll get the best information by talking to the people you work with or the people who you think are excellent at understanding the emerging power you’re focusing on.

Hopefully, after 1–2 years, you will have:

  • Started building a strong network in the country you’re learning about
  • Learned something substantial and impressive, like speaking a language to (near) fluency
  • Found a fairly stable job relevant to the emerging power you’re learning about, in which you can rapidly learn more (like one of the options we list below)
  • Built up knowledge of a global problem that you can combine with your experience of the emerging power to have an impact later

How to get started building experience with an emerging power

Broadly, the aim is to get a useful combination of the following as quickly as possible:

  1. Knowledge of the intersection of an emerging power and an important global problem, such as the topics listed below
  2. Knowledge of and connections with the community working on the pressing global problems you want to help tackle
  3. A general understanding of the language and culture of an emerging power, which probably requires spending at least a year living in the country. (Though again, having a background in China or Russia — and possibly even just visiting — could exclude you from some Western government jobs.)

Below is a list of specific career steps you can take to gain the above knowledge. Most people should pursue a combination depending on their existing expertise and personal fit.

For many people, the best option at the start of your career won’t be any of the steps in this section. Instead, you could take a step towards building a different skill that you’ll use in conjunction with experience of an emerging power — even if that initial step has absolutely nothing to do with an emerging power.

This option has significant flexibility, since it would be easy to switch into another career if you decide not to focus on an emerging power.

To learn more, we’d particularly highlight our articles on how to get started building:

We’d guess that these are the most relevant skills to combine with experience of an emerging power, but we’re not sure — for more, see all our articles on skills.

But if you’re ready to start building this skill in particular, here are some ways to do it.

Go to the country and learn the language

If you’re a fluent English speaker, it takes around six months of full-time study to learn a Western European language. For other languages — like Chinese — this time might be more like 18 months.8 (Learning to write Chinese can take much longer and isn’t clearly worth it.)

You can learn most effectively by living in the country and aiming to speak the language 100% of the time.

We’ve written about learning Chinese in China in more detail.

We’re not sure how valuable it would be to learn other languages common in emerging powers, like Hindi, Russian, or Portuguese. In general, it’ll depend on the ease of learning the language and the prevalence of English in the country you’re focusing on — especially among decision makers.

Teaching English in an emerging power

What’s the easiest job for someone smart but lazy? The top answer to this question on Quora claims that it’s teaching English in China.

The huge demand for English teachers means that this option is open to most native-English-speaking college graduates. These positions typically pay $15,000–$30,000 per year, include accommodation, and might only require four hours of work per day. For instance, a typical one-year programme offered by First Leap pays a salary of $2,100–$2,800 per month. Benefits include work visa sponsorship, a flight to China, and a settling-in allowance of up to $1,500. Another programme, Teach in China, offers $900–$1,800 per month in compensation, but also provides rent-free housing and can be pursued for just one semester. This is more than enough to live on in a small Chinese city. You can earn even more by doing private tutoring as well, although the Chinese government is currently clamping down on the private tutoring industry.

It’s harder to get paid positions teaching in India without previous teaching experience.

This option won’t build skills and connections as useful as the other options in this list, though you will be able to learn about a culture and study a language at the same time. Doing it through a prestigious fellowship — such as the Fulbright English Teaching Assistant Programme — could also mitigate this downside.

Build connections with people working on top problems

If you are a citizen of an emerging power, then we’d guess the best first step would be to get involved in the community of people working on the world’s most pressing problems and ideally volunteer or intern with some organisations working on these risks, like those on our list of recommended organisations.

If you have connections and trust with other altruistically-minded people, you can help them learn about China and help coordinate their efforts.

With that in mind, we’d also recommend getting involved with the effective altruism community, where there are lots of people working on the kinds of global problems that this skill is relevant for.

Work in top companies or a foreign office of a top Western company

Working at any high-performing company — such as a top startup — is generally a great initial step for building career capital. And if that company is based in an emerging power, you’ll get to learn about the country at the same time. For example, you could look at startups that have been funded by top Chinese venture capital firms, such as HongShan Capital, IDG Capital, and Hillhouse Capital. One VC even told us that they’d provide job recommendations if asked, as they often know which of their companies are performing best. Read more about startup jobs.

You don’t need a technical background to work at a startup: there are often roles available in areas like product management, business development, operations, and marketing.

In general, the aim would be to learn about an emerging power, gain useful experience, and make relevant connections — rather than push any particular agenda or otherwise try to have an impact right away.

Another advantage of this option is that you could follow it into earning to give. In some countries (like China), charities, research, and scholarships can often only be funded by citizens of that country, which could make earning to give a more attractive option if you are a citizen.

You could also aim to work at an office of a top Western consultancy, finance firm, or professional services firm in the country you’re learning about. This offers many of the standard benefits of this path — namely a prestigious credential, flexibility, and general professional development — while also letting you learn about an emerging power. We’ve heard some claims that your career might advance faster if you start in London or New York, but this advantage seems to be shrinking as opportunities in emerging powers grow in number and importance. However, access to these jobs can be precarious and highly dependent on your nationality — for example, China is increasingly cracking down on foreign consultancies. Another consideration is that salaries are generally lower in emerging powers, even at international firms (with the exception of Hong Kong).

Do relevant graduate study

Which subjects?

If you want to work on issues around future technology, then it might be better to study something like synthetic biology or machine learning, and then increase your focus on an emerging power later.

Alternatively, you could start studying economics, international relations, and security studies, with a focus on a particular emerging power. Ideally, you could also focus on issues like emerging technologies, conflict, and international coordination. See ideas for high-impact research within China studies.

It’s also useful to have a general knowledge of the language, history, and politics of the emerging power you’re studying. So another way to get started might be to pursue area or language studies (one source of support available for US students is the Foreign Language and Area Studies Fellowships Program), perhaps alongside one of the topics listed above.

All of these subjects are useful, so we’d recommend putting significant weight on personal fit in choosing between them. Some will also better keep your options open, such as economics and machine learning. See our general advice on choosing graduate programmes.

Should you study in the country you’re gaining experience with?

Once you’ve chosen a programme that’s a good fit, we think it’s generally best to aim to go to the highest-ranked university possible — whether that’s in the West or the country you’re studying — rather than specifically aiming to study in a foreign country. It’s probably more useful to gain an impressive credential than spend time living in the country since there are many other ways to do that.

An alternative is to look for a joint programme, such as — in the case of China — the dual degree offered by the Johns Hopkins School of Advanced International Studies and the Department of International Relations at Tsinghua University. Johns Hopkins is highly ranked for policy master’s degrees, so this course combines a good credential with the opportunity to study in China.

You might also consider the Schwarzman Scholars programme — a one-year, fully-funded master’s programme at Tsinghua University in Beijing. Approximately 20% of all US students studying in China are on this programme.

If you don’t yet have many connections with the effective altruism community and want to get involved, then you could also use graduate study as an opportunity to gain these connections by being based in one of the main hubs, including the San Francisco Bay Area, London, Oxford, Cambridge, and Boston.

If you’re a Chinese citizen interested in studying in the West, you might want to consider that:

Work as a foreign journalist

If you’re proficient in a foreign language, you could try becoming a foreign correspondent in the country you’re gaining experience with. It could help if you have a related degree from a top university (e.g. China studies or international relations with a focus on East Asia).

English-language news agencies such as Reuters, the Associated Press, Agence France-Presse, and Bloomberg maintain large bureaus across the world (including in Beijing, Shanghai, and Hong Kong) and often hire younger journalists.

Most major international publications such as The New York Times, The Wall Street Journal, The Washington Post, and The Financial Times also have a small but significant presence in many major world cities where you can apply for internships. A fresh graduate should expect to intern for about six months before finding a full-time position.

If you’re focused on China and coming from the West, it is often easier to find work at China-based English-language publications where you can do original journalism, such as the South China Morning Post (which has a graduate scheme), Caixin Media, or Sixth Tone. We do not recommend working for Chinese state media, as there will be few opportunities to create original content and most work will likely be polishing articles translated from English.

We also don’t recommend directly writing about effective altruism in China because we think it’s particularly easy to cause harm.

Work in philanthropy in an emerging power

If you’re interested in doing good in an emerging power, it helps to understand attitudes about doing good in that country. One way to do that is to learn about philanthropy. You could also aim to make connections with philanthropists in an emerging power — this comes with the added benefit of building a network of (often wealthy) do-gooders.

One career option here is to work at research institutions dedicated to the topic of philanthropy. For example, in China, these include:

You can also find a list of other philanthropy research centres from the Global Chinese Philanthropy Initiative.

There are also Western foundations that work in emerging powers. The Berggruen Institute, Ford Foundation, and Gates Foundation all work in China.

To explore this, you could attend relevant conferences. For instance, if you’re a social entrepreneur interested in China, you could attend a Nexus Global Youth Summit in the region; Nexus is a network that brings together young philanthropists and social entrepreneurs. To learn more about the latest developments in Chinese philanthropy, you could attend the International Symposium on Global Chinese Philanthropy, run by the Global Chinese Philanthropy Initiative, or the Chinese and Chinese American Philanthropy Summit, run by the Asia Society in Hong Kong.

Before pursuing these options, it might be useful to first learn about best practices in Western philanthropy, perhaps by taking any role (even a junior one) at Open Philanthropy, GiveWell, or other strategic philanthropy organisations.

What other knowledge should you gain to have an impact?

We think the most pressing global problems often relate to global catastrophic risks and emerging technology — though there are many other important issues you could work on, like factory farming.

Once you’ve chosen a particular emerging power, you can gain expertise in the following topics. These are all vital issues to understand in the West as well, but the intersection of these issues with China (and other emerging powers) is particularly neglected.

AI safety and strategy

Safely managing the development of transformative AI may require unprecedented international coordination, and it won’t be possible to achieve this without an understanding of global emerging powers and coordination with organisations in these countries. This means understanding issues like:

  • What is the state of AI development in the emerging power you’re learning about?9
  • What attitudes do technical experts in the emerging power have towards AI safety and their social responsibility? Who is most influential?
  • How does the government of the emerging power shape its technology policy? What attitudes does it have towards AI safety and regulation in particular?
  • What actions are likely to be taken by the government and companies in the emerging power concerning AI safety?

(Read more about AI strategy and policy, and about China-related AI safety and governance paths.)

Biorisk

Global coordination is also necessary to reduce biorisk. This means understanding issues like:

  • What is the state of synthetic biology research in the emerging power you’re learning about?10
  • What attitudes do biology researchers in the emerging power have towards safety and social responsibility?
  • How does government technology policy in the emerging power relate to the risks from this technology?

International coordination and foreign policy

Expertise on any of the following issues (among others) could be highly useful:

  • How, when, and why does the emerging power you’re learning about provide public goods globally?
  • If you’re focusing on China, what do its foreign non-government organisation laws and domestic charity laws mean for its international collaboration on global causes?
  • What are the emerging power’s foreign policy priorities, and how is it likely to handle the possibility of global catastrophic risks?
  • How can coordination between the West and the emerging power you’re focusing on be increased and the chance of conflict be decreased?
  • How should Western government policy concerning catastrophic risks relate to policy in the emerging power?

Other global problems

Many of the key organisations working to reduce factory farming are expanding rapidly into China, India, and Brazil, so expertise in these countries and factory farming is also useful.

Knowledge of China seems less important within global health and development than within many of the other global problems we focus on. This is because China is not as important a player in international aid and global health. It also seems easier to find people who are already experts on the intersection of China and development policy than on the topics listed above. We’d guess that knowledge of India is more relevant to global health and development.

Once you have this skill, how can you best apply it to have an impact?

In general, having an impact with this skill involves three steps — not necessarily in this order:

  1. Choosing 1–3 top problems to focus on. It’s possible you’ll want to do something highly problem-specific (like doing AI research in an emerging power), but it’s also possible you’ll want to do something more broadly applicable (like working as a journalist). Either way, the problem you work on is a substantial driver of your impact, so it helps to have 1–3 top problems in mind.
  2. Building a complementary skill, such as research, communicating ideas, organisation-building, or policy and political skills. Most ways of having an impact are going to involve applying your experience with an emerging power using one of these other skills.
  3. Finding a job that uses your complementary skill in a way that’s highly relevant to the emerging power you have experience with. Decide between jobs based on your personal fit. If you can’t find one of those jobs, try to get a job that continues building your skills. For example, there might be a great policy job available that has nothing to do with emerging powers — and you can always switch back later in your career.

With that in mind, we’d recommend reading the relevant article for your complementary skill — these articles also contain ideas on having an impact using that skill. Depending on your personal fit, those ideas could be higher impact than the specific suggestions in this article.

Also, many of the options in the section above on how to get started could easily become impactful as you gain experience, for example:

Below we list some additional options that are harder to enter without a few years building up your skills.

Work in an AI lab in safety or policy

If you’re a citizen of an emerging power, especially China, you could try working for an AI lab in that country. The lab could be commercial or academic.

You could try to get a role working in technical safety research, and, in the long run, you could aim to progress to a senior position and promote increased interest in and implementation of AI safety measures internally.

You could also try working as a governance or policy advisor at a top AI lab — this could be a lab based in the emerging power, or a role at a Western AI lab focused on emerging power dynamics.

It’s possible that other roles in labs could be good for building AI-related career capital — but many such roles could be harmful. (For more, read our career review of working at leading AI labs.)

To learn more, read our career review of China-related AI safety and governance paths.

Work at a think tank

You could work at a Western think tank, studying issues specifically relevant to pressing problems in the emerging power you’re focusing on. Some think tanks focus more on the most relevant topics than others: for instance, the Center for Security and Emerging Technology, the Center for a New American Security, the Centre for the Governance of AI, the Brookings Institution, and the Carnegie Endowment for International Peace all seem relevant to issues related to existential risks. (There are doubtless others we’re not aware of.) One risk is that it can be much more difficult to work on China-Western coordination if you’ve worked at a think tank that’s generally seen as particularly anti-China.

Beyond that, it could also be useful to work on anything concerning international coordination and foreign policy, such as the US-China Relations Independent Task Force of the Council on Foreign Relations or the Kissinger Institute on China and the United States. Another option is to work at a joint partnership institution, such as the Carnegie-Tsinghua Center for Global Policy, for example by applying to its Young Ambassadors Program in Beijing.

Unfortunately, it’s difficult to enter roles in Chinese think tanks if you’re not a Chinese citizen, and this may also be the case in other emerging powers (we’re not sure).

If you are a Chinese citizen, you could aim to work in a top Chinese think tank. You could look to work at a think tank doing AI-related work or look more broadly at think tanks such as the Chinese Academy of Social Sciences and the China Institutes of Contemporary International Relations.

You can read more about think tank roles in our separate career profile.

Work in roles focused on an emerging power in organisations focused on reducing existential risks

Many key organisations working on existential risks want to better understand China to inform their work. For instance, representatives of many AI risk research organisations we recommend have attended conferences in China.

These organisations struggle to find altruistically motivated people with deep knowledge of top problems as well as knowledge of China. They also struggle to find people connected to relevant Chinese experts. So you could use this skill set to aid organisations working on existential risks.

Academic research in an emerging power

Academic research could be a very high-impact career path, especially when the research is focused on a top problem, like biorisk research or technical AI safety research.

If you want that research to have an impact, your role as an academic could become closer to advocacy, using a communication skill set. For example, you could work on AI safety at a top Chinese university lab, which could be valuable both for making progress on technical safety problems and for encouraging interest in AI safety among other Chinese researchers — especially if you progress to take on teaching or supervisory responsibilities. (Read more.)

Other options

Advising parts of international organisations focused on AI, such as the UN Secretary General’s High-level Panel on Digital Cooperation or the OECD’s AI Policy Observatory, could provide opportunities for impact.

In industry, it could be worth exploring opportunities in semiconductor or cloud computing companies in emerging powers, especially in China. This is based on our view that shaping the AI hardware landscape could be a high-impact career path.

You might also consider supporting the translation of materials related to pressing problems into the language of the emerging power, in particular reputable academic materials — although be aware that this can be easy to get wrong.

Finally, there are likely many other promising opportunities to apply this skill now and in the future that we don’t know about. After all, a notable thing about this skill is that it involves gaining knowledge that Western organisations — like 80,000 Hours — lack by default. So if you go down this route you may well discover novel opportunities to use it.

Find jobs that use experience with an emerging power

If you think you might be a good fit for this skill and you’re ready to start looking at job opportunities that are currently accepting applications, see our curated list of opportunities. You could filter by policy or location to find relevant roles.


Policy and political skills https://80000hours.org/skills/political-bureaucratic/ Mon, 18 Sep 2023 14:19:27 +0000
    Suzy Deuster wanted to be a public defender, a career path that could help hundreds receive fair legal representation. But she realised that by shifting her focus to government work, she could improve the justice system for thousands or even millions. Suzy ended up doing just that from her position in the US Executive Office of the President, working on criminal justice reform.

    This logic doesn’t just apply to criminal justice. For almost any global issue you’re interested in, roles in powerful institutions like governments often offer unique and high-leverage ways to address some of the most pressing challenges of our time.

    In a nutshell: Governments and other powerful institutions are often crucial forces in addressing pressing global problems, so learning to navigate, improve and assist these institutions is a route to having a big impact. Moreover, there are many positions that offer a good network and a high potential for impact relative to how competitive they are.

    Key facts on fit

    This skill set is fairly broad, which means it can potentially be a good fit for a wide variety of people. For many roles, indications of fit include being fairly social and comfortable in a political environment — but this isn’t true for all roles, and if you feel like that’s not you it could still be worth trying out something in the area.

    Why are policy and political skills valuable?

    We’ll argue that:

    Together, this suggests that building the skills needed to get things done in large institutions could give you a lot of opportunities to have an impact.

    Later, we’ll look at:

    Governments (and other powerful institutions) have a huge impact in the world

    National governments are hugely powerful.

For a start, they command the spending of huge sums of money. The US government's federal budget is approximately $6.4 trillion per year — roughly the combined annual revenue of the world's 14 largest companies (although only around $1.7 trillion per year of that is discretionary spending). Many other Western countries spend hundreds of billions of dollars a year.

    And it’s not just money. Governments produce laws governing the actions of millions — or billions — and have unique tools at their disposal, including taxation and tax breaks, regulation, antitrust actions, and, ultimately, the use of force.

    The US spends nearly a trillion dollars a year on its military (although this is an outlier — in other Western countries it’s more like tens of billions).

    Why does this scale matter?

    Well, we’ll argue that your chances of reaching a government role in which you can have a large influence are probably high enough that in expectation you can have a significant impact, given the huge scale of government action.

    And it’s not just governments. Most of the advice in this article can be applied to any powerful institution, such as an international body or organisation like the United Nations. Much of what we say even applies to jobs at large corporations.

    Governments and other major institutions play a major role in addressing the world’s most pressing problems

    National governments and international bodies — in particular the US, UK, and EU — are already working on some of the problems we have identified as most pressing. For example:

    • Biorisk: The UK government released the UK Biological Security Strategy aimed at preventing future pandemics in June 2023. The US Centers for Disease Control and Prevention (CDC) works on public health in the US and is also one of the most important organisations working on global disease control. The US defence and intelligence community also works in this area. For instance, the Department of Defense does a lot of work on infectious diseases and assists other countries’ efforts to prevent the proliferation of biological weapons.
• AI safety and public policy: In her annual State of the Union address, the President of the European Commission told the European Parliament that the EU should work to mitigate the risk of extinction from AI. The White House issued an executive order on AI, which — among other things — requires developers of the most powerful AI systems to develop safety standards and tests and share the results with the US government. The Defense Advanced Research Projects Agency (DARPA) has a program on explainable AI, a component of AI safety research. The UK government has set up the AI Safety Institute. And as AI becomes more important, governments will likely become more involved.
    • Nuclear security: The US has the world’s most powerful military and the second biggest stockpile of nuclear weapons. Federal agencies such as the Department of Defense, the Department of Energy, and the State Department are important for preventing nuclear catastrophe.

Governments also play a major role in pretty much every other global issue you can think of (including basically every issue we have profiles on, such as global health, climate change, and factory farming).

    Throughout this article, we focus on the US because we think it has particular influence in areas related to the problems we think are most pressing, and because it’s where we have the most readers. However, we think these skills are also valuable to build if you’re based in many other countries (and we also have advice specifically about the UK).

    Beyond governments, there are also international organisations and large companies that are important for solving certain problems. For example, the Biological Weapons Convention plays a unique role in preventing biological catastrophes, while leading AI labs and large tech companies have a crucial influence over the development of AI.

    To see lists of particularly relevant institutions for various problems, see our problem profiles and job board.

    You can create change

You might think that, even if you work at an important institution, you won't have much impact because you won't really be able to affect anything. You'll have to carry out the will of elected officials, who are bound by the electorate, institutional constraints, and special interests. And while this is definitely true in many cases, we do think there are opportunities to have at least a small effect on the actions of these large and powerful institutions.

    Frances Kelsey was an academic and a pharmacologist. But, in 1960, she took a major career step when she was hired by the FDA. Just one month into her new career in government, she was given her first assignment to review a drug: thalidomide. Despite considerable pressure from the drug’s manufacturer, Kelsey insisted that it be tested more rigorously.

    And so, while more than 10,000 children across the world were born with birth defects as a result of thalidomide — living with life-long deformed limbs and defective organs — only 17 such children were born in the US. Kelsey was hailed by the American public as a hero and was awarded the President’s Award for Distinguished Federal Civilian Service in 1962.1

    But why was a mid-level official — only one month into her new job — able to have such an impact?

    First, there’s just a huge amount to do, and senior officials don’t have that much time.

For example, in the US, there are 535 members of Congress and around 4,000 presidential appointees in the executive branch. That might sound like a lot, but think about it this way: on average, each of these people oversees about 0.02% of the US federal budget — over $1 billion. It would be literally impossible to micromanage that amount of activity.

    This is only a very rough heuristic, but by dividing the $1.7 trillion discretionary federal budget by the number of people at different levels of seniority, we can estimate the average budget that different subsets of people in the government oversee.2

Subset of people | Approximate number | Budget per person per year within this subset
All federal employees (except US Postal Service workers) | 2.3M | $700,000
Federal employees working in Washington DC | 370,000 | $4.6M
Senior Executive Service and political appointees | 12,000 | $142M
Political appointees | 4,000 | $425M
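As a quick sanity check, the table's figures can be reproduced by dividing the discretionary budget by each headcount. This is only a rough sketch using the approximate numbers quoted above, not precise statistics:

```python
# Rough reproduction of the table above: average discretionary budget
# overseen per person per year, using the approximate figures from the text.
DISCRETIONARY_BUDGET = 1.7e12  # USD per year (approximate)

headcounts = {
    "All federal employees (except US Postal Service workers)": 2_300_000,
    "Federal employees working in Washington DC": 370_000,
    "Senior Executive Service and political appointees": 12_000,
    "Political appointees": 4_000,
}

for subset, n in headcounts.items():
    # Average budget per person per year within this subset
    print(f"{subset}: ${DISCRETIONARY_BUDGET / n:,.0f}")
```

Rounding these results recovers the table's entries ($739,130 rounds to roughly $700,000, $4,594,595 to $4.6M, and so on); as noted below, this is an average, and likely an overestimate for any given individual.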

    Note that this method is just an estimate of the average and there are some reasons to think it’s probably too high.3

    Nevertheless, these figures are so high that if you can help those budgets be used just a little more efficiently, it could be worth millions of dollars of additional spending in the area of focus.

    And, in other ways, this is an underestimate of the responsibility of each individual because much of what the government does is not best thought of as setting budgets — rather it comes from regulation, foreign policy, changing social norms and so on. Budgets here are just being used as a proxy for one form of impact.

    Second, the views and opinions of others in government aren’t completely fixed. Otherwise — whether you think it’s protected free speech or a distortion of democracy — it’s hard to explain why private companies spend around $4 billion a year on federal lobbying. For every dollar spent by a profit-oriented company on lobbying, it’s probably getting more than a dollar back on average by affecting government policy. This suggests that people interested in social change can have an impact, especially if they’re focused on global issues with little other lobbying, or they can find neglected ways to affect policy.

    And so it’s not surprising that when we’ve spoken to people working in and around governments, we’ve found that — as in the case of Frances Kelsey — people have actually had the opportunity to influence things even in junior roles (if they had the skills).

In the US, we spoke to a number of mid-level and senior federal employees, and most were able to give us an example of how they had a large positive impact through their role. Some of their examples involved starting impactful new programs worth tens of millions of dollars, saving Americans hundreds of millions of dollars, or moving billions toward something potentially more impactful. We haven't vetted these stories, but at the very least they persuaded us that mid-level and senior federal employees feel as though they can sometimes have a large positive influence on the government.

    In the UK, one junior civil servant we spoke to determined how £250 million was spent in her policy area through careful discussions with senior civil servants, while ministers were only scrutinising larger chunks of money.

    And it’s not just in the executive. For example, in the US Congress, huge amounts of work are done by congressional staffers. “Ninety-five percent of the nitty-gritty work of drafting bills and negotiating their final form is now done by staff,” according to former Senator Ted Kennedy.4

    Often this work is done by very junior people. One junior staff member in a Congressional office told us that more senior individuals (like Chiefs of Staff) are often tasked with substantial managerial responsibilities that crowd out their ability to focus on nitty-gritty policy research. Because of this, they have to defer to more junior staff (such as legislative assistants) who have the capacity and time to dig into a specific policy area and make concrete proposals.

    This all suggests that you can effect change in large institutions (even when you’re just getting started), and in particular:

    • On issues where people care enough for changes to be made, but not enough to micromanage the changes
    • Where powerful figures like elected officials have vague goals, but no specific idea of what they want
    • When details have a large impact, e.g. the details of one piece of legislation can affect many other laws

    All other things being equal, the more senior you are, the more influence you’ll have.

    If you’re a motivated graduate from a top university, over the course of your career, the chance of reaching high levels in the government is significant.

Approximately 1 in 30 federal employees in DC are in the Senior Executive Service. What's more, we found that students in the UK with a strong academic background, great social skills, and an interest in politics could have around a 1 in 3 chance of becoming an MP. Meanwhile, if you became a congressional staffer in the US, you'd have something like a 1 in 40 chance of being elected to Congress.

    Other factors will also affect your ability to create change, such as how politicised your area is (the more political, the more your moves will be countered by others).

    All that said, many people we speak to in the civil service don’t feel that they have a lot of influence. That’s because many roles don’t have opportunities for a lot of impact. (We’ll discuss finding ones that do later, and it can be hard to see your impact even in those that do.)

    But the potential for change is there. You can think of decision making in large institutions as a negotiation between different groups with power. Most of the time you won’t tip the balance, but occasionally you might be able to — and it could have a large impact.

    But you’ll need to use your influence responsibly

    Having influence is a double-edged sword.

    If you use your position poorly, then you might make things worse than they would have been otherwise. This is especially easy in policy, because it’s hard to know what truly makes things better, and policy can have unintended consequences. This is especially disturbing if you end up working on critical problems, such as preventing pandemics or nuclear crises.

    This doesn’t mean you should avoid these positions altogether. For a start, someone has to take these positions, and it’ll probably be better for the world if more altruistic people enter them. Hopefully, if you’re reading this article, you’re more likely than average to be one of these people.

However, it does mean that if you succeed in advancing, you have a huge responsibility to use the position well — and the higher you advance, the greater that responsibility.

    This means trying to do the best job you can to help the institution do more good for society, and being especially careful to avoid actions that could cause significant harm.

Unfortunately, the more you advance, the easier it is to lose touch with people who will give you frank feedback, and the more temptations you'll face to act unethically or dishonestly in order to preserve your influence or "for the greater good" — i.e. to get corrupted.

    This means we’d especially encourage people considering this path to focus on building good character and making sure they have friends around them who can keep them honest at the early stages, so these are in place in case they gain a lot of influence.

It's also important to make sure you have a clear 'edge' that will allow you to do more good than a typical employee. For instance, you might be able to give ministers more evidence-based advice, contribute specialist knowledge, or pay more attention than is typical to the effects of policies on the long-term future.

    That said, even talented and very well-meaning people can fail to do good in government and even do harm, so it is worth learning constantly and thinking carefully and critically about what will actually help. Read more advice on avoiding harm.

    What does using a policy and political skill set involve?

    Any career path that ends up in an influential institutional position could be a way of using these skills, though some options are more likely to be relevant to the problems we think are most pressing.

    This typically involves the following steps:

    1. Identify some institutions that could play an important role tackling some of the problems you think are most pressing. See an introduction to comparing global problems in terms of impact and lists of institutions that are important to each area in our problem profiles and job board.

    2. Learn to make useful contributions to an institution (or group of institutions) by gaining experience, credibility, seniority, and authority.

3. Often, it involves developing a speciality that's especially relevant to the problems you want to focus on. For instance, if you want to work on tackling engineered pandemics, you might specialise in counter-terrorism, technology policy, or biomedical policy. This both helps you advance into more relevant roles and improves your understanding of which policies are actually helpful. That said, many policymakers remain generalists. In that case, you need to make sure you find trusted expert advisors to help you understand which policy changes would be most helpful.

4. Move into roles that put you in a better position to help tackle these problems. Focusing on pandemics again, you might aim to work at the Centers for Disease Control and Prevention and then advance to more senior positions.

    5. Have an impact by using your position and expertise to improve policies and practices relevant to pressing global problems or bringing attention to neglected but important priorities.

Within this skill set, it's possible to focus more on policy research or on policy implementation. The first is about developing ideas for new policies and involves an element of applied research skills, while the second is a bit more like an organisation-building skill set and has an impact by making an important institution more efficient.

There's also a spectrum of roles, from those that are more like being a technical specialist to those — like roles in political parties or running for elected office — that are more political and involve closer engagement with the general public and current affairs.

    In addition to roles actually within the relevant institutions, there are also “influencer” roles which aim to shape these institutions from the outside.

    This includes jobs in think tanks, advocacy non-profits, journalism, academia, and even corporations, rather than within government.

    The skills needed for influencer roles are similar to those needed for policy and political roles in many ways, but they also overlap a lot with skills in research and communicating ideas. These roles can be a better fit for someone who wants to work in a smaller organisation, is less comfortable with political culture, or wants to focus more on ideas rather than application.

    In practice, people often move between influencer and government positions across their careers.

    Some people think that to work in policy you have to be brilliant at networking.

    That’s not quite true — as we’ve seen, depending on your role, you might focus more on understanding and researching policies, communicating ideas to a specific audience, or just really understanding your particular institution very well.

But it's nevertheless true that networking skills are more important for building a policy and political skill set than they would be if, for example, you wanted to work in pure research — and you can learn more about how to network in our article on how to be successful in any job. In particular, multiple people — both in the US and in the UK — have told us that it's important to be friendly and nice to others.

    Finally, we’d like to emphasise the potential value of doing policy-style work in industry, especially if you’re interested in AI policy. While government policy is likely to play a key role in coordinating various actors interested in reducing the risks from advanced AI, internal policy, compliance work, lobbying, and corporate governance within the largest AI labs are also powerful tools. Collaboration between labs and government also requires work that may use similar skills, like stakeholder management, policy design, and trust-building.

    Example people

    How to evaluate your fit

    This skill set is fairly broad, which also means it can potentially be a good fit for a wide variety of people. Don’t rule it out based on a hazy sense that government work isn’t for you!

For example, entering policy through building specific expertise can be a good fit for people interested in research careers who would like to do something more practical. Many roles are totally unlike the stereotype of a politician endlessly shaking hands, or whatever image 'government bureaucrat' brings to mind.

    How to predict your fit in advance

    Here are some traits that seem likely to point towards being a great fit:

    • You have the potential to succeed at relationship-building and fitting in. In many of these roles, you need to be able to develop good relationships with a wide range of people in a short amount of time, come across as competent and warm in your interactions, genuinely want to add value and help others achieve their goals, consistently follow up and stay in touch with people, and build a reputation and be remembered.

      It helps to have empathy and social intelligence so that you can model other people’s viewpoints and needs accurately. It also helps if you can remember small details about people! You don’t necessarily need all these skills when you start out, but you should be interested in improving them.

      These skills are most important in more public-facing party-political positions and are also needed to work in large institutions. However, there are also roles focused more on applying technical expertise to policy, which don’t require these skills as much (though they’re still probably more important than in e.g. academia).

    • You can think of a relevant institution at which you can imagine yourself being relatively happy, productive, and motivated for a long time — while playing by the institution’s rules. Try speaking with later-career people at the institution to get as detailed a sense as possible of how long it will take to reach the kind of position you’re hoping for, what your day-to-day life will be like in the meantime, and what you will need to do to succeed.

    • Having the right citizenship. There are lots of influential and important policy roles in every country, so you should consider them wherever you live. But some roles in the US seem especially impactful — as do certain roles at large institutions like the EU. In particular, any of the roles within the US most relevant to the problems we think are most pressing — particularly in the executive branch and Congress — are only open to, or at least will heavily favour, American citizens. All key national security roles that might be especially important will be restricted to those with US citizenship, which is required to obtain a security clearance.

      If you’re excited about US policy in particular and are curious about immigration pathways and types of policy work available to non-citizens, see this blog post. Consider also participating in the annual diversity visa lottery if you’re from an eligible country, as this is low-effort and allows you to win a US green card if you’re lucky (getting a green card is the only way to become a citizen).

• Being comfortable with political culture. The culture in politics, especially US federal politics, can be difficult to navigate. Some people we know have entered promising policy positions but later felt that the culture was a terrible fit for them. Experts we've spoken to say that in Washington, DC, there's a big cultural focus on networking, and a lot of internal bureaucratic politics to navigate. We've also been told that while merit matters to a degree in US government work, it is not the primary determinant of who is most successful. We'd expect this to be similar in other countries. People who think they wouldn't feel able or comfortable being in this kind of environment for the long term should consider whether other skills or institutions would be a better fit.

      That said, this does vary substantially by area and by role. Some roles, like working in a parliament or somewhere like the White House, are much more exposed to politics than others. Also, if you work on a hot button, highly partisan issue, you’re much more likely to be exposed to intense political dynamics than if you work on more niche, technocratic, or cross-party issues.

    It’s useful if you can find ways to do cheap tests first, like speaking to someone in the area (which could take a couple of hours), or doing an internship (which could take a couple of months). But often, you’ll need to take a job in the area to tell whether this is a good fit for you — and be willing to switch after a year or more if it’s not. For more, read our article on finding a job that fits you.

    How to tell if you’re on track

    First, ask yourself “How quickly and impressively is my career advancing, by the standards of the institution I’m currently focused on?” People with more experience (and advancement) at the institution will often be able to help you get a clear idea of how this is going. (It’s also just generally important to have good enough relationships with some experienced people to get honest input from them — this is an additional indicator of whether you’re “on track” in most situations.)

One caveat to this is that the rate of advancement can really vary depending on the exact role you have in that institution. For example, in Congress, speed of promotion often has less to do with your abilities and more to do with timing and the turnover of the office. As a result, the better the office, the fewer people leave and the slower the pace of promotion; the opposite is often true for bad offices. So you need to make sure you're judging yourself by relevant standards — again, people with more experience at the institution should be able to help here.

    Another relevant question to ask is “How sustainable does this feel?” This question is relevant for all skills, but especially here — for government and policy roles, one of the main things that affects how well you advance is simply how long you can stick with it and how consistently you meet the institution’s explicit and implicit expectations. So, if you find you can enjoy government and political work, that’s a big sign you’re on track. Just being able to thrive in government work can be an extremely valuable comparative advantage.

    One other way to advance your career in government, especially as it relates to a specific area of policy, is what some call “getting visibility” — that is, using your position to learn about the landscape and connect with the actors and institutions that affect the policy area you care about. You’ll want to be invited to meetings with other officials and agencies, be asked for input on decisions, and engage socially with others who work in the policy area. If you can establish yourself as a well-regarded expert on an important but neglected aspect of the issue, you’ll have a better shot at being included in key discussions and events.

    How to get started building policy and political skills

    There are two main ways you might get started:

1. Institution-first. You'd start your career by trying to find a set of institutions that are a good fit for you and that seem at least relevant to the problems you think are most pressing (e.g. the executive branch of the US government or tech companies). You'd then try to move up the ranks of those institutions.
    2. Expertise-first. In this route, you initially focus on building a relevant speciality or area of expertise (e.g. in academia or think tanks) and then use that to switch into institutional positions later. In addition, people with impressive credentials and accomplishments outside of government (e.g. in business, consulting, or law) can sometimes enter important departments and agencies at particularly senior and influential levels.

    If you take the institution-first approach, you can try for essentially any job at this institution and focus on performing well by the institution’s standards. All else being equal, it’d be better to work on jobs relevant to a pressing problem, but just trying to advance should probably be your main goal early in your career.

    The best way to learn how to perform and advance is to speak to people a couple of steps ahead of you in the path. Also look at cases of people who advanced unusually quickly and try to unpack what they did.

    Sometimes the best way to advance will involve going somewhere other than the institution itself temporarily. For instance, going to law school, public policy school, or working at think tanks can give you credentials and connections that open up positions in government later.

    If you’re focused on developing expertise in a particular area of policy, then it’s common to go to graduate school in a subject relevant to that area (e.g. economics, machine learning, biology).

    As always, whether these paths are a good way of building your skills depends on the specific job or programme and people you’ll be working with:

    • Will you get good mentorship?
    • What’s their reputation in the field?
    • Do they have good character?
    • Does their policy agenda seem positive?
    • Will the culture be a good fit for you?

    With all that in mind, here are a few next steps that are especially good for building these skills:

    Fellowships and leadership schemes

    Fellowships can be an effective way to gain experience inside government or think tanks and can help you advance quickly into more senior government positions.

    Some fellowships are aimed at people who already have some professional experience outside of policy but want to pivot into government roles, while others are aimed at recent graduates.

    In the US, consider the Presidential Management Fellows for recent graduates of advanced degrees, the Horizon Fellowship, the AAAS fellowship for people with science PhDs or engineering master’s, or the TechCongress fellowship for mid-career tech professionals. If you have completed a STEM graduate degree, also consider the Mirzayan Science and Technology Policy Graduate Fellowship Program.

    In the UK, try the Civil Service Fast Stream. And if you’re interested in EU AI policy, you can apply for the EU Tech Policy Fellowship. We also curate a list of UK / EU policy master’s options through our job board.

    Graduate school

    In general, we’d most recommend grad school for economics or machine learning. (Read more about why these are the best subjects to study at grad school.)

    Some other useful subjects to highlight, given our list of pressing problems, include:

    • Other applied quantitative subjects, like computer science, physics, and statistics
    • Security studies, international relations, public policy, or law school, particularly for entering government and policy careers
    • Subfields of biology relevant to pandemic prevention (like synthetic biology, mathematical biology, virology, immunology, pharmacology, or vaccinology)

    Many master’s programmes offer specific coursework on public policy, science and society, security studies, international relations, and other topics. Having a graduate degree or law degree will give you a leg up for many positions.

    In the US, a policy master’s, a law degree, or a PhD is particularly useful if you want to climb the federal bureaucracy. Choosing a graduate school in or near DC is often a good idea, especially if you’re hoping to work part- or even full-time in public policy alongside graduate school.

    While you’re studying (either at grad school or as an undergraduate), internships — for example in DC — are a promising route to evaluate your fit for policy work and to establish early career capital. Many academic institutions in the US offer a “Semester in DC” programme, which can let you explore placements of choice in Congress, federal agencies, or think tanks. The Virtual Student Federal Service (VSFS) also offers part-time, remote government internships.

    Just bear in mind that graduate school carries the risk that you could spend a long time there without learning much about the career you’re actually pursuing or the problem you want to work on. It may sometimes make sense to try out a junior role or internship first, see how it feels, and make sure you expect a graduate degree to be worth it before going for it.

    Read more about going to grad school.

    Working for a politician or on a political campaign

    Working for a politician as a researcher or staffer (e.g. as a parliamentary researcher in the UK, legislative staff for a Member of Congress, or as campaign staff for an electoral candidate) can be one useful step into political and policy positions. It’s also demanding, prestigious (especially in the US, less so in the UK), and gives you lots of connections. From this step, it’s also common to move into the executive branch or to later run for office. Read more in our career review on becoming a congressional staffer.

    You don’t strictly need a master’s or other advanced degree to work in the US Congress. But many staffers still eventually pursue a graduate degree, in part because federal agencies and think tanks commonly care more about formal credentials, and many congressional staffers at some point switch to these institutions.

    You can also work for a politician on a particular campaign — some of the top people who work on winning campaigns eventually get high-impact positions in the federal government. This is a high-risk strategy: it often only pays off if your candidate wins, and even then, not everybody on the campaign staff will get influential jobs or jobs in the areas they care about, especially if you’re a junior campaign staffer. (Running for office yourself involves a similar high-risk, high-reward dynamic.)

    Roles in the executive branch

    Look for entry-level roles in your national government, again focusing on positions in the executive branch (or its equivalent) or those most relevant to policymaking.

    In the US, you could take an entry-level role as a federal employee, ideally working on something relevant to a problem you want to help solve, or something that will give you the flexibility to potentially work on multiple pressing problems. The most influential positions are usually in the executive branch.

    That said, most people have told us that, in the US, it’s even better to get a graduate degree first because it will allow you to reach higher levels of career advancement and seniority more quickly. A graduate degree could also qualify you for fellowships.

    In the UK, see our profile on civil service careers.

    Think tank roles

    Think tanks are organisations that aren’t part of government but still focus on informing and ultimately influencing policymaking.

    Research roles at policy think tanks involve conducting in-depth research on specific policy areas and formulating relevant recommendations. These researchers also often collaborate with experts, host events, engage with policymakers, and liaise with the media to influence and inform public policy discourse. This often involves fundraising, grant writing, and staying updated on political trends — and it can teach you many of the skills that are useful in government.

    These roles are relatively competitive, and your reputation may become tied to the particular institutions you work for — which can have upsides and downsides.

    Think tanks also employ non-research staff in communications, HR, finance, and other areas; these roles are less likely to meaningfully impact policy outcomes, though they could still be a reasonable way to build policy career capital.

    Also, think tank staff are often fairly cleanly split between entry-level employees and senior employees with advanced degrees (often PhDs), with relatively few mid-level roles. For this reason, it’s fairly uncommon for people to stay and rise through the ranks at a think tank without leaving for graduate school or another role.

    These roles let you learn about important policy issues and can open up many options in policy. One option is to continue working in think tanks or other influencer positions, perhaps specialising in an area of policy. Otherwise, it’s common to switch from think tanks to the executive branch, a campaign, or other policy positions.

    (Read more in our career review on working in think tanks.)

    Other options

    It’s also common to enter policy and government jobs from consulting and law, as well as other professional services, public relations, and business in general.

    More broadly, having organisation-building skills (e.g. public relations, organisational communications, finance, and accounting knowledge) or research skills can help you find policy and political roles.

    Find jobs that use policy and political skills

    If you think you might be a good fit for this skill set and you’re ready to start looking at job opportunities that are currently accepting applications, see our curated list of opportunities.

      View all opportunities

      Once you have these skills, how can you best apply them to have an impact?

      Let’s suppose you now have a position with some ability to get things done in an important institution, and, having built expertise or an advisory network around particular pressing problems, you also have some ideas about the most important things you’d like to see happen. Then what should you do?

      Depending on the issue and your position, you might then seek to have an impact via:

      1. Improving the implementation of policy relevant to a pressing problem. For example, you could work at an agency regulating synthetic biology.

      2. Gathering support for policy ideas. For example, you could highlight the top areas of consensus in the field about promising ways the government could reduce global poverty to a politician you work for.

      3. Coming up with ideas for new policies. For example, you might craft new proposals for implementing compute governance policies.

      Improving the implementation of policies

      When people think about political careers, they usually think of people in suits having long debates about what to do.

      But fundamentally, a policy is only an idea. For an idea to have an impact, someone actually has to carry it out.

      The difference between the same policy carried out badly vs. competently can be enormous. For instance, during COVID-19, some governments reacted much faster than others, saving the lives of thousands of citizens.

      What’s more, many policies are, by necessity, only vaguely defined. For instance, a set of drug safety standards might require “reasonable evidence” that a drug is safe, but — as shown by Frances Kelsey — how that standard is interpreted is left up to the relevant agency and may even change over time.

      Many details are often left undecided when the policy is created, and again, these get filled out by government employees.

      This option especially requires skills like people and project management, planning, coordination in and out of government, communication, resource allocation, training, and more.

      So, if you can become great at one or more of these things (and really know your way around the institution you work in), it’s worth trying to identify large projects that might help solve the problems you think are most pressing — and then helping them run better.

      These roles are most commonly found in executive branch institutions such as the Defense Department, the State Department, intelligence agencies, or the White House. (See also our profile on the UK civil service.)

      Bringing ideas for new policies to the attention of important decision makers

      One way to have an impact is to help get issues “on the agenda” by getting the attention and buy-in of important people.

      For example, when politicians take office, they often enter on a platform of promises made to their constituents and supporters about which policy agendas they want to pursue. These agendas can be, to varying degrees, problem-specific — for example, a broad remit of “improving health care” — or more solution-specific — for example, aiming to create a single-payer health system or to remove red tape facing critical industries. They are formed through public discussion, media narratives, internal party politics, deliberative debate, interest group advocacy, and other forms of input. Working through any of these channels to get something on the agenda is a great way to help make sure it happens.

      You can contribute to this process in political advisory positions (e.g. being a staffer for a congressperson) or through influencer positions, such as think tanks.

      As a rule of thumb, if you’re working within an institution (such as a large corporation or a government department), you want to be as senior as possible while still being responsible for a specific set of issues. In such a position, you’ll be in contact with all the key stakeholders, from the most senior people to those more on your level.

      But it’s important to remember that, for many important issues, policymakers or officials at various levels of government can also prioritise solving certain problems or enacting specific proposals that aren’t the subject of national debate. In fact, sometimes making issues too salient, framing them in divisive ways, or allowing partisanship and political polarisation to shape the discussion, can make it harder to successfully get things done.

      Coming up with ideas for new policies

      In many areas relevant to particularly pressing problems, there’s a lack of concrete policies that are ready to implement.

      Policy creation is a long process, often starting from broad intellectual ideas, which are iteratively developed into more practical proposals by think tanks, civil servants, political parties, advocates, and others, and then adjusted in response to their reception by peers, the media and the electorate, as well as political reality at the time.

      Once concrete policy options are on the table, they must be put through the relevant decision-making process and negotiations. In countries with strong judicial review like the US, special attention often has to be paid to make sure laws and regulations will hold up under the scrutiny of the courts.

      All this means there are many ways to contribute to policy creation, in roles ranging from academia to government.

      Many policy details are only hashed out at the later stages by civil servants and political advisors. This also means there isn’t a bright line between policy creation and policy implementation — more a spectrum that blurs from one into the other.

      In the corporate context, internal policy creation can serve similar functions. Though internal policies may be less enforceable unless backed by contracts, the norms they create can shape behaviour considerably.

      While policy research is the bread and butter of think tank work, many staffers in Congress, agencies, and the White House also develop policy ideas or translate existing ideas into concrete policy proposals. For many areas of technical policy, especially AI policy, some of the best policy research is being done at industry labs, like OpenAI and DeepMind. (Read more about whether you should take a job at a top AI lab.)

      For more details on the complex work of policy creation, we recommend Thomas Kalil’s article Policy Entrepreneurship in the White House: Getting Things Done in Large Organisations.

      Career paths we’ve reviewed that use these skills

      Learn more about government and policy

      See all our materials on policy and political careers.

      Read next:  Explore other useful skills

      Want to learn more about the most useful skills for solving global problems, according to our research? See our list.

      Plus, join our newsletter and we’ll mail you a free book

      Join our newsletter and we’ll send you a free copy of The Precipice — a book by philosopher Toby Ord about how to tackle the greatest threats facing humanity. T&Cs here.

      The post Policy and political skills appeared first on 80,000 Hours.

      Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers https://80000hours.org/podcast/episodes/nita-farahany-neurotechnology/ Thu, 07 Dec 2023 22:19:32 +0000 https://80000hours.org/?post_type=podcast&p=84778 The post Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers appeared first on 80,000 Hours.

      Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down https://80000hours.org/podcast/episodes/santosh-harish-air-pollution/ Wed, 01 Nov 2023 22:04:06 +0000 https://80000hours.org/?post_type=podcast&p=84364 The post Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down appeared first on 80,000 Hours.

      Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere https://80000hours.org/podcast/episodes/tantum-collins-ai-policy-insider/ Thu, 12 Oct 2023 21:09:16 +0000 https://80000hours.org/?post_type=podcast&p=84175 The post Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere appeared first on 80,000 Hours.

      Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite https://80000hours.org/podcast/episodes/michael-webb-ai-jobs-labour-market/ Wed, 23 Aug 2023 21:25:32 +0000 https://80000hours.org/?post_type=podcast&p=83134 The post Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite appeared first on 80,000 Hours.

      Ezra Klein on existential risk from AI and what DC could do about it https://80000hours.org/podcast/episodes/ezra-klein-ai-and-dc/ Mon, 24 Jul 2023 21:18:39 +0000 https://80000hours.org/?post_type=podcast&p=82791 The post Ezra Klein on existential risk from AI and what DC could do about it appeared first on 80,000 Hours.

      Lennart Heim on the compute governance era and what has to come after https://80000hours.org/podcast/episodes/lennart-heim-compute-governance/ Thu, 22 Jun 2023 23:23:01 +0000 https://80000hours.org/?post_type=podcast&p=82516 The post Lennart Heim on the compute governance era and what has to come after appeared first on 80,000 Hours.

      AI governance and coordination https://80000hours.org/career-reviews/ai-policy-and-strategy/ Tue, 20 Jun 2023 12:00:34 +0000 https://80000hours.org/?post_type=career_profile&p=74390 The post AI governance and coordination appeared first on 80,000 Hours.

      As advancing AI capabilities gained widespread attention in late 2022 and 2023 — particularly after the release of OpenAI’s ChatGPT and Microsoft’s Bing chatbot — interest in governing and regulating these systems has grown. Discussion of the potential catastrophic risks of misaligned or uncontrollable AI also became more prominent, potentially opening up opportunities for policy that could mitigate the threats.

      There’s still a lot of uncertainty about which strategies for AI governance and coordination would be best, though parts of the community of people working on this subject may be coalescing around some ideas. See, for example, a list of potential policy ideas from Luke Muehlhauser of Open Philanthropy1 and a survey of expert opinion on best practices in AI safety and governance.

      But there’s no roadmap here. There’s plenty of room for debate about which policies and proposals are needed.

      We may not have found the best ideas yet in this space, and many of the existing policy ideas haven’t yet been developed into concrete, public proposals that could actually be implemented. We hope to see more people enter this field to develop expertise and skills that will contribute to risk-reducing AI governance and coordination.

      In a nutshell: Advanced AI systems could have massive impacts on humanity and potentially pose global catastrophic risks. There are opportunities in AI governance and coordination around these threats to shape how society responds to and prepares for the challenges posed by the technology.

      Given the high stakes, pursuing this career path could be many people’s highest-impact option. But they should be very careful not to accidentally exacerbate the threats rather than mitigate them.

      Recommended

      If you are well suited to this career, it may be the best way for you to have a social impact.

      Review status

      Based on an in-depth investigation 

      “What you’re doing has enormous potential and enormous danger.” — US President Joe Biden, to the leaders of the top AI labs

      Why this could be a high-impact career path

      Artificial intelligence has advanced rapidly. In 2022 and 2023, new language and image generation models gained widespread attention for their abilities, far surpassing previous benchmarks for the technology.

      And the applications of these models are still new; with more tweaking and integration into society, the existing AI systems may become easier to use and more ubiquitous in our lives.

      We don’t know where all these developments will lead us. There’s reason to be optimistic that AI will eventually help us solve many of the world’s problems, raising living standards and helping us build a more flourishing society.

      But there are also substantial risks. AI can be used for both good and ill. And we have concerns that the technology could, without the proper controls, accidentally lead to a major catastrophe — and perhaps even cause human extinction. We discuss the arguments that these risks exist in our in-depth problem profile.

      Because of these risks, we encourage people to work on finding ways to reduce these risks through technical research and engineering.

      But a range of strategies for risk reduction will likely be needed. Government policy and corporate governance interventions in particular may be necessary to ensure that AI is developed to be as broadly beneficial as possible and without unacceptable risk.

      Governance generally refers to the processes, structures, and systems that carry out decision making for organisations and societies at a high level. In the case of AI, we expect the governance structures that matter most to be national governments and organisations developing AI — as well as some international organisations and perhaps subnational governments.

      Some aims of AI governance work could include:

      • Preventing the deployment of any AI systems that pose a significant and direct threat of catastrophe
      • Mitigating the negative impact of AI technology on other catastrophic risks, such as nuclear weapons and biotechnology
      • Guiding the integration of AI technology into our society and economy with limited harms and to the advantage of all
      • Reducing the risk of an “AI arms race,” in which competition leads to technological advancement without the necessary safeguards and caution — between nations and between companies
      • Ensuring that those creating the most advanced AI models are incentivised to be cooperative and concerned about safety
      • Slowing down the development and deployment of new systems if the advancements are likely to outpace our ability to keep them safe and under control

      We need a community of experts who understand the intersection of modern AI systems and policy, as well as the severe threats and potential solutions. This field is still young, and many of the paths within it aren’t clearly defined or sure to pan out. But there are relevant professional paths that will provide you with valuable career capital for a variety of positions and roles.

      The rest of this article explains what work in this area might involve, how you can develop career capital and test your fit, and where some promising places to work might be.

      What kinds of work might contribute to AI governance?

      What should governance-related work on AI actually involve? There are a variety of ways to pursue AI governance strategies, and as the field becomes more mature, the paths are likely to become clearer and more established.

      We generally don’t think people early in their careers should be aiming for a specific job that they think would be high-impact. They should instead aim to develop skills, experience, knowledge, judgement, networks, and credentials — what we call career capital — that they can later use when an opportunity to have a positive impact is ripe.

      This may involve following a pretty standard career trajectory, or it may involve bouncing around in different kinds of roles. Sometimes, you just have to apply to a bunch of different roles and test your fit for various types of work before you know what you’ll be good at. The main thing to keep in mind is that you should try to get excellent at something for which you have strong personal fit and that will let you contribute to solving pressing problems.

      In the AI governance and coordination space, we see at least six large categories of work that we expect to be important:

      There aren’t necessarily openings in all of these categories at the moment, but they represent a range of sectors in which impactful AI governance work may be done in the coming years and decades. Thinking about the different skills and forms of career capital that will be useful for the categories of work you could see yourself doing in the future can help you figure out what your immediate next steps should be. (We discuss how to assess your fit and enter this field below.)

      You may want to — and indeed it may be advantageous to — move between these different categories of work at different points in your career. You can also test out your fit for various roles by taking internships, fellowships, entry-level jobs, temporary placements, or even doing independent research, all of which can serve as career capital for a range of paths.

      We have also reviewed career paths in AI technical safety research and engineering and information security, which may be crucial to reducing risks from AI, and which may play a significant role in an effective governance agenda. People serious about pursuing a career in AI governance should familiarise themselves with these fields as well.

      Government work

      Taking a role within government could lead to playing an important role in the development, enactment, and enforcement of AI policy.

      Note that we generally expect that the US federal government will be the most significant player in AI governance for the foreseeable future. This is because of its global influence and its jurisdiction over much of the AI industry, including the top three AI labs training state-of-the-art, general-purpose models (Anthropic, OpenAI, and Google DeepMind) and key parts of the chip supply chain. Much of this article focuses on US policy and government.2

      But other governments and international institutions may also end up having important roles to play in certain scenarios. For example, the UK government, the European Union, China, and potentially others, may all present opportunities for impactful AI governance work. Some US state-level governments, such as California, may also offer opportunities for impact and gaining career capital.

      What would this work involve? Sections below discuss how to enter US policy work and which areas of government you might aim for.

      But at the broadest level, people interested in positively shaping AI policy should aim to gain the skills and experience to work in areas of government with some connection to AI or emerging technology policy.

      This can include roles in: legislative branches, domestic regulation, national security, diplomacy, appropriations and budgeting, and other policy areas.

      If you can get a role out of the gate that is already working directly on this issue, such as a staff position with a lawmaker who is focused on AI, that could be a great opportunity.

      Otherwise, you should seek to learn as much as you can about how policy works and which government roles might allow you to have the most impact, while establishing yourself as someone who’s knowledgeable about the AI policy landscape. Having almost any significant government role that touches on some aspect of AI, or having some impressive AI-related credential, may be enough to get you quite far.

      One way to advance your career in government on a specific topic is what some call “getting visibility” — that is, using your position to learn about the landscape and connect with the actors and institutions that affect the policy area you care about. You’ll want to be invited to meetings with other officials and agencies, be asked for input on decisions, and engage socially with others who work in the policy area. If you can establish yourself as a well-regarded expert on an important but neglected aspect of the issue, you’ll have a better shot at being included in key discussions and events.

      Career trajectories within government can be broken down roughly as follows:

      • Standard government track: This involves entering government at a relatively low level and building up your career capital on the inside by climbing the seniority ladder. For the highest impact, you’d ideally end up reaching senior levels by sticking around, gaining skills and experience, and getting promoted. You may move between agencies, departments, or branches.
      • Specialisation career capital: You can also move in and out of government throughout your career. People on this trajectory will also work at nonprofits, think tanks, industry labs, political parties, academia, and other organisations. But they will primarily focus on becoming an expert in a topic — such as AI. It can be harder to get seniority this way, but the value of expertise and experience can sometimes outweigh seniority.
      • Direct-impact work: Some people move into government jobs without a longer plan to build career capital because they see an opportunity for direct, immediate impact. This might look like getting tapped to lead an important commission or providing valuable input on an urgent project. We don’t generally recommend planning on this kind of strategy for your career, but it’s good to be aware of it as an opportunity that might be worth taking at some point.

      Research on AI policy and strategy

      There’s still a lot of research to be done on the most important avenues for AI governance approaches. While there are some promising proposals for a system of regulatory and strategic steps that can help reduce the risk of an AI catastrophe, there aren’t many concrete and publicly available policy proposals ready for adoption.

      The world needs more concrete proposals for AI policies that would really start to tackle the biggest threats; developing such policies, and deepening our understanding of the strategic needs of the AI governance space, should be high priorities.

      Other relevant research could involve surveys of public opinion that could inform communication strategies, legal research about the feasibility of proposed policies, technical research on issues like compute governance, and even higher-level theoretical research into questions about the societal implications of advanced AI. Some research, such as that done by Epoch AI, focuses on forecasting the future course of AI developments, which can influence AI governance decisions.

      However, several experts we’ve talked to warn that a lot of research on AI governance may prove to be useless, so it’s important to be reflective and seek input from others in the field — both from experienced policy practitioners and technical experts — about what kind of contribution you can make. We list several research organisations below that we think would be good to work at in order to pursue promising research on this topic.

      One potentially useful approach for testing your fit for this work — especially when starting out in this research — is to write up analyses and responses to existing work on AI policy or investigate some questions in this area that haven’t been the subject of much attention. You can then share your work widely, send it out for feedback from people in the field, and evaluate how much you enjoy the work and whether you might productively contribute to this research longer term.

      But it’s possible to spend too long testing your fit without making much progress, and some people find that they’re best able to contribute when they’re working on a team. So don’t overweight or over-invest in independent work, especially if there are few signs it’s working out especially well for you. This kind of project can make sense for maybe a month or a bit longer — but it’s unlikely to be a good idea to spend much more than that without meaningful funding or some really encouraging feedback from people working in the field.

      If you have the experience to be hired as a researcher, work on AI governance can be done in academia, nonprofit organisations, and think tanks. Some government agencies and committees, too, perform valuable research.

      Note that universities and academia have their own priorities and incentives that often aren’t aligned with producing the most impactful work. If you’re already an established researcher with tenure, it may be highly valuable to pivot into work on AI governance — this position may even give you a credible platform from which to advocate for important ideas.

      But if you’re just starting out a research career and want to focus on this issue, you should carefully consider whether your work will be best supported inside or outside of academia. For example, if you know of a specific programme with particular mentors who will help you pursue answers to critical questions in this field, it might be worth doing. We’re less inclined to encourage people to pursue generic academic-track roles with the vague hope that one day they can do important research on this topic.

      Advanced degrees in policy or relevant technical fields may well be valuable, though — see more discussion of this in the section on how to assess your fit and get started.

      Industry work

      While government policy is likely to play a key role in coordinating various actors interested in reducing the risks from advanced AI, internal policy and corporate governance at the largest AI labs themselves is also a powerful tool. We think people who care about reducing risk can potentially do valuable work internally at industry labs. (Read our career review of non-technical roles at AI labs.)

      At the highest level, deciding who sits on corporate boards, what kind of influence those boards have, and to what extent the organisation is structured to seek profit and shareholder value as opposed to other aims, can end up having a major impact on the direction a company takes. If you might be able to get a leadership role at a company developing frontier AI models, such as a management position or a seat on the board, it could potentially be a very impactful position.

      If you’re able to join a policy team at a major lab, you can model threats and help develop, implement, and evaluate promising proposals internally to reduce risks. And you can build consensus around best practices, such as strong information security policies, using outside evaluators to find vulnerabilities and dangerous behaviours in AI systems (red teaming), and testing out the latest techniques from the field of AI safety.

      And if, as we expect, AI labs face increasing government oversight, industry governance and policy work can ensure compliance with any relevant laws and regulations that get put in place. Interfacing with government actors and facilitating coordination over risk reduction approaches could be impactful work.

      In general, the more cooperative AI labs are with each other3 and outside groups seeking to minimise catastrophic risks from AI, the better. And this doesn’t seem to be an outlandish hope — many industry leaders have expressed concern about extinction risks and have even called for regulation of the frontier technology they’re creating.

      That said, we can expect this cooperation to take substantial work — it would be surprising if the best policies for reducing risks were totally uncontroversial in industry, since labs also face huge commercial incentives to build more powerful systems, which can carry more risk. The more everyone’s able to communicate and align their incentives, the better things seem likely to go.

      Advocacy and lobbying

      People outside of government or AI labs can influence the shape of public policy and corporate governance via advocacy and lobbying.

      As of this writing, there has not yet been a large public movement in favour of regulating or otherwise trying to reduce risks from AI, so there aren’t many openings that we know about in this category. But we expect growing interest in this area to open up new opportunities to press for political action and policy changes at AI labs, and it could make sense to start building career capital and testing your fit now for different kinds of roles that would fall into this category down the line.

      If you believe AI labs may be disposed to advocate for generally beneficial regulation, you might want to try to work for them, or become a lobbyist for the industry as a whole, to push the government to adopt specific policies. It’s plausible that AI labs will have by far the best understanding of the underlying technology, as well as the risks, failure modes, and safest paths forward.

      On the other hand, AI labs may have too much of a vested interest in the shape of regulations to reliably advocate for broadly beneficial policies. If that’s right, it may be better to join or create advocacy organisations independent of the industry, supported by donations or philanthropic foundations, that can take stances opposed to the labs’ commercial interests.

      For example, it could be the case that the best approach from a totally impartial perspective would be at some point to deliberately slow down or halt the development of increasingly powerful AI models. Advocates could make this demand of the labs themselves or of the government to slow down AI progress. It may be difficult to come to this conclusion or advocate for it if you have strong connections to the companies creating these systems.

      It’s also possible that the best outcomes will be achieved with a balance of industry lobbyists and outside lobbyists and advocates making the case for their preferred policies — as both bring important perspectives.

      We expect there will be increasing public interest in AI policy as the technological advancements have ripple effects in the economy and wider society. And if there’s increasing awareness of the impact of AI on people’s lives, the risks the technology poses may become more salient to the public, which will give policymakers strong incentives to take the problem seriously. It may also bring new allies into the cause of ensuring that the development of advanced AI goes well.

      Advocacy can also:

      • Highlight neglected but promising approaches to governance that have been uncovered in research
      • Facilitate the work of policymakers by showcasing the public’s support for governance measures
      • Build bridges between researchers, policymakers, the media, and the public by communicating complicated ideas in an accessible way to many audiences
      • Pressure corporations themselves to proceed more cautiously
      • Change public sentiment around AI and discourage irresponsible behaviour by individual actors, such as the spreading of powerful open-source models

      However, note that advocacy can sometimes backfire. Predicting how information will be received is far from straightforward. Drawing attention to a cause area can sometimes trigger a backlash; presenting problems with certain styles of rhetoric can alienate people or polarise public opinion; spreading misleading or mistaken messages can discredit yourself and fellow advocates. It’s important that you are aware of the risks, consult with others (particularly those who you respect but might disagree with tactically), and commit to educating yourself deeply about the topic before expounding on it in public.

      You can read more in the section about doing harm below. We also recommend reading our article on ways people trying to do good accidentally make things worse and how to avoid them.

      Case study: the Future of Life Institute open letter

      In March 2023, the Future of Life Institute published an open letter calling for a pause of at least six months on training any new models more “powerful” than OpenAI’s GPT-4 — which had been released about a week earlier. GPT-4 is a state-of-the-art language model that can be used through ChatGPT to produce novel and impressive text responses to a wide range of prompts.

      The letter attracted a lot of attention, perhaps in part because it was signed by prominent figures such as Elon Musk. While it didn’t immediately achieve its explicit aims — the labs didn’t commit to a pause — it drew a lot of attention and fostered public conversations about the risks of AI and the potential benefits of slowing down. (An earlier article titled “Let’s think about slowing down AI” — by Katja Grace of the research organisation AI Impacts — aimed to have a similar effect.)

      There’s no clear consensus on whether the FLI letter was on the right track. Some critics of the letter, for example, said that its advice would actually lead to worse outcomes overall if followed, because it would slow down AI safety research while many of the innovations that drive AI capabilities progress, such as chip development, would continue to race forward. Proponents of the letter pushed back on these claims.4 It does seem clear that the letter changed the public discourse around AI safety in a way that few other efforts have achieved, which is proof of concept for what impactful advocacy can accomplish.

      Third-party auditing and evaluation

      If regulatory measures are put in place to reduce the risks of advanced AI, some agencies and organisations — within government or outside — will need to audit companies and systems to make sure that regulations are being followed.

      One nonprofit, the Alignment Research Center, has been at the forefront of this kind of work.5 In addition to its research work, it has launched a programme to evaluate the capabilities of advanced AI models. In early 2023, the organisation partnered with two leading AI labs, OpenAI and Anthropic, to evaluate the latest versions of their chatbot models prior to release. They sought to determine, in a controlled environment, whether the models had any potentially dangerous capabilities.

      The labs voluntarily cooperated with ARC for this project, but at some point in the future, these evaluations may be legally required.

      Governments often rely on third-party auditors as crucial players in regulation, because the government may lack the expertise (or the capacity to pay for the expertise) that the private sector has. We don’t know of many openings in this type of role as of this writing, but such roles may end up playing a critical part in an effective AI governance framework.

      Other types of auditing and evaluation may be required as well. ARC has said it intends to develop methods to determine which models are appropriately aligned — that is, that they will behave as their users intend them to behave — prior to release.

      Governments may also want to employ auditors to evaluate the amount of compute that AI developers have access to, their information security practices, the uses of models, the data used to train models, and more.

      Acquiring the technical skills and knowledge to perform these types of evaluations, and joining organisations that will be tasked to perform them, could be the foundation of a highly impactful career. This kind of work will also likely have to be facilitated by people who can manage complex relationships across industry and government. Someone with experience in both sectors could have a lot to contribute.

      Some of these types of roles may have some overlap with work in AI technical safety research.

      One potential advantage of working in the private sector for AI governance work is you may be significantly better paid than you would be in government.

      International work and coordination

      US-China

      For someone with the right fit, cooperation and coordination with China on the safe development of AI could be a particularly impactful approach within the broad AI governance career path.

      The Chinese government has been a major funder in the field of AI, and the country has giant tech companies that could potentially drive forward advances.

      Given tensions between the US and China, and the risks posed by advanced AI, there’s a lot to be gained from increasing trust, understanding, and coordination between the two countries. The world will likely be much better off if we can avoid a major conflict between great powers and if the most significant players in emerging technology can avoid exacerbating any global risks.

      We have a separate career review that goes into more depth on China-related AI safety and governance paths.

      Other governments and international organisations

      As we’ve said, we focus most on US policy and government roles. This is largely because we anticipate that the US is now and will likely continue to be the most pivotal actor when it comes to regulating AI, with a major caveat being China, as discussed in the previous section.

      But many people interested in working on this issue can’t or don’t want to work in US policy — perhaps because they live in another country and don’t intend on moving.

      Much of the advice above still applies to these people, because roles in AI governance research and advocacy can be done outside of the United States.6 And while we don’t think it’s generally as impactful in expectation as US government work, opportunities in other governments and international organisations can be complementary to the work to be done in the US.

      The United Kingdom, for instance, may present another strong opportunity for AI policy work that would complement US work. Top UK officials have expressed interest in developing policy around AI, perhaps even a new international agency, and reducing extreme risks. And the UK government announced in 2023 the creation of a new AI Foundation Model Taskforce, with the expressed intention to drive forward safety research.

      It’s possible that by taking significant steps to understand and regulate AI, the UK will encourage or inspire US officials to take similar steps by showing how it can work.

      And any relatively wealthy country could use portions of its budget to fund AI safety research. While a lot of the most important work likely needs to be done in the US, along with leading researchers and at labs with access to large amounts of compute, some lines of research may be productive even without these resources. Any significant advances in AI safety research, if communicated properly, could be used by researchers working on the most powerful models.

      Other countries might also develop liability standards for the creators of AI systems that could incentivise corporations to proceed more cautiously and judiciously before releasing models.

      The European Union has shown that its data protection standards — the General Data Protection Regulation (GDPR) — affect corporate behaviour well beyond its geographical boundaries. EU officials have also pushed forward on regulating AI, and some research has explored the hypothesis that the impact of the union’s AI regulations will extend far beyond the continent — the so-called “Brussels effect.”

      And at some point, we do expect there will be AI treaties and international regulations, just as the international community has created the International Atomic Energy Agency, the Biological Weapons Convention, and the Intergovernmental Panel on Climate Change to coordinate around and mitigate other global catastrophic threats.

      Efforts to coordinate governments around the world to understand and share information about threats posed by AI may end up being extremely important in some future scenarios.

      The Organisation for Economic Co-operation and Development (OECD) is one place where such work might occur. So far, it has been the most prominent international actor working on AI policy and has created the AI Policy Observatory.

      Third-party countries may also be able to facilitate cooperation and reduce tensions between the United States and China, whether around AI or other potential flashpoints, should such an intervention become necessary.

      How policy gets made

      What does it actually take to make policy?

      In this section, we’ll discuss three phases of policy making: agenda setting, policy creation and development, and implementation. We’ll generally discuss these as aspects of making government policy, but they could also be applied to organisational policy. The following section will discuss the types of work that you could do to positively contribute to the broad field of AI governance.

      Agenda setting

      To enact and implement a programme of government policies that have a positive impact, you have to first ensure that the subject of potential legislation and regulation is on the agenda for policymakers.

      Agenda setting for policy involves identifying and defining problems, drawing attention to the problems and raising their salience (at least to the relevant people), and promoting potential approaches to solving them.

      For example, when politicians take office, they often enter on a platform of promises made to their constituents and supporters about which policy agendas they want to pursue. Those agendas are formed through public discussion, media narratives, internal party politics, deliberative debate, interest group advocacy, and other forms of input. An agenda can be problem-specific to varying degrees, with a broad remit such as “improving health care,” or it can be more solution-specific, aiming to create, for example, a single-payer health system.

      Issues don’t necessarily have to be unusually salient to get on the agenda. Policymakers or officials at various levels of government can prioritise solving certain problems or enacting specific proposals that aren’t the subject of national debate. In fact, sometimes making issues too salient, framing them in divisive ways, or allowing partisanship and political polarisation to shape the discussion, can make it harder to successfully put solutions on the agenda.

      What’s key for agenda setting as an approach to AI governance is that people with the relevant authority have to buy into prioritising the issue if they’re going to spend their resources and political capital on it.

      Policy creation and development

      While there does appear to be growing enthusiasm for a set or sets of policy proposals that could start to reduce the risk of an AI-related catastrophe, there’s still a lack of concrete policies that are ready to get off the ground.

      This is what the policy creation and development process is for. Researchers, advocates, civil servants, lawmakers and their staff, and others all can play a role in shaping the actual legislation and regulation that the government eventually enforces. In the corporate context, internal policy creation can serve similar functions, though it may be less enforceable unless backed up with contracts.

      Policy creation involves crafting solutions for the problem at hand with the policy tools available, usually requiring input from technical experts, legal experts, stakeholders, and the public. In countries with strong judicial review like the United States, special attention often has to be paid to make sure laws and regulations will hold up under the scrutiny of judges.

      Once concrete policy options are on the table, they must be put through the relevant decision-making process and negotiations. If the policy in question is a law that’s going to be passed, rather than a regulation, it needs to be crafted so that it will have enough support from lawmakers and other key decision makers to be enacted. This can happen in a variety of ways; it might be rolled into a larger piece of legislation that has wide support, or it may be rallied around and brought forward as its own package to be voted on individually.

      Policy creation can also be an iterative process, as policies are enacted, implemented, monitored, evaluated, and revised.

      For more details on the complex work of policy creation, we recommend Thomas Kalil’s article “Policy Entrepreneurship in the White House: Getting Things Done in Large Organisations.”

      Implementation

      Fundamentally, a policy is only an idea. For an idea to have an impact, someone actually has to carry it out. Any of the proposals for AI-related government policy — including standards and evaluations, licensing, and compute governance — will demand complex management and implementation.

      Policy implementation on this scale requires extensive planning, coordination in and out of government, communication, resource allocation, training and more — and every step in this process can be fraught with challenges. To rise to the occasion, any government implementing an AI policy regime will need talented individuals working at a high standard.

      The policy creation phase is critical and is probably the highest-priority work. But good ideas can be carried out badly, which is why policy implementation is also a key part of the AI governance agenda.

      Examples of people pursuing this path

      How to assess your fit and get started

      If you’re early on in your career, you should focus first on getting skills and other career capital to successfully contribute to the beneficial governance and regulation of AI.

      You can gain career capital for roles in many ways, and the best options will vary based on your route to impact. But broadly speaking, working in or studying fields such as politics, law, international relations, communications, and economics can all be beneficial for going into policy work.

      And expertise in AI itself, gained by studying and working in machine learning and technical AI safety, or potentially related fields such as computer hardware or information security, should also give you a big advantage.

      Testing your fit

      One general piece of career advice we give is to find relatively “cheap” tests to assess your fit for different paths. This could mean, for example, taking a policy internship, applying for a fellowship, doing a short bout of independent research as discussed above, or taking classes or courses on technical machine learning or computer engineering.

      It can also just involve talking to people currently doing a job you’re considering, to find out what the day-to-day experience of the work is like and what skills are needed.

      All of these factors can be difficult to predict in advance. While we grouped “government work” into a single category above, that label covers a wide range of positions and types of occupations in many different departments and agencies. Finding the right fit within a broad category like “government work” can take a while, and it can depend on a lot of factors out of your control, such as the colleagues you happen to work closely with. That’s one reason it can be useful to build broadly valuable career capital, so you have the option to move around to find the right role for you.

      And don’t underestimate the value of, at some point, simply applying to many relevant openings in the field and sector you’re aiming for and seeing what happens. You’ll likely face a lot of rejection with this strategy, but if you take enough chances, seeing how far you get in the process will help you assess your suitability for different kinds of roles. This can give you a lot more information than just guessing about whether you have the right experience.

      It can be useful to rule out certain types of work if you gather evidence that you’re not a strong fit for the role. For example, if you invest a lot of time and effort trying to get into reputable universities or nonprofit institutions to do AI governance research, but you get no promising offers and receive little encouragement even after applying widely, this might be a significant signal that you’re unlikely to thrive in that particular path.

      That wouldn’t mean you have nothing to contribute, but your comparative advantage may lie elsewhere.

      Read the section of our career guide on finding a job that fits you.

      Types of career capital

      For a field like AI governance, a mix of people with technical and policy expertise — and some people with both — is needed.

      While anyone involved in this field should work to maintain an evolving understanding of both the technical and policy details, you’ll probably start out focusing on either policy or technical skills to gain career capital.

      This section covers:

      Much of this advice is geared toward roles in the US, though it may be relevant in other contexts.

      Generally useful career capital

      The chapter of the 80,000 Hours career guide on career capital lists five key components that will be useful in any path: skills and knowledge, connections, credentials, character, and runway.

      For most jobs touching on policy, social skills, networking, and — for lack of a better word — political skill will be a huge asset. This can probably be learned to some extent, but some people may find they don’t have these kinds of skills and can’t or don’t want to acquire them. That’s OK — there are many other routes to having a fulfilling and impactful career, and there may be some roles within this path that demand these skills to a much lesser extent. That’s why testing your fit is important.

      Read the full section of the career guide on career capital.

      To gain skills in policy, you can pursue education in many relevant fields, such as political science, economics, and law.

      Many master’s programmes offer specific coursework on public policy, science and society, security studies, international relations, and other topics; having a graduate degree or law degree will give you a leg up for many positions.

      In the US, a master’s, a law degree, or a PhD is particularly useful if you want to climb the federal bureaucracy. Our article on US policy master’s degrees provides detailed information about how to assess the many options.

      Internships in DC are a promising route to evaluate your aptitude for policy work and to establish early career capital. Many academic institutions now offer a “Semester in DC” programme, which can let you explore placements of your choice in Congress, federal agencies, or think tanks. The Virtual Student Federal Service (VSFS) also offers part-time, remote government internships that students can take on alongside their academic commitments. These programmes can be a valuable stepping stone for aspiring policy professionals.

      Once you have a suitable background, you can take entry-level positions within parts of the government where you can build a professional network and develop your skills. In the US, you can become a congressional staffer, or take a position at a relevant federal department, such as the Department of Commerce, Department of Energy, or the Department of State. Alternatively, you can gain experience in think tanks — a particularly promising option if you have a strong aptitude for research — and government contractors, private sector companies providing services to the government.

      In Washington, DC, the culture is fairly unique. There’s a big focus on networking, and there are internal bureaucratic politics to navigate. We’ve also been told that while merit matters to a degree in US government work, it is not the primary determinant of who is most successful. If you think you wouldn’t feel able or comfortable working in this kind of environment for the long term, consider whether other paths would be better for you.

      If you find you can enjoy government and political work, impress your colleagues, and advance in your career, though, that’s a strong signal that you have the potential to make a real impact. Just being able to thrive in government work can be an extremely valuable comparative advantage.

      US citizenship

      Your citizenship may affect which opportunities are available to you. Many of the most important AI governance roles within the US — particularly in the executive branch and Congress — are only open to, or will at least heavily favour, American citizens. All key national security roles that might be especially important will be restricted to those with US citizenship, which is required to obtain a security clearance.

      This may mean that those who lack US citizenship will want to consider not pursuing roles that require it. Alternatively, they could plan to move to the US and pursue the long process of becoming a citizen. For more details on immigration pathways and types of policy work available to non-citizens, see this blog post on working in US policy as a foreign national. Consider also participating in the annual diversity visa lottery if you’re from an eligible country, as this is low effort and allows you to win a US green card if you’re lucky.

      Technical career capital

      Technical experience in machine learning, AI hardware, and related fields can be a valuable asset for an AI governance career. So it will be very helpful if you’ve studied a relevant subject area for an undergraduate or graduate degree, or a particularly productive course of independent study.

      We have a guide to technical AI safety careers, which explains how to learn the basics of machine learning.

      The following resources may be particularly useful for familiarising yourself with the field of AI safety:

      Working at an AI lab in technical roles, or other companies that use advanced AI systems and hardware, may also provide significant career capital in AI policy paths. (Read our career review discussing the pros and cons of working at a top AI lab.)

      We also have a separate career review on how becoming an expert in AI hardware could be very valuable in governance work.

      Many politicians and policymakers are generalists, as their roles require them to work in many different subject areas and on different types of problems. This means they’ll need to rely on expert knowledge when crafting and implementing policy on AI technology that they don’t fully understand. So if you can provide them this information, especially if you’re skilled at communicating it clearly, you can potentially fill influential roles.

      Some people who were initially interested in a technical AI safety career, but who have either lost interest in that path or found more promising policy opportunities, may also decide that they can effectively pivot into a policy-oriented career.

      It is common for people with STEM backgrounds to enter and succeed in US policy careers. People with technical credentials that they may regard as fairly modest — such as computer science bachelor’s degrees or a master’s in machine learning — often find their knowledge is highly valued in Washington, DC.

      Most DC jobs don't have specific degree requirements, so you don't need a policy degree to work in DC. Roles specifically addressing science and technology policy are particularly well suited to people with technical backgrounds, and people hiring for these roles will value advanced credentials like a master's or, even better, a terminal degree like a PhD or MD.

      There are many fellowship programmes specifically aiming to support people with STEM backgrounds in entering policy careers.

      This won’t be right for everybody — many people with technical skills may not have the disposition or skills necessary for engaging in policy. People in policy-related paths often benefit from strong writing and social skills as well as a comfort navigating bureaucracies and working with people holding very different motivations and worldviews.

      Other specific forms of career capital

      There are other ways to gain useful career capital that could be applied in this career path.

      • If you have or gain great communication skills as, say, a journalist or an activist, these skills could be very useful in advocacy and lobbying around AI governance.
        • Especially since advocacy around AI issues is still in its early stages, it will likely need people with experience advocating in other important cause areas to share their knowledge and skills.
      • Academics with relevant skill sets are sometimes brought into government for limited stints to serve as advisors in agencies such as the US Office of Science and Technology Policy. This isn't necessarily the foundation of a longer career in government, though it can be, and it can give an academic deeper insight into policy and politics than they might otherwise gain.
      • You can work at an AI lab in non-technical roles, gaining a deeper familiarity with the technology, the business, and the culture. (Read our career review discussing the pros and cons of working at a top AI lab.)
      • You could work on political campaigns and get involved in party politics. This is one way to help shape legislation, learn about policy, and support impactful lawmakers, and you can also potentially help shape the discourse around AI governance. Note, though, the previously mentioned risk of polarising public opinion around AI policy; entering party politics may also limit your potential for impact whenever the party you've joined doesn't hold power.
      • You could even try to become an elected official yourself, though it’s obviously competitive. If you take this route, make sure you find trustworthy and highly informed advisors to rely on to build expertise in AI, since politicians have many other responsibilities and won’t be able to focus as much on any particular issue.
      • You can focus on developing specific skill sets that might be valuable in AI governance, such as information security, intelligence work, diplomacy with China, etc.
        • Other skills: Organisational, entrepreneurial, management, diplomatic, and bureaucratic skills will also likely prove highly valuable in this career path. There may be new auditing agencies to set up or policy regimes to implement. Someone who has worked at high levels in other high-stakes industries, started an influential company, or coordinated complicated negotiations between various groups, would bring important skills to the table.

      Want one-on-one advice on pursuing this path?

      Because this is one of our priority paths, if you think this path might be a great option for you, we’d be especially excited to advise you on next steps, one-on-one. We can help you consider your options, make connections with others working in the same field, and possibly even help you find jobs or funding opportunities.

      APPLY TO SPEAK WITH OUR TEAM

      Where can this kind of work be done?

      Since successful AI governance will require work from governments, industry, and other parties, there will be many potential jobs and places to work for people in this path. The landscape will likely shift over time, so if you’re just starting out on this path, the places that seem most important might be different by the time you’re pivoting to using your career capital to make progress on the issue.

      Within the US government, for instance, it’s not clear which bodies will be most impactful when it comes to AI policy in five years. It will likely depend on choices that are made in the meantime.

      That said, to help you orient, it seems useful to share our understanding of which parts of the government are generally influential in technology governance and most involved right now. Gaining AI-related experience in government should still serve you well if you later want to move into a more impactful AI-related role once the highest-impact areas to work in are clearer.

      We’ll also give our current sense of important actors outside government where you might be able to build career capital and potentially have a big impact.

      Note that this list has by far the most detail about places to work within the US government. We would like to expand it to include more options as we learn more. You can use this form to suggest additional options for us to include. (And the fact that an option isn’t on this list shouldn’t be taken to mean we recommend against it or even that it would necessarily be less impactful than the places listed.)

      We have more detail on other options in separate (and older) career reviews.

      With that out of the way, here are some of the places where someone could do promising work or gain valuable career capital:

      In Congress, you can either work directly for lawmakers themselves or as staff on a legislative committee. Staff roles on the committees are generally more influential on legislation and more prestigious, but for that reason, they’re more competitive. If you don’t have that much experience, you could start out in an entry-level job staffing a lawmaker and then later try to transition to staffing a committee.

      Some people we’ve spoken to expect the following committees — and some of their subcommittees — in the House and Senate to be most impactful in the field of AI. You might aim to work on these committees or for lawmakers who have significant influence on these committees.

      House of Representatives

      • House Committee on Energy and Commerce
      • House Judiciary Committee
      • House Committee on Science, Space, and Technology
      • House Committee on Appropriations
      • House Armed Services Committee
      • House Committee on Foreign Affairs
      • House Permanent Select Committee on Intelligence

      Senate

      • Senate Committee on Commerce, Science, and Transportation
      • Senate Judiciary Committee
      • Senate Committee on Foreign Relations
      • Senate Committee on Homeland Security and Governmental Affairs
      • Senate Committee on Appropriations
      • Senate Committee on Armed Services
      • Senate Select Committee on Intelligence
      • Senate Committee on Energy and Natural Resources
      • Senate Committee on Banking, Housing, and Urban Affairs

      The Congressional Research Service, a nonpartisan legislative agency, also offers opportunities to conduct research that can impact policy design across all subjects.

      In general, we don’t recommend taking entry-level jobs within the executive branch for this path because it’s very difficult to progress your career through the bureaucracy at this level. It’s better to get a law degree or relevant master’s degree, which can give you the opportunity to start with more seniority.

      The influence of different agencies over AI regulation may shift over time, and there may even be entirely new agencies set up to regulate AI at some point, which could become highly influential. Whichever agency may be most influential in the future, it will be useful to have accrued career capital working effectively in government, creating a professional network, learning about day-to-day policy work, and deepening your knowledge of all things AI.

      We have a lot of uncertainty about this topic, but here are some of the agencies that may have significant influence on at least one key dimension of AI policy as of this writing:

      • Executive Office of the President (EOP)
        • Office of Management and Budget (OMB)
        • National Security Council (NSC)
        • Office of Science and Technology Policy (OSTP)
      • Department of State
        • Office of the Special Envoy for Critical and Emerging Technology (S/TECH)
        • Bureau of Cyberspace and Digital Policy (CDP)
        • Bureau of Arms Control, Verification and Compliance (AVC)
        • Office of Emerging Security Challenges (ESC)
      • Federal Trade Commission
      • Department of Defense (DOD)
        • Chief Digital and Artificial Intelligence Office (CDAO)
        • Emerging Capabilities Policy Office
        • Defense Advanced Research Projects Agency (DARPA)
        • Defense Technology Security Administration (DTSA)
      • Intelligence Community (IC)
        • Intelligence Advanced Research Projects Activity (IARPA)
        • National Security Agency (NSA)
        • Science advisor roles within the various agencies that make up the intelligence community
      • Department of Commerce (DOC)
        • The Bureau of Industry and Security (BIS)
        • The National Institute of Standards and Technology (NIST)
        • CHIPS Program Office
      • Department of Energy (DOE)
        • Artificial Intelligence and Technology Office (AITO)
        • Advanced Scientific Computing Research (ASCR) Program Office
      • National Science Foundation (NSF)
        • Directorate for Computer and Information Science and Engineering (CISE)
        • Directorate for Technology, Innovation and Partnerships (TIP)
      • Cybersecurity and Infrastructure Security Agency (CISA)

      Readers can find listings for roles in these departments and agencies at the federal government's job board, USAJOBS; a more curated list of openings offering potentially high-impact roles and career capital is on the 80,000 Hours job board.

      We do not currently recommend attempting to join the US government via the military if you are aiming for a career in AI policy. There are many levels of seniority to rise through and many people competing for places, and initially you have to spend all of your time doing work unrelated to AI. However, having military experience already can be valuable career capital for other important roles in government, particularly national security positions. We would consider this route more competitive for military personnel who have been to an elite military academy, such as West Point, or for commissioned officers at rank O-3 or above.

      Policy fellowships are among the best entryways into policy work. They offer benefits such as first-hand policy experience, funding, training, mentoring, and networking. While many require an advanced degree, some are open to college graduates.

      Think tanks and research organisations

      • Center for Security and Emerging Technology (CSET)
      • Center for a New American Security
      • RAND Corporation
      • The MITRE Corporation
      • Brookings Institution
      • Carnegie Endowment for International Peace
      • Center for Strategic and International Studies (CSIS)
      • Federation of American Scientists (FAS)
      • Alignment Research Center
      • Open Philanthropy
      • Institute for AI Policy and Strategy
      • Epoch AI
      • Centre for the Governance of AI (GovAI)
      • Center for AI Safety (CAIS)
      • Legal Priorities Project
      • Apollo Research
      • Centre for Long-Term Resilience
      • AI Impacts
      • Johns Hopkins Applied Physics Lab

      AI labs

      You could also build a deeper familiarity with the technology and gain career capital in roles at AI labs themselves. (Read our career review discussing the pros and cons of working at a top AI lab.)

      International organisations

      • Organisation for Economic Co-operation and Development (OECD)
      • International Atomic Energy Agency (IAEA)
      • International Telecommunication Union (ITU)
      • International Organization for Standardization (ISO)
      • European Union institutions (e.g., European Commission)
      • Simon Institute for Longterm Governance

      Our job board features opportunities in AI safety and policy:

        View all opportunities

        How this career path can go wrong

        Doing harm

        As we discuss in an article on accidental harm, there are many ways to set back a new field that you’re working in when you’re trying to do good, and this could mean your impact is negative rather than positive. (You may also want to read our article on harmful careers.)

        It seems likely there’s a lot of potential to inadvertently cause harm in the emerging field of AI governance. We discussed some possibilities in the section on advocacy and lobbying. Some other possibilities include:

        • Pushing for a given policy to the detriment of a superior policy
        • Communicating about the risks of AI in a way that ratchets up geopolitical tensions
        • Enacting a policy that has the opposite impact of its intended effect
        • Setting policy precedents that could be exploited by dangerous actors down the line
        • Funding projects in AI that turn out to be dangerous
        • Sending the message, implicitly or explicitly, that the risks are being managed when they aren’t, or that they’re lower than they in fact are
        • Suppressing technology that would actually be extremely beneficial for society

        The trouble is that we have to act with incomplete information, so it may never be entirely clear when, or if, people in AI governance are falling into these traps. Staying aware of these potential failure modes will help you remain alert to them, though, and you should be open to changing course if you find evidence that your actions may be doing damage.

        And we recommend keeping in mind the following pieces of general guidance from our article on accidental harm:

        1. Ideally, eliminate courses of action that might have a big negative impact.
        2. Don’t be a naive optimizer.
        3. Have a degree of humility.
        4. Develop expertise, get trained, build a network, and benefit from your field’s accumulated wisdom.
        5. Follow cooperative norms
        6. Match your capabilities to your project and influence.
        7. Avoid hard-to-reverse actions.

        Burning out

        We think this work is exceptionally pressing and valuable, so we encourage our readers who might have a strong personal fit for governance work to test it out. But going into government, in particular, can be difficult. Some people we’ve advised have gone into policy roles with the hope of having an impact, only to burn out and move on.

        At the same time, many policy practitioners find their work very meaningful, interesting, and varied.

        Some roles in government may be especially challenging for the following reasons:

        • Some roles can be very fast-paced, involving relatively high stress and long hours. This is particularly true in Congress and senior executive branch positions and much less so in think tanks or junior agency roles.
        • It can take a long time to get into positions with much autonomy or decision-making authority.
        • Progress on the issues you care about can be slow, and you often have to work on other priorities. Congressional staffers in particular typically have very broad policy portfolios.
        • Work within bureaucracies faces many limitations, which can be frustrating.
        • It can be demotivating to work with people who don’t share your values. Though note that policy can select for altruistic people — even if they have different beliefs about how to do good.
        • The work isn’t typically well paid relative to comparable positions outside of government.

        So we recommend speaking to people in the kinds of positions you might aim to have in order to get a sense of whether the career path would be right for you. And if you do choose to pursue it, look out for signs that the work may be having a negative effect on you and seek support from people who understand what you care about.

        If you end up wanting or needing to leave and transition into a new path, that’s not necessarily a loss or a reason for regret. You will likely make important connections and learn a lot of useful information and skills. This career capital can be useful as you transition into another role, perhaps pursuing a complementary approach to AI governance and coordination.

        What the increased attention on AI means

        We’ve been concerned about risks posed by AI for years. Based on the arguments that this technology could potentially cause a global catastrophe, and otherwise have a dramatic impact on future generations, we’ve advised many people to work to mitigate the risks.

        The arguments for the risk aren't completely conclusive, in our view. But they are worth taking seriously, and given that few others in the world seemed to be devoting much time even to figuring out how big the threat was or how to mitigate it (while progress in making AI systems more powerful was accelerating), we concluded it was worth ranking among our top priorities.

        Now that there’s increased attention on AI, some might conclude that it’s less neglected and thus less pressing to work on. However, the increased attention on AI also makes many interventions potentially more tractable than they had been previously, as policymakers and others are more open to the idea of crafting AI regulations.

        And while more attention is now being paid to AI, it’s not clear it will be focused on the most important risks. So there’s likely still a lot of room for important and pressing work positively shaping the development of AI policy.

        Read next

        If you’re interested in this career path, we recommend checking out some of the following articles next.

        Learn more

        Top recommendations

        Further recommendations

        Read next:  Learn about other high-impact careers

        Want to consider more paths? See our list of the highest-impact career paths according to our research.

        Plus, join our newsletter and we’ll mail you a free book

        Join our newsletter and we’ll send you a free copy of The Precipice — a book by philosopher Toby Ord about how to tackle the greatest threats facing humanity. T&Cs here.

        The post AI governance and coordination appeared first on 80,000 Hours.
