Software and tech skills


In a nutshell:

You can start building software and tech skills by learning to code and then doing some programming projects before applying for jobs. You can apply (as well as continue to develop) your software and tech skills by specialising in a related area, such as technical AI safety research, software engineering, or information security. You can also earn to give, and this in-demand skill set has great backup options.

Key facts on fit

There’s no single profile for being great at software and tech skills. It’s particularly cheap and easy to try out programming (which is a core part of this skill set) via classes online or in school, so we’d suggest doing that. But if you enjoy thinking systematically, like building things, or have good quantitative skills, those are all good signs.

Why are software and tech skills valuable?

By “software and tech” skills we basically mean what your grandma would call “being good at computers.”

When investigating the world’s most pressing problems, we’ve found that in many cases there are software-related bottlenecks.

For example, machine learning (ML) engineering is a core skill needed to contribute to AI safety technical research. Experts in information security are crucial to reducing the risks of engineered pandemics, as well as other risks. And software engineers are often needed by nonprofits, whether they’re working on reducing poverty or mitigating the risks of climate change.

Also, having skills in this area means you’ll likely be highly paid, offering excellent options to earn to give.

Moreover, basic programming skills can be extremely useful whatever you end up doing. You’ll find ways to automate tasks or analyse data throughout your career.

What does a career using software and tech skills involve?

A career using these skills typically involves three steps:

  1. Learn to code with a university course or self-study and then find positions where you can get great mentorship. (Read more about how to get started.)
  2. Optionally, specialise in a particular area, for example, by building skills in machine learning or information security.
  3. Apply your skills to helping solve a pressing global problem. (Read more about how to have an impact with software and tech.)

There’s no general answer about when to switch from a focus on learning to a focus on impact. Once you have some basic programming skills, you should look for positions that both further improve your skills and have an impact, and then decide based on which specific opportunities seem best at the time.

Software and tech skills can also be helpful in other, less directly related career paths, like being an expert in AI hardware (for which you’ll also need a specialist knowledge skill set) or founding a tech startup (for which you’ll also need an organisation-building skill set). Being good with computers is also often part of the skills required for quantitative trading.

Programming also tends to come in handy in a wide variety of situations and jobs; there will be other great career paths using these skills that we haven’t written about.

How to evaluate your fit

How to predict your fit in advance

Some indications you’ll be a great fit include:

  • The ability to break down problems into logical parts and generate and test hypotheses
  • Willingness to try out many different solutions
  • High attention to detail
  • Broadly good quantitative skills

The best way to gauge your fit is just to try out programming.

It seems likely that the best software engineers are significantly better than the average — and we’d guess this is also true for other technical roles using software. In particular, these very best software engineers are often people who spend huge amounts of time practising. This means that if you enjoy coding enough to want to do it both as a job and in your spare time, you are likely to be a good fit.

How to tell if you’re on track

If you’re at university or in a bootcamp, it’s especially easy to tell if you’re on track. Good signs are that you’re succeeding at your assigned projects or getting good marks. An especially good sign is that you’re progressing faster than many of your peers.

In general, a great indicator of your success is that the people you work with most closely are enthusiastic about you and your work, especially if those people are themselves impressive!

If you’re building these skills at an organisation, signs you’re on track might include:

  • You get job offers at organisations you’d like to work for.
  • You’re promoted within your first two years.
  • You receive excellent performance reviews.
  • You’re asked to take on progressively more responsibility over time.
  • After some time, you’re becoming the person on your team whom people turn to for help solving their problems, and people want you to teach them how to do things.
  • You’re building things that others are able to use successfully without your input.
  • Your manager / colleagues suggest you might take on more senior roles in the future.
  • You ask your superiors for their honest assessment of your fit and they are positive (e.g. they tell you you’re in the top 10% of people they can imagine doing your role).

How to get started building software and tech skills

Independently learning to code

As a complete beginner, you can write a Python program in less than 20 minutes that reminds you to take a break every two hours.
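Here’s a minimal sketch of what that kind of beginner project might look like (the two-hour interval and the printed message are just illustrative choices):

```python
# A tiny "take a break" reminder. It waits two hours, prints a message
# (with a terminal beep), and repeats until you stop it with Ctrl+C.
import time

BREAK_INTERVAL_SECONDS = 2 * 60 * 60  # two hours

while True:
    time.sleep(BREAK_INTERVAL_SECONDS)
    print("\aTime to take a break! Stand up and stretch.")
```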

A great way to learn the very basics is by working through a free beginner course like Automate the Boring Stuff with Python by Al Sweigart.

Once you know the fundamentals, you could try taking an intro to computer science or intro to programming course. If you’re not at university, there are plenty of courses available online.

Don’t be discouraged if your code doesn’t work the first time — that’s what normally happens when people code!

A great next step is to try out doing a project with other people. This lets you test out writing programs in a team and working with larger codebases. It’s easy to come up with programming projects to do with friends — you can see some examples here.

Once you have some more experience, contributing to open-source projects in particular lets you work with very large existing codebases.

Attending a coding bootcamp

We’ve advised many people who managed to get junior software engineer jobs in less than a year by going to a bootcamp.

Coding bootcamps are focused on taking people with little knowledge of programming to as highly paid a job as possible within a couple of months. This is a great entry route if you don’t already have much background, though some claim the long-term prospects are not as good as those from studying at university or self-studying in a particularly thorough way, because you’ll lack a deep understanding of computer science. Course Report is a great guide to choosing a bootcamp. Be careful to avoid low-quality bootcamps. To find out more, read our interview with an App Academy instructor.

Studying at university

Studying computer science at university (or another subject involving lots of programming) is a great option because it allows you to learn to code in an especially structured way while the opportunity cost of your time is lower.

It will also give you a better theoretical understanding of computing than a bootcamp (which can be useful for getting the most highly-paid and intellectually interesting jobs), a good network, some prestige, and a better understanding of lower-level languages like C. Having a computer science degree also makes it easier to get a US work visa if you’re not from the US.

Doing internships

If you can find internships, ideally at the sorts of organisations you might want to work for to build your skills (like big tech companies or startups), you’ll gain practical experience and the key skills you wouldn’t otherwise pick up from academic degrees (e.g. using version control systems and powerful text editors). Take a look at our list of companies with software and machine learning internships.

AI-assisted coding

As you’re getting started, it’s probably worth thinking about how developments in AI are going to affect programming in the future — and getting used to AI-assisted coding.

We’d recommend trying out GitHub Copilot, which suggests code for you based on your comments and existing code. Cursor is a popular AI-assisted code editor based on VS Code.

You can also just ask AI chat assistants for help. ChatGPT is particularly helpful (although only if you use the paid version).

We think it’s reasonably likely that many software and tech jobs in the future will be heavily based on using tools like these.

Building a specialty

Depending on how you’re going to use software and tech skills, it may be useful to build up your skills in a particular area. Here’s how to get started in a few relevant areas:

If you’re currently at university, it’s worth checking if you can take an ML course (even if you’re not majoring in computer science).

But if that’s not possible, here are some suggestions of places you might start if you want to self-study the basics:

PyTorch is a very common package used for implementing neural networks, and probably worth learning! When I was first learning about ML, my first neural network was a 3-layer convolutional neural network with L2 regularisation classifying characters from the MNIST database. This is a pretty common first challenge and a good way to learn PyTorch.
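To give a flavour of what a first project like that involves, here is a heavily simplified, illustrative PyTorch sketch of a small convolutional network trained on MNIST (the layer sizes, learning rate, and use of weight decay as L2 regularisation are arbitrary demonstration choices, not recommendations):

```python
# Illustrative sketch: a small convolutional network for MNIST in PyTorch.
# The optimiser's weight_decay term plays the role of L2 regularisation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7, 10)  # 28x28 input, pooled twice -> 7x7

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        return self.fc(x.flatten(1))

train_data = datasets.MNIST("data", train=True, download=True,
                            transform=transforms.ToTensor())
loader = DataLoader(train_data, batch_size=64, shuffle=True)

model = SmallCNN()
optimiser = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)

for images, labels in loader:  # one pass over the training set
    optimiser.zero_grad()
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    optimiser.step()
```

Even getting a toy example like this to train (and debugging it when it doesn’t) teaches a surprising amount about how the pieces fit together.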

You may also need to learn some maths.

The maths of deep learning relies heavily on calculus and linear algebra, and statistics can be useful too — although generally learning the maths is much less important than programming and basic, practical ML.
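As a rough, made-up illustration of where each area shows up, here is a toy example of fitting a linear model by gradient descent with NumPy: computing the gradient is the calculus part, and the matrix and vector operations are the linear algebra part.

```python
# Toy example: fit weights w so that X @ w approximates y, by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))              # 100 examples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(3)
learning_rate = 0.1
for _ in range(500):
    # Gradient of the mean squared error 0.5 * mean((X @ w - y)**2) w.r.t. w
    grad = X.T @ (X @ w - y) / len(y)
    w -= learning_rate * grad

print(w)  # should end up close to true_w
```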

Again, if you’re still at university we’d generally recommend studying a quantitative degree (like maths, computer science, or engineering), most of which will cover all three areas pretty well.

If you want to actually get good at maths, you have to be solving problems. So, generally, the most useful thing that textbooks and online courses provide isn’t their explanations — it’s a set of exercises to try to solve in order, with some help if you get stuck.

If you want to self-study (especially if you don’t have a quantitative degree) here are some possible resources:

You might be able to find resources that cover all these areas, like Imperial College’s Mathematics for Machine Learning.

Most people get started in information security by studying computer science (or similar) at a university, and taking some cybersecurity courses — although this is by no means necessary to be successful.

You can get an introduction through the Google Foundations of Cybersecurity course. The full Google Cybersecurity Professional Certificate series is also worth working through to learn more about relevant technical topics.

For more, take a look at how to try out and get started in information security.

Data science combines programming with statistics.
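As a small, hypothetical illustration of that combination, the sketch below uses programming (pandas) to wrangle some A/B test data and statistics (a t-test) to check whether the difference between groups looks real; the file name and column names are made up.

```python
# Hypothetical example: compare conversion rates between two groups of an A/B test.
# "experiment.csv", "group", and "converted" are invented for illustration.
import pandas as pd
from scipy import stats

df = pd.read_csv("experiment.csv")  # columns: group ("A" or "B"), converted (0 or 1)

print(df.groupby("group")["converted"].mean())  # conversion rate per group

a = df.loc[df["group"] == "A", "converted"]
b = df.loc[df["group"] == "B", "converted"]
t_stat, p_value = stats.ttest_ind(a, b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```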

One way to get started is by doing a bootcamp. Data science bootcamps are a similar deal to programming bootcamps, although they tend to mainly recruit science PhDs. If you’ve just finished a science PhD and don’t want to continue in academia, this is a good option to consider (although you should probably first consider other ways of using your software and tech skills). Similarly, you can learn data analysis, statistics, and modelling by taking the right graduate programme.

Data scientists are well paid — offering the potential to earn to give — and have high job satisfaction.

To learn more, see our full career review of data science.

Depending on how you’re aiming to have an impact with these skills (see the next section), you may also need to develop other skills. We’ve written about some other relevant skill sets:

For more, see our full list of impactful skills.

Once you have these skills, how can you best apply them to have an impact?

The problem you work on is probably the biggest driver of your impact. The first step is to make an initial assessment of which problems you think are most pressing (even if you change your mind over time, you’ll need to decide where to start working).

Once you’ve done that, the next step is to identify the highest-potential ways to use software and tech skills to help solve your top problems.

There are five broad categories here:

While some of these options (like protecting dangerous information) will require building up some more specialised skills, being a great programmer will let you move around most of these categories relatively easily, and the earning to give option means you’ll always have a pretty good backup plan.

Find jobs that use software and tech skills

See our curated list of job opportunities for this path.


    AI governance and coordination
    As advancing AI capabilities gained widespread attention in late 2022 and 2023 — particularly after the release of OpenAI’s ChatGPT and Microsoft’s Bing chatbot — interest in governing and regulating these systems has grown. Discussion of the potential catastrophic risks of misaligned or uncontrollable AI also became more prominent, potentially opening up opportunities for policy that could mitigate the threats.

    There’s still a lot of uncertainty about which strategies for AI governance and coordination would be best, though parts of the community of people working on this subject may be coalescing around some ideas. See, for example, a list of potential policy ideas from Luke Muehlhauser of Open Philanthropy and a survey of expert opinion on best practices in AI safety and governance.

    But there’s no roadmap here. There’s plenty of room for debate about which policies and proposals are needed.

    We may not have found the best ideas yet in this space, and many of the existing policy ideas haven’t yet been developed into concrete, public proposals that could actually be implemented. We hope to see more people enter this field to develop expertise and skills that will contribute to risk-reducing AI governance and coordination.

    In a nutshell: Advanced AI systems could have massive impacts on humanity and potentially pose global catastrophic risks. There are opportunities in AI governance and coordination around these threats to shape how society responds to and prepares for the challenges posed by the technology.

    Given the high stakes, pursuing this career path could be many people’s highest-impact option. But they should be very careful not to accidentally exacerbate the threats rather than mitigate them.

    Recommended

    If you are well suited to this career, it may be the best way for you to have a social impact.

    Review status

    Based on an in-depth investigation 

    “What you’re doing has enormous potential and enormous danger.” — US President Joe Biden, to the leaders of the top AI labs

    Why this could be a high-impact career path

    Artificial intelligence has advanced rapidly. In 2022 and 2023, new language and image generation models gained widespread attention for their abilities, blowing past previously established benchmarks for what the technology could do.

    And the applications of these models are still new; with more tweaking and integration into society, the existing AI systems may become easier to use and more ubiquitous in our lives.

    We don’t know where all these developments will lead us. There’s reason to be optimistic that AI will eventually help us solve many of the world’s problems, raising living standards and helping us build a more flourishing society.

    But there are also substantial risks. AI can be used for both good and ill. And we have concerns that the technology could, without the proper controls, accidentally lead to a major catastrophe — and perhaps even cause human extinction. We discuss the arguments that these risks exist in our in-depth problem profile.

    Because of these risks, we encourage people to work on finding ways to reduce them through technical research and engineering.

    But a range of strategies for risk reduction will likely be needed. Government policy and corporate governance interventions in particular may be necessary to ensure that AI is developed to be as broadly beneficial as possible and without unacceptable risk.

    Governance generally refers to the processes, structures, and systems that carry out decision making for organisations and societies at a high level. In the case of AI, we expect the governance structures that matter most to be national governments and organisations developing AI — as well as some international organisations and perhaps subnational governments.

    Some aims of AI governance work could include:

    • Preventing the deployment of any AI systems that pose a significant and direct threat of catastrophe
    • Mitigating the negative impact of AI technology on other catastrophic risks, such as nuclear weapons and biotechnology
    • Guiding the integration of AI technology into our society and economy with limited harms and to the advantage of all
    • Reducing the risk of an “AI arms race” between nations and between companies, in which competition leads to technological advancement without the necessary safeguards and caution
    • Ensuring that those creating the most advanced AI models are incentivised to be cooperative and concerned about safety
    • Slowing down the development and deployment of new systems if the advancements are likely to outpace our ability to keep them safe and under control

    We need a community of experts who understand the intersection of modern AI systems and policy, as well as the severe threats and potential solutions. This field is still young, and many of the paths within it aren’t clear and are not sure to pan out. But there are relevant professional paths that will provide you valuable career capital for a variety of positions and types of roles.

    The rest of this article explains what work in this area might involve, how you can develop career capital and test your fit, and where some promising places to work might be.

    What kinds of work might contribute to AI governance?

    What should governance-related work on AI actually involve? There are a variety of ways to pursue AI governance strategies, and as the field becomes more mature, the paths are likely to become clearer and more established.

    We generally don’t think people early in their careers should be aiming for a specific job that they think would be high-impact. They should instead aim to develop skills, experience, knowledge, judgement, networks, and credentials — what we call career capital — that they can later use when an opportunity to have a positive impact is ripe.

    This may involve following a pretty standard career trajectory, or it may involve bouncing around in different kinds of roles. Sometimes, you just have to apply to a bunch of different roles and test your fit for various types of work before you know what you’ll be good at. The main thing to keep in mind is that you should try to get excellent at something for which you have strong personal fit and that will let you contribute to solving pressing problems.

    In the AI governance and coordination space, we see at least six large categories of work that we expect to be important:

    There aren’t necessarily openings in all these categories at the moment for careers in AI governance, but they represent a range of sectors in which impactful work may potentially be done in the coming years and decades. Thinking about the different skills and forms of career capital that will be useful for the categories of work you could see yourself doing in the future can help you figure out what your immediate next steps should be. (We discuss how to assess your fit and enter this field below.)

    You may want to — and indeed it may be advantageous to — move between these different categories of work at different points in your career. You can also test out your fit for various roles by taking internships, fellowships, entry-level jobs, temporary placements, or even doing independent research, all of which can serve as career capital for a range of paths.

    We have also reviewed career paths in AI technical safety research and engineering and information security, which may be crucial to reducing risks from AI, and which may play a significant role in an effective governance agenda. People serious about pursuing a career in AI governance should familiarise themselves with these fields as well.

    Government work

    Taking a role within government could lead to playing an important role in the development, enactment, and enforcement of AI policy.

    Note that we generally expect that the US federal government will be the most significant player in AI governance for the foreseeable future. This is because of its global influence and its jurisdiction over much of the AI industry, including the top three AI labs training state-of-the-art, general-purpose models (Anthropic, OpenAI, and Google DeepMind) and key parts of the chip supply chain. Much of this article focuses on US policy and government.

    But other governments and international institutions may also end up having important roles to play in certain scenarios. For example, the UK government, the European Union, China, and potentially others, may all present opportunities for impactful AI governance work. Some US state-level governments, such as California, may also offer opportunities for impact and gaining career capital.

    What would this work involve? Sections below discuss how to enter US policy work and which areas of government you might aim for.

    But at the broadest level, people interested in positively shaping AI policy should aim to gain the skills and experience to work in areas of government with some connection to AI or emerging technology policy.

    This can include roles in: legislative branches, domestic regulation, national security, diplomacy, appropriations and budgeting, and other policy areas.

    If you can get a role out of the gate that is already working directly on this issue, such as a staff position with a lawmaker who is focused on AI, that could be a great opportunity.

    Otherwise, you should seek to learn as much as you can about how policy works and which government roles might allow you to have the most impact, while establishing yourself as someone who’s knowledgeable about the AI policy landscape. Having almost any significant government role that touches on some aspect of AI, or having some impressive AI-related credential, may be enough to get you quite far.

    One way to advance your career in government on a specific topic is what some call “getting visibility” — that is, using your position to learn about the landscape and connect with the actors and institutions that affect the policy area you care about. You’ll want to be invited to meetings with other officials and agencies, be asked for input on decisions, and engage socially with others who work in the policy area. If you can establish yourself as a well-regarded expert on an important but neglected aspect of the issue, you’ll have a better shot at being included in key discussions and events.

    Career trajectories within government can be broken down roughly as follows:

    • Standard government track: This involves entering government at a relatively low level and building up your career capital on the inside by climbing the seniority ladder. For the highest impact, you’d ideally end up reaching senior levels by sticking around, gaining skills and experience, and getting promoted. You may move between agencies, departments, or branches.
    • Specialisation career capital: You can also move in and out of government throughout your career. People on this trajectory will also work at nonprofits, think tanks, industry labs, political parties, academia, and other organisations. But they will primarily focus on becoming an expert in a topic — such as AI. It can be harder to get seniority this way, but the value of expertise and experience can sometimes outweigh seniority.
    • Direct-impact work: Some people move into government jobs without a longer plan to build career capital because they see an opportunity for direct, immediate impact. This might look like getting tapped to lead an important commission or providing valuable input on an urgent project. We don’t generally recommend planning on this kind of strategy for your career, but it’s good to be aware of it as an opportunity that might be worth taking at some point.

    Research on AI policy and strategy

    There’s still a lot of research to be done on the most important avenues for AI governance. While there are some promising proposals for a system of regulatory and strategic steps that can help reduce the risk of an AI catastrophe, there aren’t many concrete and publicly available policy proposals ready for adoption.

    The world needs more concrete proposals for AI policies that would really start to tackle the biggest threats; developing such policies, and deepening our understanding of the strategic needs of the AI governance space, should be high priorities.

    Other relevant research could involve surveys of public opinion that could inform communication strategies, legal research about the feasibility of proposed policies, technical research on issues like compute governance, and even higher-level theoretical research into questions about the societal implications of advanced AI. Some research, such as that done by Epoch AI, focuses on forecasting the future course of AI developments, which can influence AI governance decisions.

    However, several experts we’ve talked to warn that a lot of research on AI governance may prove to be useless, so it’s important to be reflective and seek input from others in the field — both from experienced policy practitioners and technical experts — about what kind of contribution you can make. We list several research organisations below that we think would be good to work at in order to pursue promising research on this topic.

    One potentially useful approach for testing your fit for this work — especially when starting out in this research — is to write up analyses and responses to existing work on AI policy or investigate some questions in this area that haven’t been the subject of much attention. You can then share your work widely, send it out for feedback from people in the field, and evaluate how much you enjoy the work and whether you might productively contribute to this research longer term.

    But it’s possible to spend too long testing your fit without making much progress, and some people find that they’re best able to contribute when they’re working on a team. So don’t overweight or over-invest in independent work, especially if there are few signs it’s working out especially well for you. This kind of project can make sense for maybe a month or a bit longer — but it’s unlikely to be a good idea to spend much more than that without meaningful funding or some really encouraging feedback from people working in the field.

    If you have the experience to be hired as a researcher, work on AI governance can be done in academia, nonprofit organisations, and think tanks. Some government agencies and committees, too, perform valuable research.

    Note that universities and academia have their own priorities and incentives that often aren’t aligned with producing the most impactful work. If you’re already an established researcher with tenure, it may be highly valuable to pivot into work on AI governance — this position may even give you a credible platform from which to advocate for important ideas.

    But if you’re just starting out in a research career and want to focus on this issue, you should carefully consider whether your work will be best supported inside or outside of academia. For example, if you know of a specific programme with particular mentors who will help you pursue answers to critical questions in this field, it might be worth doing. We’re less inclined to encourage people to pursue generic academic-track roles with the vague hope that one day they can do important research on this topic.

    Advanced degrees in policy or relevant technical fields may well be valuable, though — see more discussion of this in the section on how to assess your fit and get started.

    Industry work

    While government policy is likely to play a key role in coordinating various actors interested in reducing the risks from advanced AI, internal policy and corporate governance at the largest AI labs themselves is also a powerful tool. We think people who care about reducing risk can potentially do valuable work internally at industry labs. (Read our career review of non-technical roles at AI labs.)

    At the highest level, deciding who sits on corporate boards, what kind of influence those boards have, and to what extent the organisation is structured to seek profit and shareholder value as opposed to other aims, can end up having a major impact on the direction a company takes. If you might be able to get a leadership role at a company developing frontier AI models, such as a management position or a seat on the board, it could potentially be a very impactful position.

    If you’re able to join a policy team at a major lab, you can model threats and help develop, implement, and evaluate promising proposals internally to reduce risks. And you can build consensus around best practices, such as strong information security policies, using outside evaluators to find vulnerabilities and dangerous behaviours in AI systems (red teaming), and testing out the latest techniques from the field of AI safety.

    And if, as we expect, AI labs face increasing government oversight, industry governance and policy work can ensure compliance with any relevant laws and regulations that get put in place. Interfacing with government actors and facilitating coordination over risk reduction approaches could be impactful work.

    In general, the more cooperative AI labs are with each other and outside groups seeking to minimise catastrophic risks from AI, the better. And this doesn’t seem to be an outlandish hope — many industry leaders have expressed concern about extinction risks and have even called for regulation of the frontier technology they’re creating.

    That said, we can expect this cooperation to take substantial work — it would be surprising if the best policies for reducing risks were totally uncontroversial in industry, since labs also face huge commercial incentives to build more powerful systems, which can carry more risk. The more everyone’s able to communicate and align their incentives, the better things seem likely to go.

    Advocacy and lobbying

    People outside of government or AI labs can influence the shape of public policy and corporate governance via advocacy and lobbying.

    As of this writing, there has not yet been a large public movement in favour of regulating or otherwise trying to reduce risks from AI, so there aren’t many openings that we know about in this category. But we expect growing interest in this area to open up new opportunities to press for political action and policy changes at AI labs, and it could make sense to start building career capital and testing your fit now for different kinds of roles that would fall into this category down the line.

    If you believe AI labs may be disposed to advocate for generally beneficial regulation, you might want to try to work for them, or become a lobbyist for the industry as a whole, to push the government to adopt specific policies. It’s plausible that AI labs will have by far the best understanding of the underlying technology, as well as the risks, failure modes, and safest paths forward.

    On the other hand, it could be the case that AI labs have too much of a vested interest in the shape of regulations to reliably advocate for broadly beneficial policies. If that’s right, it may be better to join or create advocacy organisations unconnected from the industry — supported by donations or philanthropic foundations — that can take stances that are opposed to the labs’ commercial interests.

    For example, it could be the case that the best approach from a totally impartial perspective would be at some point to deliberately slow down or halt the development of increasingly powerful AI models. Advocates could make this demand of the labs themselves or of the government to slow down AI progress. It may be difficult to come to this conclusion or advocate for it if you have strong connections to the companies creating these systems.

    It’s also possible that the best outcomes will be achieved with a balance of industry lobbyists and outside lobbyists and advocates making the case for their preferred policies — as both bring important perspectives.

    We expect there will be increasing public interest in AI policy as the technological advancements have ripple effects in the economy and wider society. And if there’s increasing awareness of the impact of AI on people’s lives, the risks the technology poses may become more salient to the public, which will give policymakers strong incentives to take the problem seriously. It may also bring new allies into the cause of ensuring that the development of advanced AI goes well.

    Advocacy can also:

    • Highlight neglected but promising approaches to governance that have been uncovered in research
    • Facilitate the work of policymakers by showcasing the public’s support for governance measures
    • Build bridges between researchers, policymakers, the media, and the public by communicating complicated ideas in an accessible way to many audiences
    • Pressure corporations themselves to proceed more cautiously
    • Change public sentiment around AI and discourage irresponsible behaviour by individual actors, such as the spreading of powerful open-source models

    However, note that advocacy can sometimes backfire. Predicting how information will be received is far from straightforward. Drawing attention to a cause area can sometimes trigger a backlash; presenting problems with certain styles of rhetoric can alienate people or polarise public opinion; spreading misleading or mistaken messages can discredit yourself and fellow advocates. It’s important that you are aware of the risks, consult with others (particularly those who you respect but might disagree with tactically), and commit to educating yourself deeply about the topic before expounding on it in public.

    You can read more in the section about doing harm below. We also recommend reading our article on ways people trying to do good accidentally make things worse and how to avoid them.

    Case study: the Future of Life Institute open letter

    In March 2023, the Future of Life Institute published an open letter calling for a pause of at least six months on training any new models more “powerful” than OpenAI’s GPT-4 — which had been released about a week earlier. GPT-4 is a state-of-the-art language model that can be used through ChatGPT to produce novel and impressive text responses to a wide range of prompts.

    The letter attracted a lot of attention, perhaps in part because it was signed by prominent figures such as Elon Musk. While it didn’t immediately achieve its explicit aims — the labs didn’t commit to a pause — it drew a lot of attention and fostered public conversations about the risks of AI and the potential benefits of slowing down. (An earlier article titled “Let’s think about slowing down AI” — by Katja Grace of the research organisation AI Impacts — aimed to have a similar effect.)

    There’s no clear consensus on whether the FLI letter was on the right track. Some critics of the letter, for example, said that its advice would actually lead to worse outcomes overall if followed, because it would slow down AI safety research while many of the innovations that drive AI capabilities progress, such as chip development, would continue to race forward. Proponents of the letter pushed back on these claims. It does seem clear that the letter changed the public discourse around AI safety in a way that few other efforts have achieved, which is proof of concept for what impactful advocacy can accomplish.

    Third-party auditing and evaluation

    If regulatory measures are put in place to reduce the risks of advanced AI, some agencies and organisations — within government or outside — will need to audit companies and systems to make sure that regulations are being followed.

    One nonprofit, the Alignment Research Center, has been at the forefront of this kind of work. In addition to its research work, it has launched a program to evaluate the capabilities of advanced AI models. In early 2023, the organisation partnered with two leading AI labs, OpenAI and Anthropic, to evaluate the capabilities of the latest versions of their chatbot models prior to their release. They sought to determine in a controlled environment if the models had any potentially dangerous capabilities.

    The labs voluntarily cooperated with ARC for this project, but at some point in the future, these evaluations may be legally required.

    Governments often rely on third-party auditors as crucial players in regulation, because the government may lack the expertise (or the capacity to pay for the expertise) that the private sector has. There aren’t many openings in this type of role that we know of as of this writing, but such roles may end up playing a critical part in an effective AI governance framework.

    Other types of auditing and evaluation may be required as well. ARC has said it intends to develop methods to determine which models are appropriately aligned — that is, that they will behave as their users intend them to behave — prior to release.

    Governments may also want to employ auditors to evaluate the amount of compute that AI developers have access to, their information security practices, the uses of models, the data used to train models, and more.

    Acquiring the technical skills and knowledge to perform these types of evaluations, and joining organisations that will be tasked to perform them, could be the foundation of a highly impactful career. This kind of work will also likely have to be facilitated by people who can manage complex relationships across industry and government. Someone with experience in both sectors could have a lot to contribute.

    Some of these types of roles may have some overlap with work in AI technical safety research.

    One potential advantage of working in the private sector for AI governance work is that you may be significantly better paid than you would be in government.

    International work and coordination

    US-China

    For someone with the right fit, cooperation and coordination with China on the safe development of AI could be a particularly impactful approach within the broad AI governance career path.

    The Chinese government has been a major funder in the field of AI, and the country has giant tech companies that could potentially drive forward advances.

    Given tensions between the US and China, and the risks posed by advanced AI, there’s a lot to be gained from increasing trust, understanding, and coordination between the two countries. The world will likely be much better off if we can avoid a major conflict between great powers and if the most significant players in emerging technology can avoid exacerbating any global risks.

    We have a separate career review that goes into more depth on China-related AI safety and governance paths.

    Other governments and international organisations

    As we’ve said, we focus most on US policy and government roles. This is largely because we anticipate that the US is now and will likely continue to be the most pivotal actor when it comes to regulating AI, with a major caveat being China, as discussed in the previous section.

    But many people interested in working on this issue can’t or don’t want to work in US policy — perhaps because they live in another country and don’t intend on moving.

    Much of the advice above still applies to these people, because roles in AI governance research and advocacy can be done outside of the United States. And while we don’t think it’s generally as impactful in expectation as US government work, opportunities in other governments and international organisations can be complementary to the work to be done in the US.

    The United Kingdom, for instance, may present another strong opportunity for AI policy work that would complement US work. Top UK officials have expressed interest in developing policy around AI, perhaps even a new international agency, and reducing extreme risks. And the UK government announced in 2023 the creation of a new AI Foundation Model Taskforce, with the expressed intention to drive forward safety research.

    It’s possible that by taking significant steps to understand and regulate AI, the UK will encourage or inspire US officials to take similar steps by showing how it can work.

    And any relatively wealthy country could use portions of its budget to fund AI safety research. While a lot of the most important work likely needs to be done in the US, along with leading researchers and at labs with access to large amounts of compute, some lines of research may be productive even without these resources. Any significant advances in AI safety research, if communicated properly, could be used by researchers working on the most powerful models.

    Other countries might also develop liability standards for the creators of AI systems that could incentivise corporations to proceed more cautiously and judiciously before releasing models.

    The European Union has shown that its data protection standards — the General Data Protection Regulation (GDPR) — affect corporate behaviour well beyond its geographical boundaries. EU officials have also pushed forward on regulating AI, and some research has explored the hypothesis that the impact of the union’s AI regulations will extend far beyond the continent — the so-called “Brussels effect.”

    And at some point, we do expect there will be AI treaties and international regulations, just as the international community has created the International Atomic Energy Agency, the Biological Weapons Convention, and the Intergovernmental Panel on Climate Change to coordinate around and mitigate other global catastrophic threats.

    Efforts to coordinate governments around the world to understand and share information about threats posed by AI may end up being extremely important in some future scenarios.

    The Organisation for Economic Cooperation and Development is one place where such work might occur. So far, it has been the most prominent international actor working on AI policy and has created the AI Policy Observatory.

    Third-party countries may also be able to facilitate cooperation and reduce tensions between the United States and China, whether around AI or other potential flashpoints, should such an intervention become necessary.

    How policy gets made

    What does it actually take to make policy?

    In this section, we’ll discuss three phases of policy making: agenda setting, policy creation and development, and implementation. We’ll generally discuss these as aspects of making government policy, but they could also be applied to organisational policy. The following section will discuss the types of work that you could do to positively contribute to the broad field of AI governance.

    Agenda setting

    To enact and implement a programme of government policies that have a positive impact, you have to first ensure that the subject of potential legislation and regulation is on the agenda for policymakers.

    Agenda setting for policy involves identifying and defining problems, drawing attention to the problems and raising their salience (at least to the relevant people), and promoting potential approaches to solving them.

    For example, when politicians take office, they often enter on a platform of promises made to their constituents and their supporters about which policy agendas they want to pursue. Those agendas are formed through public discussion, media narratives, internal party politics, deliberative debate, interest group advocacy, and other forms of input. The agenda can be, to varying degrees, problem-specific (for example, a broad remit of “improving health care”) or more solution-specific (for example, aiming to create a single-payer health system).

    Issues don’t necessarily have to be unusually salient to get on the agenda. Policymakers or officials at various levels of government can prioritise solving certain problems or enacting specific proposals that aren’t the subject of national debate. In fact, sometimes making issues too salient, framing them in divisive ways, or allowing partisanship and political polarisation to shape the discussion, can make it harder to successfully put solutions on the agenda.

    What’s key for agenda setting as an approach to AI governance is that people with the relevant authority have to buy into the idea of prioritising the issue if they’re going to use their resources and political capital to focus on it.

    Policy creation and development

    While there does appear to be growing enthusiasm for a set or sets of policy proposals that could start to reduce the risk of an AI-related catastrophe, there’s still a lack of concrete policies that are ready to get off the ground.

    This is what the policy creation and development process is for. Researchers, advocates, civil servants, lawmakers and their staff, and others all can play a role in shaping the actual legislation and regulation that the government eventually enforces. In the corporate context, internal policy creation can serve similar functions, though it may be less enforceable unless backed up with contracts.

    Policy creation involves crafting solutions for the problem at hand with the policy tools available, usually requiring input from technical experts, legal experts, stakeholders, and the public. In countries with strong judicial review like the United States, special attention often has to be paid to make sure laws and regulations will hold up under the scrutiny of judges.

    Once concrete policy options are on the table, they must be put through the relevant decision-making process and negotiations. If the policy in question is a law that’s going to be passed, rather than a regulation, it needs to be crafted so that it will have enough support from lawmakers and other key decision makers to be enacted. This can happen in a variety of ways; it might be rolled into a larger piece of legislation that has wide support, or it may be rallied around and brought forward as its own package to be voted on individually.

    Policy creation can also be an iterative process, as policies are enacted, implemented, monitored, evaluated, and revised.

    For more details on the complex work of policy creation, we recommend Thomas Kalil’s article “Policy Entrepreneurship at the White House: Getting Things Done in Large Organizations.”

    Implementation

    Fundamentally, a policy is only an idea. For an idea to have an impact, someone actually has to carry it out. Any of the proposals for AI-related government policy — including standards and evaluations, licensing, and compute governance — will demand complex management and implementation.

    Policy implementation on this scale requires extensive planning, coordination in and out of government, communication, resource allocation, training and more — and every step in this process can be fraught with challenges. To rise to the occasion, any government implementing an AI policy regime will need talented individuals working at a high standard.

    The policy creation phase is critical and is probably the highest-priority work. But good ideas can be carried out badly, which is why policy implementation is also a key part of the AI governance agenda.

    Examples of people pursuing this path

    How to assess your fit and get started

    If you’re early on in your career, you should focus first on getting skills and other career capital to successfully contribute to the beneficial governance and regulation of AI.

    You can gain career capital for roles in many ways, and the best options will vary based on your route to impact. But broadly speaking, working in or studying fields such as politics, law, international relations, communications, and economics can all be beneficial for going into policy work.

    And expertise in AI itself, gained by studying and working in machine learning and technical AI safety, or potentially related fields such as computer hardware or information security, should also give you a big advantage.

    Testing your fit

    One general piece of career advice we give is to find relatively “cheap” tests to assess your fit for different paths. This could mean, for example, taking a policy internship, applying for a fellowship, doing a short bout of independent research as discussed above, or taking classes or courses on technical machine learning or computer engineering.

    It can also just involve talking to people who currently do a job you might consider having, and finding out what the day-to-day experience of the work is like and what skills are needed.

    All of these factors can be difficult to predict in advance. While we grouped “government work” into a single category above, that label covers a wide range of positions and types of occupations in many different departments and agencies. Finding the right fit within a broad category like “government work” can take a while, and it can depend on a lot of factors out of your control, such as the colleagues you happen to work closely with. That’s one reason it can be useful to build broadly valuable career capital, so you have the option to move around to find the right role for you.

    And don’t underestimate the value at some point of just applying to many relevant openings in the field and sector you’re aiming for and seeing what happens. You’ll likely face a lot of rejection with this strategy, but you’ll be able to better assess your qualifications for different kinds of roles after you see how far you get in the process, if you take enough chances. This can give you a lot more information than just guessing about whether you have the right experience.

    It can be useful to rule out certain types of work if you gather evidence that you’re not a strong fit for the role. For example, if you invest a lot of time and effort trying to get into reputable universities or nonprofit institutions to do AI governance research, but you get no promising offers and receive little encouragement even after applying widely, this might be a significant signal that you’re unlikely to thrive in that particular path.

    That wouldn’t mean you have nothing to contribute, but your comparative advantage may lie elsewhere.

    Read the section of our career guide on finding a job that fits you.

    Types of career capital

    For a field like AI governance, a mix of people with technical and policy expertise — and some people with both — is needed.

    While anyone involved in this field should work to maintain an evolving understanding of both the technical and policy details, you’ll probably start out focusing on either policy or technical skills to gain career capital.

    This section covers several types of career capital that can be useful for this path.

    Much of this advice is geared toward roles in the US, though it may be relevant in other contexts.

    Generally useful career capital

    The chapter of the 80,000 Hours career guide on career capital lists five key components that will be useful in any path: skills and knowledge, connections, credentials, character, and runway.

    For most jobs touching on policy, social skills, networking, and — for lack of a better word — political skill will be a huge asset. This can probably be learned to some extent, but some people may find they don’t have these kinds of skills and can’t or don’t want to acquire them. That’s OK — there are many other routes to having a fulfilling and impactful career, and there may be some roles within this path that demand these skills to a much lesser extent. That’s why testing your fit is important.

    Read the full section of the career guide on career capital.

    To gain skills in policy, you can pursue education in many relevant fields, such as political science, economics, and law.

    Many master’s programmes offer specific coursework on public policy, science and society, security studies, international relations, and other topics; having a graduate degree or law degree will give you a leg up for many positions.

    In the US, a master’s, a law degree, or a PhD is particularly useful if you want to climb the federal bureaucracy. Our article on US policy master’s degrees provides detailed information about how to assess the many options.

    Internships in DC are a promising route to evaluate your aptitude for policy work and to establish early career capital. Many academic institutions now offer a strategic “Semester in DC” programme, which can let you explore placements of your choice in Congress, federal agencies, or think tanks. The Virtual Student Federal Service (VSFS) also offers part-time, remote government internships, which students can take on alongside their academic commitments during the academic year to get an early feel for the intricacies of policy work.

    Once you have a suitable background, you can take entry-level positions within parts of the government where you can build a professional network and develop your skills. In the US, you can become a congressional staffer, or take a position at a relevant federal department, such as the Department of Commerce, Department of Energy, or the Department of State. Alternatively, you can gain experience in think tanks — a particularly promising option if you have a strong aptitude for research — and government contractors, private sector companies providing services to the government.

    In Washington, DC, the culture is fairly unique. There’s a big focus on networking, and there’s internal bureaucratic politics to navigate. We’ve also been told that while merit matters to a degree in US government work, it is not the primary determinant of who is most successful. People who think they wouldn’t feel able or comfortable being in this kind of environment for the long term should consider whether other paths would be best for them.

    If you find you can enjoy government and political work, impress your colleagues, and advance in your career, though, that’s a strong signal that you have the potential to make a real impact. Just being able to thrive in government work can be an extremely valuable comparative advantage.

    US citizenship

    Your citizenship may affect which opportunities are available to you. Many of the most important AI governance roles within the US — particularly in the executive branch and Congress — are only open to, or will at least heavily favour, American citizens. And the key national security roles that might be especially important will be restricted to US citizens, since citizenship is required to obtain a security clearance.

    Those who lack US citizenship may therefore want to avoid pursuing roles that require it, or else plan to move to the US and pursue the long process of becoming a citizen. For more details on immigration pathways and the types of policy work available to non-citizens, see this blog post on working in US policy as a foreign national. Also consider entering the annual diversity visa lottery if you’re from an eligible country; it’s low effort and gives you a chance of winning a US green card.

    Technical career capital

    Technical experience in machine learning, AI hardware, and related fields can be a valuable asset for an AI governance career. So it will be very helpful if you’ve studied a relevant subject area for an undergraduate or graduate degree, or a particularly productive course of independent study.

    We have a guide to technical AI safety careers, which explains how to learn the basics of machine learning.

    Spending some time familiarising yourself with the field of AI safety more broadly will also be valuable.

    Working in technical roles at an AI lab, or at other companies that use advanced AI systems and hardware, may also provide significant career capital for AI policy paths. (Read our career review discussing the pros and cons of working at a top AI lab.)

    We also have a separate career review on how becoming an expert in AI hardware could be very valuable in governance work.

    Many politicians and policymakers are generalists, as their roles require them to work in many different subject areas and on different types of problems. This means they’ll need to rely on expert knowledge when crafting and implementing policy on AI technology that they don’t fully understand. So if you can provide them this information, especially if you’re skilled at communicating it clearly, you can potentially fill influential roles.

    Some people who initially planned to pursue a technical AI safety career, but who have lost interest in that path or found more promising policy opportunities, may also be able to pivot effectively into a policy-oriented career.

    It is common for people with STEM backgrounds to enter and succeed in US policy careers. People with technical credentials that they may regard as fairly modest — such as computer science bachelor’s degrees or a master’s in machine learning — often find their knowledge is highly valued in Washington, DC.

    Most DC jobs don’t have specific degree requirements, so you don’t need a policy degree to work in DC. Roles specifically addressing science and technology policy are particularly well suited to people with technical backgrounds, and people hiring for these roles will value higher credentials like a master’s or, better yet, a terminal degree like a PhD or MD.

    There are many fellowship programmes specifically aiming to support people with STEM backgrounds to enter policy careers; some are listed below.

    This won’t be right for everybody — many people with technical skills may not have the disposition or skills needed for policy work. People in policy-related paths often benefit from strong writing and social skills, as well as from being comfortable navigating bureaucracies and working with people who hold very different motivations and worldviews.

    Other specific forms of career capital

    There are other ways to gain useful career capital that could be applied in this career path.

    • If you have or gain great communication skills as, say, a journalist or an activist, these skills could be very useful in advocacy and lobbying around AI governance.
      • Especially since advocacy around AI issues is still in its early stages, it will likely need people with experience advocating in other important cause areas to share their knowledge and skills.
    • Academics with relevant skill sets are sometimes brought into government for limited stints to serve as advisors in agencies such as the US Office of Science and Technology Policy. This isn’t necessarily the foundation of a longer career in government, though it can be, and it can give an academic deeper insight into policy and politics than they might otherwise gain.
    • You can work at an AI lab in non-technical roles, gaining a deeper familiarity with the technology, the business, and the culture. (Read our career review discussing the pros and cons of working at a top AI lab.)
    • You could work on political campaigns and get involved in party politics. This is one way to get involved in legislation, learn about policy, and help impactful lawmakers, and you can also potentially help shape the discourse around AI governance. Note, though, the previously mentioned risk of polarising public opinion around AI policy; entering party politics may also limit your potential for impact whenever the party you’ve joined doesn’t hold power.
    • You could even try to become an elected official yourself, though it’s obviously competitive. If you take this route, make sure you find trustworthy and highly informed advisors to rely on to build expertise in AI, since politicians have many other responsibilities and won’t be able to focus as much on any particular issue.
    • You can focus on developing specific skill sets that might be valuable in AI governance, such as information security, intelligence work, diplomacy with China, etc.
      • Other skills: Organisational, entrepreneurial, management, diplomatic, and bureaucratic skills will also likely prove highly valuable in this career path. There may be new auditing agencies to set up or policy regimes to implement. Someone who has worked at high levels in other high-stakes industries, started an influential company, or coordinated complicated negotiations between various groups, would bring important skills to the table.

    Want one-on-one advice on pursuing this path?

    Because this is one of our priority paths, if you think this path might be a great option for you, we’d be especially excited to advise you on next steps, one-on-one. We can help you consider your options, make connections with others working in the same field, and possibly even help you find jobs or funding opportunities.

    APPLY TO SPEAK WITH OUR TEAM

    Where can this kind of work be done?

    Since successful AI governance will require work from governments, industry, and other parties, there will be many potential jobs and places to work for people in this path. The landscape will likely shift over time, so if you’re just starting out on this path, the places that seem most important might be different by the time you’re pivoting to using your career capital to make progress on the issue.

    Within the US government, for instance, it’s not clear which bodies will be most impactful when it comes to AI policy in five years. It will likely depend on choices that are made in the meantime.

    That said, to help you get oriented, it seems useful to give our understanding of which parts of the government are generally influential in technology governance and most involved right now. Gaining AI-related experience in government now should still serve you well if you end up wanting to move into a more impactful AI-related role down the line, when the highest-impact areas to work in are clearer.

    We’ll also give our current sense of important actors outside government where you might be able to build career capital and potentially have a big impact.

    Note that this list has by far the most detail about places to work within the US government. We would like to expand it to include more options as we learn more. You can use this form to suggest additional options for us to include. (And the fact that an option isn’t on this list shouldn’t be taken to mean we recommend against it or even that it would necessarily be less impactful than the places listed.)

    We have more detail on some other options in separate (and older) career reviews.

    With that out of the way, here are some of the places where someone could do promising work or gain valuable career capital:

    In Congress, you can either work directly for lawmakers themselves or as staff on a legislative committee. Staff roles on the committees are generally more influential on legislation and more prestigious, but for that reason, they’re more competitive. If you don’t have that much experience, you could start out in an entry-level job staffing a lawmaker and then later try to transition to staffing a committee.

    Some people we’ve spoken to expect the following committees — and some of their subcommittees — in the House and Senate to be most impactful in the field of AI. You might aim to work on these committees or for lawmakers who have significant influence on these committees.

    House of Representatives

    • House Committee on Energy and Commerce
    • House Judiciary Committee
    • House Committee on Science, Space, and Technology
    • House Committee on Appropriations
    • House Armed Services Committee
    • House Committee on Foreign Affairs
    • House Permanent Select Committee on Intelligence

    Senate

    • Senate Committee on Commerce, Science, and Transportation
    • Senate Judiciary Committee
    • Senate Committee on Foreign Relations
    • Senate Committee on Homeland Security and Governmental Affairs
    • Senate Committee on Appropriations
    • Senate Committee on Armed Services
    • Senate Select Committee on Intelligence
    • Senate Committee on Energy and Natural Resources
    • Senate Committee on Banking, Housing, and Urban Affairs

    The Congressional Research Service, a nonpartisan legislative agency, also offers opportunities to conduct research that can impact policy design across all subjects.

    In general, we don’t recommend taking entry-level jobs within the executive branch for this path because it’s very difficult to progress your career through the bureaucracy at this level. It’s better to get a law degree or relevant master’s degree, which can give you the opportunity to start with more seniority.

    The influence of different agencies over AI regulation may shift over time, and there may even be entirely new agencies set up to regulate AI at some point, which could become highly influential. Whichever agency may be most influential in the future, it will be useful to have accrued career capital working effectively in government, creating a professional network, learning about day-to-day policy work, and deepening your knowledge of all things AI.

    We have a lot of uncertainty about this topic, but here are some of the agencies that may have significant influence on at least one key dimension of AI policy as of this writing:

    • Executive Office of the President (EOP)
      • Office of Management and Budget (OMB)
      • National Security Council (NSC)
      • Office of Science and Technology Policy (OSTP)
    • Department of State
      • Office of the Special Envoy for Critical and Emerging Technology (S/TECH)
      • Bureau of Cyberspace and Digital Policy (CDP)
      • Bureau of Arms Control, Verification and Compliance (AVC)
      • Office of Emerging Security Challenges (ESC)
    • Federal Trade Commission
    • Department of Defense (DOD)
      • Chief Digital and Artificial Intelligence Office (CDAO)
      • Emerging Capabilities Policy Office
      • Defense Advanced Research Projects Agency (DARPA)
      • Defense Technology Security Administration (DTSA)
    • Intelligence Community (IC)
      • Intelligence Advanced Research Projects Activity (IARPA)
      • National Security Agency (NSA)
      • Science advisor roles within the various agencies that make up the intelligence community
    • Department of Commerce (DOC)
      • The Bureau of Industry and Security (BIS)
      • The National Institute of Standards and Technology (NIST)
      • CHIPS Program Office
    • Department of Energy (DOE)
      • Artificial Intelligence and Technology Office (AITO)
      • Advanced Scientific Computing Research (ASCR) Program Office
    • National Science Foundation (NSF)
      • Directorate for Computer and Information Science and Engineering (CISE)
      • Directorate for Technology, Innovation and Partnerships (TIP)
    • Cybersecurity and Infrastructure Security Agency (CISA)

    Readers can find listings for roles in these departments and agencies at the federal government’s job board, USAJOBS; a more curated list of openings for potentially high-impact roles and career capital is on the 80,000 Hours job board.

    We do not currently recommend attempting to join the US government via the military if you are aiming for a career in AI policy. There are many levels of seniority to rise through and many people competing for places, and initially you have to spend all of your time doing work unrelated to AI. However, having military experience already can be valuable career capital for other important roles in government, particularly national security positions. This route looks more promising for military personnel who have attended an elite military academy, such as West Point, or for commissioned officers at rank O-3 or above.

    Policy fellowships are among the best entryways into policy work. They offer many benefits like first-hand policy experience, funding, training, mentoring, and networking. While many require an advanced degree, some are open to college graduates.

    Think tanks, research organisations, and nonprofits working on AI governance and related issues include:

    • Center for Security and Emerging Technology (CSET)
    • Center for a New American Security
    • RAND Corporation
    • The MITRE Corporation
    • Brookings Institution
    • Carnegie Endowment for International Peace
    • Center for Strategic and International Studies (CSIS)
    • Federation of American Scientists (FAS)
    • Alignment Research Center
    • Open Philanthropy
    • Institute for AI Policy and Strategy
    • Epoch AI
    • Centre for the Governance of AI (GovAI)
    • Center for AI Safety (CAIS)
    • Legal Priorities Project
    • Apollo Research
    • Centre for Long-Term Resilience
    • AI Impacts
    • Johns Hopkins Applied Physics Lab

    Top AI labs themselves can also be a place to do governance-relevant work. (Read our career review discussing the pros and cons of working at a top AI lab.)

    International organisations and institutions where relevant work can be done include:

    • Organisation for Economic Co-operation and Development (OECD)
    • International Atomic Energy Agency (IAEA)
    • International Telecommunication Union (ITU)
    • International Organization for Standardization (ISO)
    • European Union institutions (e.g., European Commission)
    • Simon Institute for Longterm Governance

    Our job board features opportunities in AI safety and policy:

      View all opportunities

      How this career path can go wrong

      Doing harm

      As we discuss in an article on accidental harm, there are many ways to set back a new field that you’re working in when you’re trying to do good, and this could mean your impact is negative rather than positive. (You may also want to read our article on harmful careers.)

      It seems likely there’s a lot of potential to inadvertently cause harm in the emerging field of AI governance. We discussed some possibilities in the section on advocacy and lobbying. Some other possibilities include:

      • Pushing for a given policy to the detriment of a superior policy
      • Communicating about the risks of AI in a way that ratchets up geopolitical tensions
      • Enacting a policy that has the opposite impact of its intended effect
      • Setting policy precedents that could be exploited by dangerous actors down the line
      • Funding projects in AI that turn out to be dangerous
      • Sending the message, implicitly or explicitly, that the risks are being managed when they aren’t, or that they’re lower than they in fact are
      • Suppressing technology that would actually be extremely beneficial for society

      The trouble is that we have to act with incomplete information, so it may never be very clear when or if people in AI governance are falling into these traps. Being aware that they are potential ways of causing harm will help you keep alert for these possibilities, though, and you should remain open to changing course if you find evidence that your actions may be damaging.

      And we recommend keeping in mind the following pieces of general guidance from our article on accidental harm:

      1. Ideally, eliminate courses of action that might have a big negative impact.
      2. Don’t be a naive optimizer.
      3. Have a degree of humility.
      4. Develop expertise, get trained, build a network, and benefit from your field’s accumulated wisdom.
      5. Follow cooperative norms.
      6. Match your capabilities to your project and influence.
      7. Avoid hard-to-reverse actions.

      Burning out

      We think this work is exceptionally pressing and valuable, so we encourage our readers who might have a strong personal fit for governance work to test it out. But going into government, in particular, can be difficult. Some people we’ve advised have gone into policy roles with the hope of having an impact, only to burn out and move on.

      At the same time, many policy practitioners find their work very meaningful, interesting, and varied.

      Some roles in government may be especially challenging for the following reasons:

      • Some roles can be very fast-paced, involving relatively high stress and long hours. This is particularly true in Congress and in senior executive branch positions, and much less so in think tanks or junior agency roles.
      • It can take a long time to get into positions with much autonomy or decision-making authority.
      • Progress on the issues you care about can be slow, and you often have to work on other priorities. Congressional staffers in particular typically have very broad policy portfolios.
      • Work within bureaucracies faces many limitations, which can be frustrating.
      • It can be demotivating to work with people who don’t share your values. Though note that policy can select for altruistic people — even if they have different beliefs about how to do good.
      • The work isn’t typically well paid relative to comparable positions outside of government.

      So we recommend speaking to people in the kinds of positions you might aim to have in order to get a sense of whether the career path would be right for you. And if you do choose to pursue it, look out for signs that the work may be having a negative effect on you and seek support from people who understand what you care about.

      If you end up wanting or needing to leave and transition into a new path, that’s not necessarily a loss or a reason for regret. You will likely make important connections and learn a lot of useful information and skills. This career capital can be useful as you transition into another role, perhaps pursuing a complementary approach to AI governance and coordination.

      What the increased attention on AI means

      We’ve been concerned about risks posed by AI for years. Based on the arguments that this technology could potentially cause a global catastrophe, and otherwise have a dramatic impact on future generations, we’ve advised many people to work to mitigate the risks.

      The arguments for the risk aren’t completely conclusive, in our view. But they are worth taking seriously. And given that few others in the world seemed to be devoting much time to figuring out how big the threat was or how to mitigate it (while progress in making AI systems more powerful was accelerating), we concluded it was worth ranking among our top priorities.

      Now that there’s increased attention on AI, some might conclude that it’s less neglected and thus less pressing to work on. However, the increased attention on AI also makes many interventions potentially more tractable than they had been previously, as policymakers and others are more open to the idea of crafting AI regulations.

      And while more attention is now being paid to AI, it’s not clear it will be focused on the most important risks. So there’s likely still a lot of room for important and pressing work positively shaping the development of AI policy.


      The post AI governance and coordination appeared first on 80,000 Hours.

      Information security in high-impact areas https://80000hours.org/career-reviews/information-security/ Mon, 19 Dec 2022 23:00:00 +0000 https://80000hours.org/?post_type=career_profile&p=74534 The post Information security in high-impact areas appeared first on 80,000 Hours.

      As the 2016 US presidential campaign was entering a fractious round of primaries, Hillary Clinton’s campaign chair, John Podesta, opened a disturbing email. The March 19 message warned that his Gmail password had been compromised and that he urgently needed to change it.

      The email was a lie. It wasn’t trying to help him protect his account — it was a phishing attack trying to gain illicit access.

      Podesta was suspicious, but the campaign’s IT team erroneously wrote that the email was “legitimate” and told him to change his password. The IT team provided a safe link for Podesta to use, but it seems he or one of his staffers instead clicked the link in the forged email. That link was controlled by Russian intelligence hackers known as “Fancy Bear,” and they used the access they gained to leak private campaign emails for public consumption in the final weeks of the 2016 race, embarrassing the Clinton team.

      While there are plausibly many critical factors in any close election, it’s possible that the controversy around the leaked emails played a non-trivial role in Clinton’s subsequent loss to Donald Trump. This would mean the failure of the campaign’s security team to prevent the hack — which might have come down to a mere typo — was extraordinarily consequential.

      These events vividly illustrate how careers in infosecurity at key organisations have the potential for outsized impact. Ideally, security professionals can develop robust practices that reduce the likelihood that a single slip-up will result in a significant breach. But this key component for the continued and unimpaired functioning of important organisations is often neglected.

      And the need for such protection stretches far beyond hackers trying to cause chaos in an election season. Information security is vital for safeguarding all kinds of critical organisations that might be targeted by criminal hackers or aggressive nation states, such as those storing extremely sensitive data about biological threats, nuclear weapons, or advanced artificial intelligence. Such attacks, if successful, could contribute to dangerous competitive dynamics (such as arms races) or directly lead to catastrophe.

      Some infosecurity roles involve managing and coordinating organisational policy, working on technical aspects of security, or a combination of both. We believe many such roles have thus far been underrated among those interested in effective altruism and reducing global catastrophic risks, and we’d be excited to see more altruistically motivated candidates move into this field.

      In a nutshell: Organisations with influence, financial power, and advanced technology are targeted by actors seeking to steal or abuse these assets. A career in information security is a promising avenue to support high-impact organisations by protecting against these attacks, which have the potential to disrupt an organisation’s mission or even increase existential risk.

      Recommended

      If you are well suited to this career, it may be the best way for you to have a social impact.

      Review status

      Based on a medium-depth investigation 

      Jeffrey Ladish contributed to this career review. We also thank Wim van der Schoot for his helpful comments.

      Why might information security be a high-impact career?

      Information security protects against events that hamper an organisation’s ability to fulfil its mission, such as attackers gaining access to confidential information. Information security specialists play a vital role in supporting the mission of organisations, similar to roles in operations.

      So if you want an impactful career, expertise in information security could enable you to make a significant positive difference in the world by helping important organisations and institutions be secure and successful.

      Compared to other roles in technology, an information security career can be a safe option because there may be less risk you could have a negative impact. In general, preventing attacks makes the world a safer place, even if it’s not clear whether potential victim organisations are providing net positive impact themselves. When a company is hacked, the harm can disproportionately fall on others — such as people who trusted the company with their private information.

      On the other hand, information security roles can sometimes have limited impact even when supporting high-impact areas, if the organisation does not genuinely value security. Many organisations have security functions primarily so that they can comply with regulations and compliance standards for doing business. These security standards have an important role, but when they are applied without care for achieving real security outcomes, it often leads to security theatre. It is not uncommon for security professionals to realise that they are having minimal impact on the security posture of their organisation.

      Protecting organisations working on the world’s most pressing problems

      Organisations working on pressing problems need cybersecurity expertise to protect their computer systems, financial resources, and confidential information from attack. In some ways, these challenges are similar to those faced by any other organisation; however, organisations working on major global problems are sometimes special targets for attacks.

      These organisations — such as those trying to monitor dangerous pathogens or coordinate to reduce global tensions — often work with international institutions, local political authorities, and governments. They may be targeted by state-sponsored attacks from countries with relevant geopolitical interests, either to steal information or to gain access to other high-value targets.

      Some high-impact organisations have confidential, sensitive discussions as part of their work, where a leak of information through a security compromise would damage trust and their ability to fulfil their mission. This is especially relevant when operating in countries with information control and censorship regimes.

      In addition to threats from state-sponsored attackers, cybercrime groups also raise serious risks.

      They seek financial gain through extortion and fraud — for example, by changing payment information, ransoming data, or threatening to leak confidential correspondence. Any organisation is vulnerable to these attacks. But organisations that handle particularly sensitive information or large value financial transactions, such as philanthropic grantmaking funds, are especially likely targets.

      In extreme cases, some organisations need help protecting information that could be harmful for the world if it was known more widely, such as harmful genetic sequences or powerful AI technology.

      The security of advanced AI systems

      While we think information security work can be valuable at many high-impact organisations, securing the most advanced AI systems may be among the highest-impact work you could do.

      We currently rank risks from artificial intelligence as the most pressing world problem because of the potential for future systems to cause catastrophes on a global scale. And to reduce the risk of an AI-related catastrophe, we’ve recommended some people work in the field of AI safety.

      But even if companies developing AI models use them responsibly and in accordance with high standards of safety, these efforts could be undermined if an outside actor steals the technology then deploys it irresponsibly. And because advanced AI models are expected to be powerful and extremely economically valuable, there are actors with both an interest in stealing them and a history of launching successful cyberattacks to steal technology.

      Because information security is a highly sought-after skill, some AI-related organisations have found it difficult to hire for these crucial roles. There could also be special demand for people who understand the particular information security challenges related to AI; working on this topic could have a high impact and make you a desirable job candidate.

      What does working in high-impact information security roles actually look like?


      “Defensive” cybersecurity roles — where the main job is to defend against attacks by outsiders — are most commonly in demand, especially in smaller nonprofit organisations and altruistically minded startups that don’t have the resources to hire more than a single security specialist.

      In some of these roles, you’ll find yourself doing a mix of hands-on technical work and communicating security risk. For example:

      • You will apply an understanding of how hackers work and how to stop them.
      • You will set up security systems, review IT configurations, and provide advice to the team about how to do their work securely.
      • You will test for bugs and vulnerabilities and design systems and policies that are robust to a range of possible attacks.

      Having security knowledge across a wide range of organisational IT topics will help you be most useful, such as laptop security, cloud administration, application security, and IT accounts (often called “identity and access management”).
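      To give a concrete flavour of the hands-on side of defensive work, here is a minimal Python sketch of the kind of small routine check such a role might involve: monitoring when the TLS certificates on an organisation's websites expire. This is an illustrative example rather than any particular organisation's setup; the domain names are placeholders, and real monitoring would usually rely on dedicated tooling rather than a hand-rolled script.

          import socket
          import ssl
          from datetime import datetime, timezone

          # Placeholder list of domains an organisation might want to keep an eye on.
          DOMAINS = ["example.org", "donate.example.org"]

          def cert_expiry(host: str, port: int = 443) -> datetime:
              """Return the expiry time of the TLS certificate served at host:port."""
              context = ssl.create_default_context()
              with socket.create_connection((host, port), timeout=10) as sock:
                  with context.wrap_socket(sock, server_hostname=host) as tls:
                      cert = tls.getpeercert()
              # The 'notAfter' field looks like 'Jun  1 12:00:00 2025 GMT'.
              expiry = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
              return expiry.replace(tzinfo=timezone.utc)

          if __name__ == "__main__":
              now = datetime.now(timezone.utc)
              for domain in DOMAINS:
                  days_left = (cert_expiry(domain) - now).days
                  status = "OK" if days_left > 30 else "RENEW SOON"
                  print(f"{domain}: certificate expires in {days_left} days [{status}]")

      Even a toy check like this reflects the day-to-day pattern of the work: anticipate a failure mode (an expired certificate quietly breaking or weakening a service) and put something in place that catches it before it matters.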

      You can have an outsized impact relative to another potential hire by working for a high-impact organisation where you understand their cause area. This is because information security can be challenging for organisations that are focussed on social impact, as industry standard cybersecurity advice is built to support profit motives and regulatory frameworks. Tailoring cybersecurity to how an organisation is trying to achieve its mission — and to prevent the harmful events the organisation cares most about — could greatly increase your effectiveness.

      If you’re interested in reducing existential risks, we think you should consider joining an organisation working in relevant areas such as artificial intelligence, as discussed above, or biorisk.

      An important part of this is bringing your team along for the journey. To do security well, you will regularly be asking people to change the way they work (likely adding hurdles!), so being an effective communicator can be as important as understanding the technical details. Helping everyone understand why certain security measures matter and how you’re balancing the costs and benefits is required for the team to accept additional effort or seemingly unnecessary steps.

      Ethical hacking roles, in which you’re tasked with breaking the defences of your clients or employers in order to ultimately improve them, are also important for cybersecurity — but only very large organisations have positions for these sorts of “offensive” (or “red teaming”) roles. More often, such roles are at cybersecurity services companies, which are paid to do short-term penetration testing exercises for clients.

      If you take such a role, it would be hard to focus on the security of impactful organisations in order to maximise your impact, because you often have little choice about which clients you’re supporting. But you could potentially build career capital in these kinds of positions before moving on to more impactful jobs.

      What kind of salaries do cybersecurity professionals earn?

      Professionals in information security roles such as cybersecurity earn high salaries. The US Bureau of Labor Statistics reported that the median salary for information security analysts was over $100,000 a year in 2021. In some key roles, such as those at top AI labs or major companies, the right candidates can make as much as $500,000 a year or more.

      While you’ll likely have a bigger impact supporting an organisation directly if the organisation is doing particularly important work, earning to give can still be a high-impact option, especially when you focus on donating to the most effective projects that could use the extra funds.

      How to assess your fit in advance?

      A great way to gauge your fit for information security is to try it out. There are many free online resources that can teach you the basics or give you hands-on experience with technical aspects of security.

      You can get an introduction through the Google Foundations of Cybersecurity course, which you can view for free if you select the ‘audit’ option on the bottom left of the enrollment pop-up. The full Google Cybersecurity Professional Certificate series is also worth watching to learn more on relevant technical topics.

      There are also plenty of other free resources and hands-on exercises that can help you get started.

      Having a knack for figuring out how computer systems work, or enjoying deploying a security mindset, is a good sign that you might be a fit — but neither is required to get started in information security.
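      If you want a quick, hands-on feel for the security mindset, one small self-contained exercise (our own illustration, not drawn from any particular course) is to see how quickly a short numeric PIN can be recovered from its hash by brute force:

          import hashlib
          import itertools
          import string
          import time

          # Toy demonstration: we "know" only the SHA-256 hash of a 4-digit PIN
          # and recover the PIN by trying every possibility.
          secret_pin = "7294"  # in a real attack, only the hash below would be known
          target_hash = hashlib.sha256(secret_pin.encode()).hexdigest()

          start = time.perf_counter()
          for candidate in itertools.product(string.digits, repeat=4):
              guess = "".join(candidate)
              if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
                  elapsed = time.perf_counter() - start
                  print(f"Recovered PIN {guess} in {elapsed:.3f} seconds")
                  break

      Thinking through what would make this attack harder, such as longer secrets, unique salts, and deliberately slow password-hashing functions like bcrypt or Argon2, is exactly the kind of reasoning that defensive security work involves.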

      How to enter infosecurity

      Entering with a degree

      The traditional way to enter this field is to study an IT discipline — such as computer science, software engineering, computer engineering, or a related field — in a university that has a good range of cybersecurity courses. However, you shouldn’t think of this as a prerequisite — there are many successful security practitioners without a formal degree. A degree often makes it easier to get entry-level jobs though, because many organisations still require it.

      Aside from cybersecurity-labelled courses, a good grasp of the fundamentals of computer systems is useful. This includes topics on computer networks, operating systems, and the basics of how computer hardware works. We suggest you consider at least one course in machine learning — while it’s difficult to predict technology changes, it’s plausible that AI technologies will dramatically change the security landscape.

      Consider finding a part-time job in an IT area while studying (see the next section), or doing an internship. This doesn’t need to be in an information security capacity; it can just be a role where you get to see first-hand how IT works. What you learn in university and what happens in practice are different, and understanding how IT is applied in the real world is vital.

      In the final year of your degree, look for entry-level cybersecurity positions — or other IT positions, if you need to.

      We think that jobs in cybersecurity defensive roles are ideal for gaining the broad range of skills that are most likely to be relevant to high-impact organisations. These have role titles such as Security Analyst, Security Operations, IT Security Officer, Security Engineer, or even Application Security Engineer. “Offensive” roles such as penetration testing can also provide valuable experience, but you may not get as broad an overview across all of the fronts relevant to enterprise security, or experience the challenges with implementation first-hand.

      Entering with (just) IT experience

      It is also possible to enter this field without a degree.

      If you have a good working knowledge of IT or some coding skills, a common path is to start in a junior role in internal IT support (or a similar service desk or help desk position) or in a software role. Many people working in cybersecurity today transitioned from other roles in IT. This can work well if you are especially interested in computers and are motivated to tinker with computer systems in your own time.

      A lot of what you’ll learn in an organisational IT role will be useful for cybersecurity roles. Solid IT management requires day-to-day security work, and understanding how the systems work and the challenges caused by security features is important if you’re going to be effective in cybersecurity.

      Do you need certifications?

      There are many cybersecurity certifications you can get. They aren’t mandatory, but having one may help you get into an entry-level job, especially if you don’t have a degree. The usefulness varies depending on how reputable the provider is, and the training and exams may be expensive.

      Some well-regarded certifications are CompTIA Security+, GIAC Security Essentials, OSCP Penetration Testing, and Certified Ethical Hacker. Vendor and technology certifications (e.g. Microsoft or AWS) generally aren’t valuable unless they’re specific to a job you’re pursuing.

      What sorts of places should you work?

      For your first few years, we recommend prioritising finding a role that will grow your knowledge and capability quickly. Some high-impact organisations are quite small, so they may not be well-placed to train you up early in your career, because they’ll likely have less capacity for mentorship in a range of technical areas.

      Find a job where you can learn good IT or cybersecurity management from others.

      The best places to work will already have relatively good security management practices and organisational maturity, so you can see what things are supposed to look like. You may also get a sense of the barriers that prevent organisations from having ideal security practices. Being able to ask questions from seasoned professionals and figure out what is actually feasible helps you learn more quickly than running up against all of the roadblocks yourself.

      Tech companies and financial organisations tend to have stronger reputations for cybersecurity than most other industries. Security specialist organisations — such as consultancies, managed security providers, or security software companies — can also be great places to learn. Government organisations specialising in cybersecurity can provide valuable experience that is hard to get outside of specific roles.

      Once you’re skilled up, the main thing to look for is a place that is doing important work. This might be a government agency, a nonprofit, or even a for-profit. We list some high-impact organisations here. Information security is a support function needed by all organisations to different degrees. How positive your impact is will depend a lot on whether you’re protecting an organisation that does important and pressing work. Below we discuss specific areas where we think additional people could do the most impactful work.

      Safeguarding information hazards

      Protecting information that could be damaging for the world if it was stolen may be especially impactful and could help decrease existential risk.

      Some information could increase the risk that humanity becomes extinct if it were leaked. Organisations focussed on reducing this risk may need to create or use this information as part of their work, so working on their security means you can have a directly positive impact. Examples include:

      • AI research labs, as discussed above, which may discover technologies that could harm humanity in the wrong hands
      • Biorisk researchers who work on sensitive materials, such as harmful genetic sequences that could be used to engineer pandemics
      • Research and grantmaking foundations that have access to sensitive information on the strategies and results of existential risk reduction organisations

      Contributing to safe AI

      Security skills are relevant for preventing an AI-related catastrophe. Security professionals can bring a security mindset and technical skills that can mitigate the risk of an advanced AI leading to disaster.

      If advanced AI ends up radically transforming the global economy, as some believe it might, the security landscape and nature of threats discussed in this article could change in unexpected ways. Understanding the cutting-edge uses of AI by both malicious hackers and infosecurity professionals could allow you to have a large impact by helping ensure the world is protected from major catastrophic threats.

      Working in governments

      Governments also hold information that could negatively impact geopolitical stability if stolen, such as weapons technology and diplomatic secrets. But it may be more difficult to have a positive impact through this path working in government, as established bureaucracies are often resistant to change, and this resistance may prevent you from having impact.

      That said, the scale of government also means that if you are able to make a positive change in impactful areas, it has the potential for far-reaching effects.

      People working in this area should regularly reassess whether their work is, or is on a good path to, making a meaningful difference. There may be better opportunities inside or outside government.

      You may have a positive impact by working in cybersecurity for your country’s national security agencies, either as a direct employee or as a government contractor. In addition, these roles may give you the experience and professional contacts needed to work effectively in national cybersecurity policy.

      If you have the opportunity, working to set and enforce sensible cybersecurity policy could be highly impactful.

      Want one-on-one advice on pursuing this path?

      If you think this path might be a great option for you, but you need help deciding or thinking about what to do next, our team might be able to help.

      We can help you compare options, make connections, and possibly even help you find jobs or funding opportunities.

      APPLY TO SPEAK WITH OUR TEAM


      The post Information security in high-impact areas appeared first on 80,000 Hours.

      Elie Hassenfeld on two big-picture critiques of GiveWell’s approach, and six lessons from their recent work https://80000hours.org/podcast/episodes/elie-hassenfeld-givewell-critiques-and-lessons/ Fri, 02 Jun 2023 21:52:25 +0000 https://80000hours.org/?post_type=podcast&p=82103 The post Elie Hassenfeld on two big-picture critiques of GiveWell’s approach, and six lessons from their recent work appeared first on 80,000 Hours.

      Spencer Greenberg on stopping valueless papers from getting into top journals https://80000hours.org/podcast/episodes/spencer-greenberg-stopping-valueless-papers/ Fri, 24 Mar 2023 04:01:41 +0000 https://80000hours.org/?post_type=podcast&p=81212 The post Spencer Greenberg on stopping valueless papers from getting into top journals appeared first on 80,000 Hours.

      Bear Braumoeller on the case that war isn’t in decline https://80000hours.org/podcast/episodes/bear-braumoeller-decline-of-war/ Tue, 08 Nov 2022 22:35:17 +0000 https://80000hours.org/?post_type=podcast&p=79838 The post Bear Braumoeller on the case that war isn’t in decline appeared first on 80,000 Hours.

      Sam Bankman-Fried on taking a high-risk approach to crypto and doing good https://80000hours.org/podcast/episodes/sam-bankman-fried-high-risk-approach-to-crypto-and-doing-good/ Thu, 14 Apr 2022 20:24:54 +0000 https://80000hours.org/?post_type=podcast&p=77185 The post Sam Bankman-Fried on taking a high-risk approach to crypto and doing good appeared first on 80,000 Hours.

      Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions https://80000hours.org/podcast/episodes/karen-levy-misaligned-incentives-in-global-development/ Mon, 21 Mar 2022 19:37:16 +0000 https://80000hours.org/?post_type=podcast&p=76963 The post Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions appeared first on 80,000 Hours.

      David Denkenberger on using paper mills and seaweed to feed everyone in a catastrophe, ft Sahil Shah https://80000hours.org/podcast/episodes/david-denkenberger-sahil-shah-using-paper-mills-and-seaweed-in-catastrophes/ Mon, 29 Nov 2021 21:21:59 +0000 https://80000hours.org/?post_type=podcast&p=75064 The post David Denkenberger on using paper mills and seaweed to feed everyone in a catastrophe, ft Sahil Shah appeared first on 80,000 Hours.
