Software Engineering (Topic archive) - 80,000 Hours
https://80000hours.org/topic/careers/sometimes-recommended-careers/software-engineering/

Software and tech skills
https://80000hours.org/skills/software-tech/


In a nutshell:

You can start building software and tech skills by trying out learning to code, and then doing some programming projects before applying for jobs. You can apply (as well as continue to develop) your software and tech skills by specialising in a related area, such as technical AI safety research, software engineering, or information security. You can also earn to give, and this in-demand skill set has great backup options.

Key facts on fit

There’s no single profile for being great at software and tech. It’s particularly cheap and easy to try out programming (which is a core part of this skill set) via classes online or in school, so we’d suggest doing that. But if you’re someone who enjoys thinking systematically, likes building things, or has good quantitative skills, those are all good signs.

Why are software and tech skills valuable?

By “software and tech” skills we basically mean what your grandma would call “being good at computers.”

When investigating the world’s most pressing problems, we’ve found that in many cases there are software-related bottlenecks.

For example, machine learning (ML) engineering is a core skill needed to contribute to AI safety technical research. Experts in information security are crucial to reducing the risks of engineered pandemics, as well as other risks. And software engineers are often needed by nonprofits, whether they’re working on reducing poverty or mitigating the risks of climate change.

Also, having skills in this area means you’ll likely be highly paid, offering excellent options to earn to give.

Moreover, basic programming skills can be extremely useful whatever you end up doing. You’ll find ways to automate tasks or analyse data throughout your career.

What does a career using software and tech skills involve?

A career using these skills typically involves three steps:

  1. Learn to code with a university course or self-study and then find positions where you can get great mentorship. (Read more about how to get started.)
  2. Optionally, specialise in a particular area, for example, by building skills in machine learning or information security.
  3. Apply your skills to helping solve a pressing global problem. (Read more about how to have an impact with software and tech.)

There’s no general answer about when to switch from a focus on learning to a focus on impact. Once you have some basic programming skills, you should look for positions that both further improve your skills and have an impact, and then decide based on which specific opportunities seem best at the time.

Software and tech skills can also be helpful in other, less directly related career paths, like being an expert in AI hardware (for which you’ll also need a specialist knowledge skill set) or founding a tech startup (for which you’ll also need an organisation-building skill set). Being good with computers is also often part of the skill set required for quantitative trading.

Programming also tends to come in handy in a wide variety of situations and jobs; there will be other great career paths using these skills that we haven’t written about.

How to evaluate your fit

How to predict your fit in advance

Some indications you’ll be a great fit include:

  • The ability to break down problems into logical parts and generate and test hypotheses
  • Willingness to try out many different solutions
  • High attention to detail
  • Broadly good quantitative skills

The best way to gauge your fit is just to try out programming.

It seems likely that the best software engineers are significantly better than average — and we’d guess this is also true for other technical roles using software. In particular, the very best software engineers are often people who spend huge amounts of time practising. This means that if you enjoy coding enough to want to do it both as a job and in your spare time, you’re likely to be a good fit.

How to tell if you’re on track

If you’re at university or in a bootcamp, it’s especially easy to tell if you’re on track. Good signs are that you’re succeeding at your assigned projects or getting good marks. An especially good sign is that you’re progressing faster than many of your peers.

In general, a great indicator of your success is that the people you work with most closely are enthusiastic about you and your work, especially if those people are themselves impressive!

If you’re building these skills at an organisation, signs you’re on track might include:

  • You get job offers at organisations you’d like to work for.
  • You’re promoted within your first two years.
  • You receive excellent performance reviews.
  • You’re asked to take on progressively more responsibility over time.
  • After some time, you’re becoming the person on your team that others turn to for help solving their problems, and people want you to teach them how to do things.
  • You’re building things that others are able to use successfully without your input.
  • Your manager / colleagues suggest you might take on more senior roles in the future.
  • You ask your superiors for their honest assessment of your fit and they are positive (e.g. they tell you you’re in the top 10% of people they can imagine doing your role).

How to get started building software and tech skills

Independently learning to code

As a complete beginner, you can write a Python program in less than 20 minutes that reminds you to take a break every two hours.
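For example, a minimal version might look something like the sketch below (the two-hour interval and the printed message are just placeholders you’d adapt):

```python
import time

def remind_to_take_breaks(interval_hours: float = 2.0) -> None:
    """Print a break reminder every `interval_hours` hours, forever."""
    while True:
        time.sleep(interval_hours * 60 * 60)  # wait for the interval to pass
        print("Time to take a break! Stand up and stretch.")

if __name__ == "__main__":
    remind_to_take_breaks()
```

You could then extend it to show a desktop notification or play a sound; small, concrete projects like this are a good way to make the basics stick.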

A great way to learn the very basics is by working through a free beginner course like Automate the Boring Stuff with Python by Al Sweigart.

Once you know the fundamentals, you could try taking an intro to computer science or intro to programming course. If you’re not at university, there are plenty of courses online, such as:

Don’t be discouraged if your code doesn’t work the first time — that’s what normally happens when people code!

A great next step is to try out doing a project with other people. This lets you test out writing programs in a team and working with larger codebases. It’s easy to come up with programming projects to do with friends — you can see some examples here.

Once you have some more experience, contributing to open-source projects in particular lets you work with very large existing codebases.

Attending a coding bootcamp

We’ve advised many people who managed to get junior software engineer jobs in less than a year by going to a bootcamp.

Coding bootcamps are focused on taking people with little knowledge of programming to as highly paid a job as possible within a couple of months. This is a great entry route if you don’t already have much background, though some claim the long-term prospects are not as good as if you’d studied at university (or independently in a particularly thorough way), because bootcamps don’t give you a deep understanding of computer science. Course Report is a great guide to choosing a bootcamp. Be careful to avoid low-quality bootcamps. To find out more, read our interview with an App Academy instructor.

Studying at university

Studying computer science at university (or another subject involving lots of programming) is a great option because it allows you to learn to code in an especially structured way while the opportunity cost of your time is lower.

It will also give you a better theoretical understanding of computing than a bootcamp (which can be useful for getting the most highly-paid and intellectually interesting jobs), a good network, some prestige, and a better understanding of lower-level languages like C. Having a computer science degree also makes it easier to get a US work visa if you’re not from the US.

Doing internships

If you can find internships, ideally at the sorts of organisations you might want to work for to build your skills (like big tech companies or startups), you’ll gain practical experience and the key skills you wouldn’t otherwise pick up from academic degrees (e.g. using version control systems and powerful text editors). Take a look at our list of companies with software and machine learning internships.

AI-assisted coding

As you’re getting started, it’s probably worth thinking about how developments in AI are going to affect programming in the future — and getting used to AI-assisted coding.

We’d recommend trying out GitHub Copilot, which writes code for you based on your comments and the surrounding code. Cursor is a popular AI-assisted code editor based on VS Code.

You can also just ask AI chat assistants for help. ChatGPT is particularly helpful (although only if you use the paid version).

We think it’s reasonably likely that many software and tech jobs in the future will be heavily based on using tools like these.

Building a specialty

Depending on how you’re going to use software and tech skills, it may be useful to build up your skills in a particular area. Here’s how to get started in a few relevant areas:

Machine learning

If you’re currently at university, it’s worth checking if you can take an ML course (even if you’re not majoring in computer science).

But if that’s not possible, here are some suggestions of places you might start if you want to self-study the basics:

PyTorch is a very common package used for implementing neural networks, and probably worth learning! When I was first learning about ML, the first neural network I built was a three-layer convolutional neural network with L2 regularisation, classifying handwritten digits from the MNIST database. This is a pretty common first challenge and a good way to learn PyTorch.
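If you’re curious what that looks like in practice, here’s a rough sketch of that kind of first project: a small convolutional network trained on MNIST in PyTorch, with L2 regularisation applied via the optimiser’s weight_decay parameter. The architecture and hyperparameters here are purely illustrative, not a recommendation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class SmallCNN(nn.Module):
    """A three-layer network: two convolutional layers plus a linear classifier."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7, 10)  # 28x28 inputs pooled twice -> 7x7

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        return self.fc(x.flatten(1))

def train_one_epoch():
    # Download MNIST and wrap it in a DataLoader that yields batches.
    data = datasets.MNIST("data", train=True, download=True,
                          transform=transforms.ToTensor())
    loader = DataLoader(data, batch_size=64, shuffle=True)

    model = SmallCNN()
    # weight_decay is how PyTorch optimisers apply L2 regularisation.
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

    for images, labels in loader:
        optimiser.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()   # backpropagation computes the gradients
        optimiser.step()  # update the weights

if __name__ == "__main__":
    train_one_epoch()
```

Getting something like this running end to end (downloading the data, training, and checking that the loss goes down) teaches you a surprising amount about the practical side of ML.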

You may also need to learn some maths.

The maths of deep learning relies heavily on calculus and linear algebra, and statistics can be useful too — although generally learning the maths is much less important than programming and basic, practical ML.
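To give a very simplified picture of where that maths shows up, here is a single gradient descent step for a linear model written with NumPy. The data, learning rate, and loss are made up purely for illustration:

```python
import numpy as np

# Made-up data: 4 examples with 3 features each, plus target values.
X = np.array([[0.1, 0.2, 0.3],
              [0.4, 0.5, 0.6],
              [0.7, 0.8, 0.9],
              [1.0, 1.1, 1.2]])
y = np.array([1.0, 2.0, 3.0, 4.0])

w = np.zeros(3)       # model weights, initialised to zero
learning_rate = 0.1

# Linear algebra: predictions are a matrix-vector product.
predictions = X @ w

# Calculus: the gradient of the mean squared error with respect to w,
# derived with the chain rule, is (2/n) * X^T (Xw - y).
gradient = (2 / len(y)) * X.T @ (predictions - y)

# One gradient descent step: nudge the weights downhill.
w = w - learning_rate * gradient
print(w)
```

Statistics comes in when you start asking how well a fitted model generalises beyond the data it was trained on.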

Again, if you’re still at university we’d generally recommend studying a quantitative degree (like maths, computer science, or engineering), most of which will cover all three areas pretty well.

If you want to actually get good at maths, you have to be solving problems. So, generally, the most useful thing that textbooks and online courses provide isn’t their explanations — it’s a set of exercises to try to solve in order, with some help if you get stuck.

If you want to self-study (especially if you don’t have a quantitative degree) here are some possible resources:

You might be able to find resources that cover all these areas, like Imperial College’s Mathematics for Machine Learning.

Information security

Most people get started in information security by studying computer science (or similar) at university and taking some cybersecurity courses — although this is by no means necessary to be successful.

You can get an introduction through the Google Foundations of Cybersecurity course. The full Google Cybersecurity Professional Certificate series is also worth taking to learn more about relevant technical topics.

For more, take a look at how to try out and get started in information security.

Data science

Data science combines programming with statistics.

One way to get started is by doing a bootcamp. Data science bootcamps are a similar deal to programming bootcamps, although they tend to mainly recruit science PhDs. If you’ve just done a science PhD and don’t want to continue with academia, this is a good option to consider (although you should probably consider other ways of using the software and tech skills first). Similarly, you can learn data analysis, statistics, and modelling by taking the right graduate programme.

Data scientists are well paid — offering the potential to earn to give — and have high job satisfaction.

To learn more, see our full career review of data science.

Depending on how you’re aiming to have an impact with these skills (see the next section), you may also need to develop other skills. We’ve written about some other relevant skill sets:

For more, see our full list of impactful skills.

Once you have these skills, how can you best apply them to have an impact?

The problem you work on is probably the biggest driver of your impact. The first step is to make an initial assessment of which problems you think are most pressing (even if you change your mind over time, you’ll need to decide where to start working).

Once you’ve done that, the next step is to identify the highest-potential ways to use software and tech skills to help solve your top problems.

There are five broad categories here:

While some of these options (like protecting dangerous information) will require building up some more specialised skills, being a great programmer will let you move around most of these categories relatively easily, and the earning to give options mean you’ll always have a pretty good backup plan.

Find jobs that use software and tech skills

See our curated list of job opportunities for this path.


    AI governance and coordination
    https://80000hours.org/career-reviews/ai-policy-and-strategy/

    As advancing AI capabilities gained widespread attention in late 2022 and 2023 — particularly after the release of OpenAI’s ChatGPT and Microsoft’s Bing chatbot — interest in governing and regulating these systems has grown. Discussion of the potential catastrophic risks of misaligned or uncontrollable AI also became more prominent, potentially opening up opportunities for policy that could mitigate the threats.

    There’s still a lot of uncertainty about which strategies for AI governance and coordination would be best, though parts of the community of people working on this subject may be coalescing around some ideas. See, for example, a list of potential policy ideas from Luke Muehlhauser of Open Philanthropy1 and a survey of expert opinion on best practices in AI safety and governance.

    But there’s no roadmap here. There’s plenty of room for debate about which policies and proposals are needed.

    We may not have found the best ideas yet in this space, and many of the existing policy ideas haven’t yet been developed into concrete, public proposals that could actually be implemented. We hope to see more people enter this field to develop expertise and skills that will contribute to risk-reducing AI governance and coordination.

    In a nutshell: Advanced AI systems could have massive impacts on humanity and potentially pose global catastrophic risks. There are opportunities in AI governance and coordination around these threats to shape how society responds to and prepares for the challenges posed by the technology.

    Given the high stakes, pursuing this career path could be many people’s highest-impact option. But they should be very careful not to accidentally exacerbate the threats rather than mitigate them.

    Recommended

    If you are well suited to this career, it may be the best way for you to have a social impact.

    Review status

    Based on an in-depth investigation 

    “What you’re doing has enormous potential and enormous danger.” — US President Joe Biden, to the leaders of the top AI labs

    Why this could be a high-impact career path

    Artificial intelligence has advanced rapidly. In 2022 and 2023, new language and image generation models gained widespread attention for their abilities, blowing past previous benchmarks the technology had met.

    And the applications of these models are still new; with more tweaking and integration into society, the existing AI systems may become easier to use and more ubiquitous in our lives.

    We don’t know where all these developments will lead us. There’s reason to be optimistic that AI will eventually help us solve many of the world’s problems, raising living standards and helping us build a more flourishing society.

    But there are also substantial risks. AI can be used for both good and ill. And we have concerns that the technology could, without the proper controls, accidentally lead to a major catastrophe — and perhaps even cause human extinction. We discuss the arguments that these risks exist in our in-depth problem profile.

    Because of these risks, we encourage people to work on finding ways to reduce these risks through technical research and engineering.

    But a range of strategies for risk reduction will likely be needed. Government policy and corporate governance interventions in particular may be necessary to ensure that AI is developed to be as broadly beneficial as possible and without unacceptable risk.

    Governance generally refers to the processes, structures, and systems that carry out decision making for organisations and societies at a high level. In the case of AI, we expect the governance structures that matter most to be national governments and organisations developing AI — as well as some international organisations and perhaps subnational governments.

    Some aims of AI governance work could include:

    • Preventing the deployment of any AI systems that pose a significant and direct threat of catastrophe
    • Mitigating the negative impact of AI technology on other catastrophic risks, such as nuclear weapons and biotechnology
    • Guiding the integration of AI technology into our society and economy with limited harms and to the advantage of all
    • Reducing the risk of an “AI arms race,” in which competition leads to technological advancement without the necessary safeguards and caution — between nations and between companies
    • Ensuring that those creating the most advanced AI models are incentivised to be cooperative and concerned about safety
    • Slowing down the development and deployment of new systems if the advancements are likely to outpace our ability to keep them safe and under control

    We need a community of experts who understand the intersection of modern AI systems and policy, as well as the severe threats and potential solutions. This field is still young, and many of the paths within it aren’t clear and are not sure to pan out. But there are relevant professional paths that will provide you with valuable career capital for a variety of positions and types of roles.

    The rest of this article explains what work in this area might involve, how you can develop career capital and test your fit, and where some promising places to work might be.

    What kinds of work might contribute to AI governance?

    What should governance-related work on AI actually involve? There are a variety of ways to pursue AI governance strategies, and as the field becomes more mature, the paths are likely to become clearer and more established.

    We generally don’t think people early in their careers should be aiming for a specific job that they think would be high-impact. They should instead aim to develop skills, experience, knowledge, judgement, networks, and credentials — what we call career capital — that they can later use when an opportunity to have a positive impact is ripe.

    This may involve following a pretty standard career trajectory, or it may involve bouncing around in different kinds of roles. Sometimes, you just have to apply to a bunch of different roles and test your fit for various types of work before you know what you’ll be good at. The main thing to keep in mind is that you should try to get excellent at something for which you have strong personal fit and that will let you contribute to solving pressing problems.

    In the AI governance and coordination space, we see at least six large categories of work that we expect to be important:

    • Government work
    • Research on AI policy and strategy
    • Industry work
    • Advocacy and lobbying
    • Third-party auditing and evaluation
    • International work and coordination

    There aren’t necessarily openings in all these categories at the moment for careers in AI governance, but they represent a range of sectors in which impactful work may potentially be done in the coming years and decades. Thinking about the different skills and forms of career capital that will be useful for the categories of work you could see yourself doing in the future can help you figure out what your immediate next steps should be. (We discuss how to assess your fit and enter this field below.)

    You may want to — and indeed it may be advantageous to — move between these different categories of work at different points in your career. You can also test out your fit for various roles by taking internships, fellowships, entry-level jobs, temporary placements, or even doing independent research, all of which can serve as career capital for a range of paths.

    We have also reviewed career paths in AI technical safety research and engineering and information security, which may be crucial to reducing risks from AI, and which may play a significant role in an effective governance agenda. People serious about pursuing a career in AI governance should familiarise themselves with these fields as well.

    Government work

    Taking a role within government could lead to playing an important role in the development, enactment, and enforcement of AI policy.

    Note that we generally expect that the US federal government will be the most significant player in AI governance for the foreseeable future. This is because of its global influence and its jurisdiction over much of the AI industry, including the top three AI labs training state-of-the-art, general-purpose models (Anthropic, OpenAI, and Google DeepMind) and key parts of the chip supply chain. Much of this article focuses on US policy and government.2

    But other governments and international institutions may also end up having important roles to play in certain scenarios. For example, the UK government, the European Union, China, and potentially others, may all present opportunities for impactful AI governance work. Some US state-level governments, such as California, may also offer opportunities for impact and gaining career capital.

    What would this work involve? Sections below discuss how to enter US policy work and which areas of government you might aim for.

    But at the broadest level, people interested in positively shaping AI policy should aim to gain the skills and experience to work in areas of government with some connection to AI or emerging technology policy.

    This can include roles in: legislative branches, domestic regulation, national security, diplomacy, appropriations and budgeting, and other policy areas.

    If you can get a role out of the gate that is already working directly on this issue, such as a staff position with a lawmaker who is focused on AI, that could be a great opportunity.

    Otherwise, you should seek to learn as much as you can about how policy works and which government roles might allow you to have the most impact, while establishing yourself as someone who’s knowledgeable about the AI policy landscape. Having almost any significant government role that touches on some aspect of AI, or having some impressive AI-related credential, may be enough to get you quite far.

    One way to advance your career in government on a specific topic is what some call “getting visibility” — that is, using your position to learn about the landscape and connect with the actors and institutions that affect the policy area you care about. You’ll want to be invited to meetings with other officials and agencies, be asked for input on decisions, and engage socially with others who work in the policy area. If you can establish yourself as a well-regarded expert on an important but neglected aspect of the issue, you’ll have a better shot at being included in key discussions and events.

    Career trajectories within government can be broken down roughly as follows:

    • Standard government track: This involves entering government at a relatively low level and building up your career capital on the inside by climbing the seniority ladder. For the highest impact, you’d ideally end up reaching senior levels by sticking around, gaining skills and experience, and getting promoted. You may move between agencies, departments, or branches.
    • Specialisation career capital: You can also move in and out of government throughout your career. People on this trajectory will also work at nonprofits, think tanks, industry labs, political parties, academia, and other organisations. But they will primarily focus on becoming an expert in a topic — such as AI. It can be harder to get seniority this way, but the value of expertise and experience can sometimes outweigh seniority.
    • Direct-impact work: Some people move into government jobs without a longer plan to build career capital because they see an opportunity for direct, immediate impact. This might look like getting tapped to lead an important commission or providing valuable input on an urgent project. We don’t generally recommend planning on this kind of strategy for your career, but it’s good to be aware of it as an opportunity that might be worth taking at some point.

    Research on AI policy and strategy

    There’s still a lot of research to be done on the most important avenues for AI governance. While there are some promising proposals for a system of regulatory and strategic steps that could help reduce the risk of an AI catastrophe, there aren’t many concrete and publicly available policy proposals ready for adoption.

    The world needs more concrete proposals for AI policies that would really start to tackle the biggest threats; developing such policies, and deepening our understanding of the strategic needs of the AI governance space, should be high priorities.

    Other relevant research could involve surveys of public opinion that could inform communication strategies, legal research about the feasibility of proposed policies, technical research on issues like compute governance, and even higher-level theoretical research into questions about the societal implications of advanced AI. Some research, such as that done by Epoch AI, focuses on forecasting the future course of AI developments, which can influence AI governance decisions.

    However, several experts we’ve talked to warn that a lot of research on AI governance may prove to be useless, so it’s important to be reflective and seek input from others in the field — both from experienced policy practitioners and technical experts — about what kind of contribution you can make. We list several research organisations below that we think would be good to work at in order to pursue promising research on this topic.

    One potentially useful approach for testing your fit for this work — especially when starting out in this research — is to write up analyses and responses to existing work on AI policy or investigate some questions in this area that haven’t been the subject of much attention. You can then share your work widely, send it out for feedback from people in the field, and evaluate how much you enjoy the work and whether you might productively contribute to this research longer term.

    But it’s possible to spend too long testing your fit without making much progress, and some people find that they’re best able to contribute when they’re working on a team. So don’t overweight or over-invest in independent work, especially if there are few signs it’s working out especially well for you. This kind of project can make sense for maybe a month or a bit longer — but it’s unlikely to be a good idea to spend much more than that without meaningful funding or some really encouraging feedback from people working in the field.

    If you have the experience to be hired as a researcher, work on AI governance can be done in academia, nonprofit organisations, and think tanks. Some government agencies and committees, too, perform valuable research.

    Note that universities and academia have their own priorities and incentives that often aren’t aligned with producing the most impactful work. If you’re already an established researcher with tenure, it may be highly valuable to pivot into work on AI governance — this position may even give you a credible platform from which to advocate for important ideas.

    But if you’re just starting out in a research career and want to focus on this issue, you should carefully consider whether your work will be best supported inside or outside of academia. For example, if you know of a specific programme with particular mentors who will help you pursue answers to critical questions in this field, it might be worth doing. We’re less inclined to encourage people to pursue generic academic-track roles with the vague hope that one day they can do important research on this topic.

    Advanced degrees in policy or relevant technical fields may well be valuable, though — see more discussion of this in the section on how to assess your fit and get started.

    Industry work

    While government policy is likely to play a key role in coordinating various actors interested in reducing the risks from advanced AI, internal policy and corporate governance at the largest AI labs themselves is also a powerful tool. We think people who care about reducing risk can potentially do valuable work internally at industry labs. (Read our career review of non-technical roles at AI labs.)

    At the highest level, deciding who sits on corporate boards, what kind of influence those boards have, and to what extent the organisation is structured to seek profit and shareholder value as opposed to other aims, can end up having a major impact on the direction a company takes. If you might be able to get a leadership role at a company developing frontier AI models, such as a management position or a seat on the board, it could potentially be a very impactful position.

    If you’re able to join a policy team at a major lab, you can model threats and help develop, implement, and evaluate promising proposals internally to reduce risks. And you can build consensus around best practices, such as strong information security policies, using outside evaluators to find vulnerabilities and dangerous behaviours in AI systems (red teaming), and testing out the latest techniques from the field of AI safety.

    And if, as we expect, AI labs face increasing government oversight, industry governance and policy work can ensure compliance with any relevant laws and regulations that get put in place. Interfacing with government actors and facilitating coordination over risk reduction approaches could be impactful work.

    In general, the more cooperative AI labs are with each other3 and outside groups seeking to minimise catastrophic risks from AI, the better. And this doesn’t seem to be an outlandish hope — many industry leaders have expressed concern about extinction risks and have even called for regulation of the frontier technology they’re creating.

    That said, we can expect this cooperation to take substantial work — it would be surprising if the best policies for reducing risks were totally uncontroversial in industry, since labs also face huge commercial incentives to build more powerful systems, which can carry more risk. The more everyone’s able to communicate and align their incentives, the better things seem likely to go.

    Advocacy and lobbying

    People outside of government or AI labs can influence the shape of public policy and corporate governance via advocacy and lobbying.

    As of this writing, there has not yet been a large public movement in favour of regulating or otherwise trying to reduce risks from AI, so there aren’t many openings that we know about in this category. But we expect growing interest in this area to open up new opportunities to press for political action and policy changes at AI labs, and it could make sense to start building career capital and testing your fit now for different kinds of roles that would fall into this category down the line.

    If you believe AI labs may be disposed to advocate for generally beneficial regulation, you might want to try to work for them, or become a lobbyist for the industry as a whole, to push the government to adopt specific policies. It’s plausible that AI labs will have by far the best understanding of the underlying technology, as well as the risks, failure modes, and safest paths forward.

    On the other hand, it could be the case that AI labs have too much of a vested interest in the shape of regulations to reliably advocate for broadly beneficial policies. If that’s right, it may be better to join or create advocacy organisations independent of the industry — supported by donations or philanthropic foundations — that can take stances opposed to the labs’ commercial interests.

    For example, it could be the case that the best approach from a totally impartial perspective would be at some point to deliberately slow down or halt the development of increasingly powerful AI models. Advocates could make this demand of the labs themselves or of the government to slow down AI progress. It may be difficult to come to this conclusion or advocate for it if you have strong connections to the companies creating these systems.

    It’s also possible that the best outcomes will be achieved with a balance of industry lobbyists and outside lobbyists and advocates making the case for their preferred policies — as both bring important perspectives.

    We expect there will be increasing public interest in AI policy as the technological advancements have ripple effects in the economy and wider society. And if there’s increasing awareness of the impact of AI on people’s lives, the risks the technology poses may become more salient to the public, which will give policymakers strong incentives to take the problem seriously. It may also bring new allies into the cause of ensuring that the development of advanced AI goes well.

    Advocacy can also:

    • Highlight neglected but promising approaches to governance that have been uncovered in research
    • Facilitate the work of policymakers by showcasing the public’s support for governance measures
    • Build bridges between researchers, policymakers, the media, and the public by communicating complicated ideas in an accessible way to many audiences
    • Pressure corporations themselves to proceed more cautiously
    • Change public sentiment around AI and discourage irresponsible behaviour by individual actors, such as the spreading of powerful open-source models

    However, note that advocacy can sometimes backfire. Predicting how information will be received is far from straightforward. Drawing attention to a cause area can sometimes trigger a backlash; presenting problems with certain styles of rhetoric can alienate people or polarise public opinion; spreading misleading or mistaken messages can discredit yourself and fellow advocates. It’s important that you are aware of the risks, consult with others (particularly those who you respect but might disagree with tactically), and commit to educating yourself deeply about the topic before expounding on it in public.

    You can read more in the section about doing harm below. We also recommend reading our article on ways people trying to do good accidentally make things worse and how to avoid them.

    Case study: the Future of Life Institute open letter

    In March 2023, the Future of Life Institute published an open letter calling for a pause of at least six months on training any new models more “powerful” than OpenAI’s GPT-4 — which had been released about a week earlier. GPT-4 is a state-of-the-art language model that can be used through ChatGPT to produce novel and impressive text responses to a wide range of prompts.

    The letter attracted a lot of attention, perhaps in part because it was signed by prominent figures such as Elon Musk. While it didn’t immediately achieve its explicit aims — the labs didn’t commit to a pause — it drew a lot of attention and fostered public conversations about the risks of AI and the potential benefits of slowing down. (An earlier article titled “Let’s think about slowing down AI” — by Katja Grace of the research organisation AI Impacts — aimed to have a similar effect.)

    There’s no clear consensus on whether the FLI letter was on the right track. Some critics of the letter, for example, said that its advice would actually lead to worse outcomes overall if followed, because it would slow down AI safety research while many of the innovations that drive AI capabilities progress, such as chip development, would continue to race forward. Proponents of the letter pushed back on these claims.4 It does seem clear that the letter changed the public discourse around AI safety in a way that few other efforts have achieved, which is proof of concept for what impactful advocacy can accomplish.

    Third-party auditing and evaluation

    If regulatory measures are put in place to reduce the risks of advanced AI, some agencies and organisations — within government or outside — will need to audit companies and systems to make sure that regulations are being followed.

    One nonprofit, the Alignment Research Center, has been at the forefront of this kind of work.5 In addition to its research work, it has launched a program to evaluate the capabilities of advanced AI models. In early 2023, the organisation partnered with two leading AI labs, OpenAI and Anthropic, to evaluate the capabilities of the latest versions of their chatbot models prior to their release. They sought to determine in a controlled environment if the models had any potentially dangerous capabilities.

    The labs voluntarily cooperated with ARC for this project, but at some point in the future, these evaluations may be legally required.

    Governments often rely on third-party auditors as crucial players in regulation, because the government may lack the expertise (or the capacity to pay for the expertise) that the private sector has. There aren’t many such roles available that we know of as of this writing, but they may end up playing a critical part in an effective AI governance framework.

    Other types of auditing and evaluation may be required as well. ARC has said it intends to develop methods to determine which models are appropriately aligned — that is, that they will behave as their users intend them to behave — prior to release.

    Governments may also want to employ auditors to evaluate the amount of compute that AI developers have access to, their information security practices, the uses of models, the data used to train models, and more.

    Acquiring the technical skills and knowledge to perform these types of evaluations, and joining organisations that will be tasked to perform them, could be the foundation of a highly impactful career. This kind of work will also likely have to be facilitated by people who can manage complex relationships across industry and government. Someone with experience in both sectors could have a lot to contribute.

    Some of these types of roles may have some overlap with work in AI technical safety research.

    One potential advantage of working in the private sector on AI governance is that you may be significantly better paid than you would be in government.

    International work and coordination

    US-China

    For someone with the right fit, cooperation and coordination with China on the safe development of AI could be a particularly impactful approach within the broad AI governance career path.

    The Chinese government has been a major funder in the field of AI, and the country has giant tech companies that could potentially drive forward advances.

    Given tensions between the US and China, and the risks posed by advanced AI, there’s a lot to be gained from increasing trust, understanding, and coordination between the two countries. The world will likely be much better off if we can avoid a major conflict between great powers and if the most significant players in emerging technology can avoid exacerbating any global risks.

    We have a separate career review that goes into more depth on China-related AI safety and governance paths.

    Other governments and international organisations

    As we’ve said, we focus most on US policy and government roles. This is largely because we anticipate that the US is now and will likely continue to be the most pivotal actor when it comes to regulating AI, with a major caveat being China, as discussed in the previous section.

    But many people interested in working on this issue can’t or don’t want to work in US policy — perhaps because they live in another country and don’t intend on moving.

    Much of the advice above still applies to these people, because roles in AI governance research and advocacy can be done outside of the United States.6 And while we don’t think it’s generally as impactful in expectation as US government work, opportunities in other governments and international organisations can be complementary to the work to be done in the US.

    The United Kingdom, for instance, may present another strong opportunity for AI policy work that would complement US work. Top UK officials have expressed interest in developing policy around AI, perhaps even a new international agency, and reducing extreme risks. And the UK government announced in 2023 the creation of a new AI Foundation Model Taskforce, with the expressed intention to drive forward safety research.

    It’s possible that by taking significant steps to understand and regulate AI, the UK will encourage or inspire US officials to take similar steps by showing how it can work.

    And any relatively wealthy country could use portions of its budget to fund AI safety research. While a lot of the most important work likely needs to be done in the US, along with leading researchers and at labs with access to large amounts of compute, some lines of research may be productive even without these resources. Any significant advances in AI safety research, if communicated properly, could be used by researchers working on the most powerful models.

    Other countries might also develop liability standards for the creators of AI systems that could incentivise corporations to proceed more cautiously and judiciously before releasing models.

    The European Union has shown that its data protection standards — the General Data Protection Regulation (GDPR) — affect corporate behaviour well beyond its geographical boundaries. EU officials have also pushed forward on regulating AI, and some research has explored the hypothesis that the impact of the union’s AI regulations will extend far beyond the continent — the so-called “Brussels effect.”

    And at some point, we do expect there will be AI treaties and international regulations, just as the international community has created the International Atomic Energy Agency, the Biological Weapons Convention, and the Intergovernmental Panel on Climate Change to coordinate around and mitigate other global catastrophic threats.

    Efforts to coordinate governments around the world to understand and share information about threats posed by AI may end up being extremely important in some future scenarios.

    The Organisation for Economic Co-operation and Development (OECD) is one place where such work might occur. So far, it has been the most prominent international actor working on AI policy and has created the AI Policy Observatory.

    Third-party countries may also be able to facilitate cooperation and reduce tensions between the United States and China, whether around AI or other potential flashpoints, should such an intervention become necessary.

    How policy gets made

    What does it actually take to make policy?

    In this section, we’ll discuss three phases of policy making: agenda setting, policy creation and development, and implementation. We’ll generally discuss these as aspects of making government policy, but they could also be applied to organisational policy. The following section will discuss the types of work that you could do to positively contribute to the broad field of AI governance.

    Agenda setting

    To enact and implement a programme of government policies that have a positive impact, you have to first ensure that the subject of potential legislation and regulation is on the agenda for policymakers.

    Agenda setting for policy involves identifying and defining problems, drawing attention to the problems and raising their salience (at least to the relevant people), and promoting potential approaches to solving them.

    For example, when politicians take office, they often enter on a platform of promises made to their constituents and their supporters about which policy agendas they want to pursue. Those agendas are formed through public discussion, media narratives, internal party politics, deliberative debate, interest group advocacy, and other forms of input. The agenda can be, to varying degrees, problem-specific — having a broad remit of “improving health care.” Or it could be more solution-specific — aiming to create, for example, a single-payer health system.

    Issues don’t necessarily have to be unusually salient to get on the agenda. Policymakers or officials at various levels of government can prioritise solving certain problems or enacting specific proposals that aren’t the subject of national debate. In fact, sometimes making issues too salient, framing them in divisive ways, or allowing partisanship and political polarisation to shape the discussion, can make it harder to successfully put solutions on the agenda.

    What’s key for agenda setting as an approach to AI governance is that people with the relevant authority have to buy into the idea of prioritising the issue if they’re going to use their resources and political capital to focus on it.

    Policy creation and development

    While there does appear to be growing enthusiasm for a set or sets of policy proposals that could start to reduce the risk of an AI-related catastrophe, there’s still a lack of concrete policies that are ready to get off the ground.

    This is what the policy creation and development process is for. Researchers, advocates, civil servants, lawmakers and their staff, and others all can play a role in shaping the actual legislation and regulation that the government eventually enforces. In the corporate context, internal policy creation can serve similar functions, though it may be less enforceable unless backed up with contracts.

    Policy creation involves crafting solutions for the problem at hand with the policy tools available, usually requiring input from technical experts, legal experts, stakeholders, and the public. In countries with strong judicial review like the United States, special attention often has to be paid to make sure laws and regulations will hold up under the scrutiny of judges.

    Once concrete policy options are on the table, they must be put through the relevant decision-making process and negotiations. If the policy in question is a law that’s going to be passed, rather than a regulation, it needs to be crafted so that it will have enough support from lawmakers and other key decision makers to be enacted. This can happen in a variety of ways; it might be rolled into a larger piece of legislation that has wide support, or it may be rallied around and brought forward as its own package to be voted on individually.

    Policy creation can also be an iterative process, as policies are enacted, implemented, monitored, evaluated, and revised.

    For more details on the complex work of policy creation, we recommend Thomas Kalil’s article “Policy Entrepreneurship in the White House: Getting Things Done in Large Organisations.”

    Implementation

    Fundamentally, a policy is only an idea. For an idea to have an impact, someone actually has to carry it out. Any of the proposals for AI-related government policy — including standards and evaluations, licensing, and compute governance — will demand complex management and implementation.

    Policy implementation on this scale requires extensive planning, coordination in and out of government, communication, resource allocation, training and more — and every step in this process can be fraught with challenges. To rise to the occasion, any government implementing an AI policy regime will need talented individuals working at a high standard.

    The policy creation phase is critical and is probably the highest-priority work. But good ideas can be carried out badly, which is why policy implementation is also a key part of the AI governance agenda.

    Examples of people pursuing this path

    How to assess your fit and get started

    If you’re early on in your career, you should focus first on getting skills and other career capital to successfully contribute to the beneficial governance and regulation of AI.

    You can gain career capital for roles in many ways, and the best options will vary based on your route to impact. But broadly speaking, working in or studying fields such as politics, law, international relations, communications, and economics can all be beneficial for going into policy work.

    And expertise in AI itself, gained by studying and working in machine learning and technical AI safety, or potentially related fields such as computer hardware or information security, should also give you a big advantage.

    Testing your fit

    One general piece of career advice we give is to find relatively “cheap” tests to assess your fit for different paths. This could mean, for example, taking a policy internship, applying for a fellowship, doing a short bout of independent research as discussed above, or taking classes or courses on technical machine learning or computer engineering.

    It can also just involve talking to people who currently do a job you might consider pursuing, and finding out what the day-to-day experience of the work is like and what skills are needed.

    All of these factors can be difficult to predict in advance. While we grouped “government work” into a single category above, that label covers a wide range of positions and types of occupations in many different departments and agencies. Finding the right fit within a broad category like “government work” can take a while, and it can depend on a lot of factors out of your control, such as the colleagues you happen to work closely with. That’s one reason it can be useful to build broadly valuable career capital, so you have the option to move around to find the right role for you.

    And don’t underestimate the value at some point of just applying to many relevant openings in the field and sector you’re aiming for and seeing what happens. You’ll likely face a lot of rejection with this strategy, but if you take enough chances, seeing how far you get in the process will help you assess your qualifications for different kinds of roles. This can give you a lot more information than just guessing about whether you have the right experience.

    It can be useful to rule out certain types of work if you gather evidence that you’re not a strong fit for the role. For example, if you invest a lot of time and effort trying to get into reputable universities or nonprofit institutions to do AI governance research, but you get no promising offers and receive little encouragement even after applying widely, this might be a significant signal that you’re unlikely to thrive in that particular path.

    That wouldn’t mean you have nothing to contribute, but your comparative advantage may lie elsewhere.

    Read the section of our career guide on finding a job that fits you.

    Types of career capital

    For a field like AI governance, a mix of people with technical and policy expertise — and some people with both — is needed.

    While anyone involved in this field should work to maintain an evolving understanding of both the technical and policy details, you’ll probably start out focusing on either policy or technical skills to gain career capital.

    This section covers:

    Much of this advice is geared toward roles in the US, though it may be relevant in other contexts.

    Generally useful career capital

    The chapter of the 80,000 Hours career guide on career capital lists five key components that will be useful in any path: skills and knowledge, connections, credentials, character, and runway.

    For most jobs touching on policy, social skills, networking, and — for lack of a better word — political skill will be a huge asset. This can probably be learned to some extent, but some people may find they don’t have these kinds of skills and can’t or don’t want to acquire them. That’s OK — there are many other routes to having a fulfilling and impactful career, and there may be some roles within this path that demand these skills to a much lesser extent. That’s why testing your fit is important.

    Read the full section of the career guide on career capital.

    To gain skills in policy, you can pursue education in many relevant fields, such as political science, economics, and law.

    Many master’s programmes offer specific coursework on public policy, science and society, security studies, international relations, and other topics; having a graduate degree or law degree will give you a leg up for many positions.

    In the US, a master’s, a law degree, or a PhD is particularly useful if you want to climb the federal bureaucracy. Our article on US policy master’s degrees provides detailed information about how to assess the many options.

    Internships in DC are a promising route to evaluate your aptitude for policy work and to establish early career capital. Many academic institutions now offer a “Semester in DC” programme, which can let you explore placements of your choice in Congress, federal agencies, or think tanks. The Virtual Student Federal Service (VSFS) also offers part-time, remote government internships, which students can take alongside their studies during the academic year.

    Once you have a suitable background, you can take entry-level positions within parts of the government where you can build a professional network and develop your skills. In the US, you can become a congressional staffer, or take a position at a relevant federal department, such as the Department of Commerce, Department of Energy, or the Department of State. Alternatively, you can gain experience in think tanks — a particularly promising option if you have a strong aptitude for research — and government contractors, private sector companies providing services to the government.

    In Washington, DC, the culture is distinctive. There’s a big focus on networking and internal bureaucratic politics to navigate. We’ve also been told that while merit matters to a degree in US government work, it is not the primary determinant of who is most successful. People who don’t think they’d feel able or comfortable in this kind of environment for the long term should consider whether other paths would be best.

    If you find you can enjoy government and political work, impress your colleagues, and advance in your career, though, that’s a strong signal that you have the potential to make a real impact. Just being able to thrive in government work can be an extremely valuable comparative advantage.

    US citizenship

    Your citizenship may affect which opportunities are available to you. Many of the most important AI governance roles within the US — particularly in the executive branch and Congress — are only open to, or will at least heavily favour, American citizens. All key national security roles that might be especially important will be restricted to those with US citizenship, which is required to obtain a security clearance.

This may mean that those who lack US citizenship will want to consider not pursuing roles that require it, or else plan to move to the US and pursue the long process of becoming a citizen. For more details on immigration pathways and the types of policy work available to non-citizens, see this blog post on working in US policy as a foreign national. If you’re from an eligible country, also consider entering the annual diversity visa lottery: it takes little effort, and if you’re lucky it could win you a US green card.

    Technical career capital

Technical experience in machine learning, AI hardware, and related fields can be a valuable asset for an AI governance career. It will be very helpful if you’ve studied a relevant subject for an undergraduate or graduate degree, or completed a particularly productive course of independent study.

    We have a guide to technical AI safety careers, which explains how to learn the basics of machine learning.

    The following resources may be particularly useful for familiarising yourself with the field of AI safety:

Working in technical roles at an AI lab, or at other companies that use advanced AI systems and hardware, may also provide significant career capital in AI policy paths. (Read our career review discussing the pros and cons of working at a top AI lab.)

    We also have a separate career review on how becoming an expert in AI hardware could be very valuable in governance work.

Many politicians and policymakers are generalists, as their roles require them to work on many different subject areas and types of problems. This means they’ll need to rely on expert knowledge when crafting and implementing policy on AI technology that they don’t fully understand. So if you can provide them with this information, especially if you’re skilled at communicating it clearly, you can potentially fill influential roles.

Some people who were initially interested in pursuing a technical AI safety career, but who have found they’re no longer interested in that path or that they see more promising policy opportunities, may also be able to pivot effectively into a policy-oriented career.

    It is common for people with STEM backgrounds to enter and succeed in US policy careers. People with technical credentials that they may regard as fairly modest — such as computer science bachelor’s degrees or a master’s in machine learning — often find their knowledge is highly valued in Washington, DC.

Most DC jobs don’t have specific degree requirements, so you don’t need to have a policy degree to work in DC. Roles specifically addressing science and technology policy are particularly well-suited for people with technical backgrounds, and people hiring for these roles will value higher credentials like a master’s or, even better, a terminal degree like a PhD or MD.

    There are many fellowship programmes specifically aiming to support people with STEM backgrounds to enter policy careers; some are listed below.

    This won’t be right for everybody — many people with technical skills may not have the disposition or skills necessary for engaging in policy. People in policy-related paths often benefit from strong writing and social skills as well as a comfort navigating bureaucracies and working with people holding very different motivations and worldviews.

    Other specific forms of career capital

    There are other ways to gain useful career capital that could be applied in this career path.

    • If you have or gain great communication skills as, say, a journalist or an activist, these skills could be very useful in advocacy and lobbying around AI governance.
      • Especially since advocacy around AI issues is still in its early stages, it will likely need people with experience advocating in other important cause areas to share their knowledge and skills.
    • Academics with relevant skill sets are sometimes brought into government for limited stints to serve as advisors in agencies such as the US Office of Science and Technology Policy. This isn’t necessarily the foundation of a longer career in government, though it can be, and it can give an academic deeper insight into policy and politics than they might otherwise gain.
    • You can work at an AI lab in non-technical roles, gaining a deeper familiarity with the technology, the business, and the culture. (Read our career review discussing the pros and cons of working at a top AI lab.)
    • You could work on political campaigns and get involved in party politics. This is one way to get involved in legislation, learn about policy, and help impactful lawmakers, and you can also potentially help shape the discourse around AI governance. Note, though, the downsides mentioned earlier of potentially polarising public opinion around AI policy; also, entering party politics may limit your potential for impact when the party you’ve joined doesn’t hold power.
    • You could even try to become an elected official yourself, though it’s obviously competitive. If you take this route, make sure you find trustworthy and highly informed advisors to rely on to build expertise in AI, since politicians have many other responsibilities and won’t be able to focus as much on any particular issue.
    • You can focus on developing specific skill sets that might be valuable in AI governance, such as information security, intelligence work, diplomacy with China, etc.
      • Other skills: Organisational, entrepreneurial, management, diplomatic, and bureaucratic skills will also likely prove highly valuable in this career path. There may be new auditing agencies to set up or policy regimes to implement. Someone who has worked at high levels in other high-stakes industries, started an influential company, or coordinated complicated negotiations between various groups, would bring important skills to the table.

    Want one-on-one advice on pursuing this path?

    Because this is one of our priority paths, if you think this path might be a great option for you, we’d be especially excited to advise you on next steps, one-on-one. We can help you consider your options, make connections with others working in the same field, and possibly even help you find jobs or funding opportunities.

    APPLY TO SPEAK WITH OUR TEAM

    Where can this kind of work be done?

    Since successful AI governance will require work from governments, industry, and other parties, there will be many potential jobs and places to work for people in this path. The landscape will likely shift over time, so if you’re just starting out on this path, the places that seem most important might be different by the time you’re pivoting to using your career capital to make progress on the issue.

    Within the US government, for instance, it’s not clear which bodies will be most impactful when it comes to AI policy in five years. It will likely depend on choices that are made in the meantime.

That said, it seems useful to give our understanding of which parts of the government are generally influential in technology governance and most involved right now, to help you get oriented. Gaining AI-related experience in government now should still serve you well if you end up wanting to move into a more impactful AI-related role down the line, when the highest-impact areas to work in are clearer.

    We’ll also give our current sense of important actors outside government where you might be able to build career capital and potentially have a big impact.

    Note that this list has by far the most detail about places to work within the US government. We would like to expand it to include more options as we learn more. You can use this form to suggest additional options for us to include. (And the fact that an option isn’t on this list shouldn’t be taken to mean we recommend against it or even that it would necessarily be less impactful than the places listed.)

    We have more detail on other options in separate (and older) career reviews, including the following:

    With that out of the way, here are some of the places where someone could do promising work or gain valuable career capital:

    In Congress, you can either work directly for lawmakers themselves or as staff on a legislative committee. Staff roles on the committees are generally more influential on legislation and more prestigious, but for that reason, they’re more competitive. If you don’t have that much experience, you could start out in an entry-level job staffing a lawmaker and then later try to transition to staffing a committee.

    Some people we’ve spoken to expect the following committees — and some of their subcommittees — in the House and Senate to be most impactful in the field of AI. You might aim to work on these committees or for lawmakers who have significant influence on these committees.

    House of Representatives

    • House Committee on Energy and Commerce
    • House Judiciary Committee
    • House Committee on Space, Science, and Technology
    • House Committee on Appropriations
    • House Armed Services Committee
    • House Committee on Foreign Affairs
    • House Permanent Select Committee on Intelligence

    Senate

    • Senate Committee on Commerce, Science, and Transportation
    • Senate Judiciary Committee
    • Senate Committee on Foreign Relations
    • Senate Committee on Homeland Security and Government Affairs
    • Senate Committee on Appropriations
    • Senate Committee on Armed Services
    • Senate Select Committee on Intelligence
    • Senate Committee on Energy & Natural Resources
    • Senate Committee on Banking, Housing, and Urban Affairs

    The Congressional Research Service, a nonpartisan legislative agency, also offers opportunities to conduct research that can impact policy design across all subjects.

    In general, we don’t recommend taking entry-level jobs within the executive branch for this path because it’s very difficult to progress your career through the bureaucracy at this level. It’s better to get a law degree or relevant master’s degree, which can give you the opportunity to start with more seniority.

    The influence of different agencies over AI regulation may shift over time, and there may even be entirely new agencies set up to regulate AI at some point, which could become highly influential. Whichever agency may be most influential in the future, it will be useful to have accrued career capital working effectively in government, creating a professional network, learning about day-to-day policy work, and deepening your knowledge of all things AI.

    We have a lot of uncertainty about this topic, but here are some of the agencies that may have significant influence on at least one key dimension of AI policy as of this writing:

    • Executive Office of the President (EOP)
      • Office of Management and Budget (OMB)
      • National Security Council (NSC)
      • Office of Science and Technology Policy (OSTP)
    • Department of State
      • Office of the Special Envoy for Critical and Emerging Technology (S/TECH)
      • Bureau of Cyberspace and Digital Policy (CDP)
      • Bureau of Arms Control, Verification and Compliance (AVC)
      • Office of Emerging Security Challenges (ESC)
    • Federal Trade Commission
    • Department of Defense (DOD)
      • Chief Digital and Artificial Intelligence Office (CDAO)
      • Emerging Capabilities Policy Office
      • Defense Advanced Research Projects Agency (DARPA)
      • Defense Technology Security Administration (DTSA)
    • Intelligence Community (IC)
      • Intelligence Advanced Research Projects Activity (IARPA)
      • National Security Agency (NSA)
      • Science advisor roles within the various agencies that make up the intelligence community
    • Department of Commerce (DOC)
      • The Bureau of Industry and Security (BIS)
      • The National Institute of Standards and Technology (NIST)
      • CHIPS Program Office
    • Department of Energy (DOE)
      • Artificial Intelligence and Technology Office (AITO)
      • Advanced Scientific Computing Research (ASCR) Program Office
    • National Science Foundation (NSF)
      • Directorate for Computer and Information Science and Engineering (CISE)
      • Directorate for Technology, Innovation and Partnerships (TIP)
    • Cybersecurity and Infrastructure Security Agency (CISA)

Readers can find listings for roles in these departments and agencies at the federal government’s job board, USAJOBS; a more curated list of openings offering potentially high-impact roles and valuable career capital is on the 80,000 Hours job board.

We do not currently recommend attempting to join the US government via the military if you are aiming for a career in AI policy. There are many levels of seniority to rise through and many people competing for places, and initially you’ll have to spend all of your time on work unrelated to AI. However, existing military experience can be valuable career capital for other important roles in government, particularly national security positions. This route is likely more promising for military personnel who have attended an elite military academy, such as West Point, or for commissioned officers at rank O-3 or above.

    Policy fellowships are among the best entryways into policy work. They offer many benefits like first-hand policy experience, funding, training, mentoring, and networking. While many require an advanced degree, some are open to college graduates.

    • Center for Security and Emerging Technology (CSET)
    • Center for a New American Security
    • RAND Corporation
    • The MITRE Corporation
    • Brookings Institution
    • Carnegie Endowment for International Peace
    • Center for Strategic and International Studies (CSIS)
    • Federation of American Scientists (FAS)
    • Alignment Research Center
    • Open Philanthropy1
    • Institute for AI Policy and Strategy
    • Epoch AI
    • Centre for the Governance of AI (GovAI)
    • Center for AI Safety (CAIS)
    • Legal Priorities Project
    • Apollo Research
    • Centre for Long-Term Resilience
    • AI Impacts
    • Johns Hopkins Applied Physics Lab


    • Organisation for Economic Co-operation and Development (OECD)
    • International Atomic Energy Agency (IAEA)
    • International Telecommunication Union (ITU)
    • International Organization for Standardization (ISO)
    • European Union institutions (e.g., European Commission)
    • Simon Institute for Longterm Governance

    Our job board features opportunities in AI safety and policy:

      View all opportunities

      How this career path can go wrong

      Doing harm

      As we discuss in an article on accidental harm, there are many ways to set back a new field that you’re working in when you’re trying to do good, and this could mean your impact is negative rather than positive. (You may also want to read our article on harmful careers.)

      It seems likely there’s a lot of potential to inadvertently cause harm in the emerging field of AI governance. We discussed some possibilities in the section on advocacy and lobbying. Some other possibilities include:

      • Pushing for a given policy to the detriment of a superior policy
      • Communicating about the risks of AI in a way that ratchets up geopolitical tensions
      • Enacting a policy that has the opposite impact of its intended effect
      • Setting policy precedents that could be exploited by dangerous actors down the line
      • Funding projects in AI that turn out to be dangerous
      • Sending the message, implicitly or explicitly, that the risks are being managed when they aren’t, or that they’re lower than they in fact are
      • Suppressing technology that would actually be extremely beneficial for society

The trouble is that we have to act with incomplete information, so it may never be very clear when or whether people in AI governance are falling into these traps. Being aware of these potential ways of causing harm will help you stay alert to them, though, and you should remain open to changing course if you find evidence that your actions may be damaging.

      And we recommend keeping in mind the following pieces of general guidance from our article on accidental harm:

      1. Ideally, eliminate courses of action that might have a big negative impact.
      2. Don’t be a naive optimizer.
      3. Have a degree of humility.
      4. Develop expertise, get trained, build a network, and benefit from your field’s accumulated wisdom.
5. Follow cooperative norms.
      6. Match your capabilities to your project and influence.
      7. Avoid hard-to-reverse actions.

      Burning out

      We think this work is exceptionally pressing and valuable, so we encourage our readers who might have a strong personal fit for governance work to test it out. But going into government, in particular, can be difficult. Some people we’ve advised have gone into policy roles with the hope of having an impact, only to burn out and move on.

      At the same time, many policy practitioners find their work very meaningful, interesting, and varied.

      Some roles in government may be especially challenging for the following reasons:

      • Some roles can be very fast-paced, involving relatively high stress and long hours. This is particularly true in Congress and senior executive branch positions and much less so in think tanks or junior agency roles.
      • It can take a long time to get into positions with much autonomy or decision-making authority.
      • Progress on the issues you care about can be slow, and you often have to work on other priorities. Congressional staffers in particular typically have very broad policy portfolios.
      • Work within bureaucracies faces many limitations, which can be frustrating.
      • It can be demotivating to work with people who don’t share your values. Though note that policy can select for altruistic people — even if they have different beliefs about how to do good.
      • The work isn’t typically well paid relative to comparable positions outside of government.

      So we recommend speaking to people in the kinds of positions you might aim to have in order to get a sense of whether the career path would be right for you. And if you do choose to pursue it, look out for signs that the work may be having a negative effect on you and seek support from people who understand what you care about.

      If you end up wanting or needing to leave and transition into a new path, that’s not necessarily a loss or a reason for regret. You will likely make important connections and learn a lot of useful information and skills. This career capital can be useful as you transition into another role, perhaps pursuing a complementary approach to AI governance and coordination.

      What the increased attention on AI means

      We’ve been concerned about risks posed by AI for years. Based on the arguments that this technology could potentially cause a global catastrophe, and otherwise have a dramatic impact on future generations, we’ve advised many people to work to mitigate the risks.

The arguments for the risk aren’t completely conclusive, in our view. But they are worth taking seriously, and given that few others in the world seemed to be devoting much time to even figuring out how big the threat was or how to mitigate it (while progress in making AI systems more powerful was accelerating), we concluded it was worth ranking among our top priorities.

      Now that there’s increased attention on AI, some might conclude that it’s less neglected and thus less pressing to work on. However, the increased attention on AI also makes many interventions potentially more tractable than they had been previously, as policymakers and others are more open to the idea of crafting AI regulations.

      And while more attention is now being paid to AI, it’s not clear it will be focused on the most important risks. So there’s likely still a lot of room for important and pressing work positively shaping the development of AI policy.

      Read next

      If you’re interested in this career path, we recommend checking out some of the following articles next.

      Learn more

      Top recommendations

      Further recommendations

      Read next:  Learn about other high-impact careers

      Want to consider more paths? See our list of the highest-impact career paths according to our research.

      Plus, join our newsletter and we’ll mail you a free book

      Join our newsletter and we’ll send you a free copy of The Precipice — a book by philosopher Toby Ord about how to tackle the greatest threats facing humanity. T&Cs here.

      The post AI governance and coordination appeared first on 80,000 Hours.

      ]]>
      AI safety technical research https://80000hours.org/career-reviews/ai-safety-researcher/ Mon, 19 Jun 2023 10:28:33 +0000 https://80000hours.org/?post_type=career_profile&p=74400 The post AI safety technical research appeared first on 80,000 Hours.

      ]]>
      Progress in AI — while it could be hugely beneficial — comes with significant risks. Risks that we’ve argued could be existential.

      But these risks can be tackled.

      With further progress in AI safety, we have an opportunity to develop AI for good: systems that are safe, ethical, and beneficial for everyone.

      This article explains how you can help.

      In a nutshell: Artificial intelligence will have transformative effects on society over the coming decades, and could bring huge benefits — but we also think there’s a substantial risk. One promising way to reduce the chances of an AI-related catastrophe is to find technical solutions that could allow us to prevent AI systems from carrying out dangerous behaviour.

      Pros

      • Opportunity to make a significant contribution to a hugely important area of research
      • Intellectually challenging and interesting work
      • The area has a strong need for skilled researchers and engineers, and is highly neglected overall

      Cons

• Due to a shortage of managers in the field, it can be difficult to get jobs, and it might take you some time to build the required career capital and expertise
      • You need a strong quantitative background
      • It might be very difficult to find solutions
      • There’s a real risk of doing harm

      Key facts on fit

      You’ll need a quantitative background and should probably enjoy programming. If you’ve never tried programming, you may be a good fit if you can break problems down into logical parts, generate and test hypotheses, possess a willingness to try out many different solutions, and have high attention to detail.

      If you already:

      • Are a strong software engineer, you could apply for empirical research contributor roles right now (even if you don’t have a machine learning background, although that helps)
• Could get into a top 10 machine learning PhD programme, that would put you on track to become a research lead
      • Have a very strong maths or theoretical computer science background, you’ll probably be a good fit for theoretical alignment research

      Recommended

      If you are well suited to this career, it may be the best way for you to have a social impact.

      Review status

      Based on a medium-depth investigation 

      Thanks to Adam Gleave, Jacob Hilton and Rohin Shah for reviewing this article. And thanks to Charlie Rogers-Smith for his help, and his article on the topic — How to pursue a career in technical AI alignment.

      Why AI safety technical research is high impact

      As we’ve argued, in the next few decades, we might see the development of hugely powerful machine learning systems with the potential to transform society. This transformation could bring huge benefits — but only if we avoid the risks.

      We think that the worst-case risks from AI systems arise in large part because AI systems could be misaligned — that is, they will aim to do things that we don’t want them to do. In particular, we think they could be misaligned in such a way that they develop (and execute) plans that pose risks to humanity’s ability to influence the world, even when we don’t want that influence to be lost.

      We think this means that these future systems pose an existential threat to civilisation.

      Even if we find a way to avoid this power-seeking behaviour, there are still substantial risks — such as misuse by governments or other actors — which could be existential threats in themselves.

      Want to learn more about risks from AI? Read the problem profile.

      We think that technical AI safety could be the highest-impact career path we’ve identified to date. That’s because it seems like a promising way of reducing risks from AI. We’ve written an entire article about what those risks are and why they’re so important.

      Read more about preventing an AI-related catastrophe

      There are many ways in which we could go about reducing the risks that these systems might pose. But one of the most promising may be researching technical solutions that prevent unwanted behaviour — including misaligned behaviour — from AI systems. (Finding a technical way to prevent misalignment in particular is known as the alignment problem.)

      In the past few years, we’ve seen more organisations start to take these risks more seriously. Many of the leading industry labs developing AI — including Google DeepMind and OpenAI — have teams dedicated to finding these solutions, alongside academic research groups including at MIT, Oxford, Cambridge, Carnegie Mellon University, and UC Berkeley.

      That said, the field is still very new. We think there are only around 300 people working on technical approaches to reducing existential risks from AI systems,1 which makes this a highly neglected field.

      Finding technical ways to reduce this risk could be quite challenging. Any practically helpful solution must retain the usefulness of the systems (remaining economically competitive with less safe systems), and continue to work as systems improve over time (that is, it needs to be ‘scalable’). As we argued in our problem profile, it seems like it might be difficult to find viable solutions, particularly for modern ML (machine learning) systems.

      (If you don’t know anything about ML, we’ve written a very very short introduction to ML, and we’ll go into more detail on how to learn about ML later in this article. Alternatively, if you do have ML experience, talk to our team — they can give you personalised career advice, make introductions to others working on these issues, and possibly even help you find jobs or funding opportunities.)

      Although it seems hard, there are lots of avenues for more research — and the field really is very young, so there are new promising research directions cropping up all the time. So we think it’s moderately tractable, though we’re highly uncertain.

      In fact, we’re uncertain about all of this and have written extensively about reasons we might be wrong about AI risk.

      But, overall, we think that — if it’s a good fit for you — going into AI safety technical research may just be the highest-impact thing you can do with your career.

      What does this path involve?

      AI safety technical research generally involves working as a scientist or engineer at major AI labs, in academia, or in independent nonprofits.

      These roles can be very hard to get. You’ll likely need to build up career capital before you end up in a high-impact role (more on this later, in the section on how to enter). That said, you may not need to spend a long time building this career capital — we’ve seen exceptionally talented people move into AI safety from other quantitative fields, sometimes in less than a year.

      Most AI safety technical research falls on a spectrum between empirical research (experimenting with current systems as a way of learning more about what will work), and theoretical research (conceptual and mathematical research looking at ways of ensuring that future AI systems are safe).

      No matter where on this spectrum you end up working, your career path might look a bit different depending on whether you want to aim at becoming a research lead — proposing projects, managing a team and setting direction — or a contributor — focusing on carrying out the research.

      Finally, there are two slightly different roles you might aim for:

      • In academia, research is often led by professors — the key distinguishing feature of being a professor is that you’ll also teach classes and mentor grad students (and you’ll definitely need a PhD).
• Many (but not all) contributor roles in empirical research are also engineering roles, often in software engineering. Here, we’re focusing on software roles that directly contribute to AI safety research (and which often require some ML background) — we’ve written about software engineering more generally in a separate career review.

      4 kinds of AI safety role: empirical lead, empirical contributor, theoretical lead and theoretical contributor

      We think that research lead roles are probably higher-impact in general. But overall, the impact you could have in any of these roles is likely primarily determined by your personal fit for the role — see the section on how to predict your fit in advance.

      Next, we’ll take a look at what working in each path might involve. Later, we’ll go into how you might enter each path.

      What does work in the empirical AI safety path involve?

      Empirical AI safety tends to involve teams working directly with ML models to identify any risks and develop ways in which they might be mitigated.

      That means the work is focused on current ML techniques and techniques that might be applied in the very near future.

      Practically, working on empirical AI safety involves lots of programming and ML engineering. You might, for example, come up with ways you could test the safety of existing systems, and then carry out these empirical tests.

      You can find roles in empirical AI safety in industry and academia, as well as some in AI safety-focused nonprofits.

      Particularly in academia, lots of relevant work isn’t explicitly labelled as being focused on existential risk — but it can still be highly valuable. For example, work in interpretability, adversarial examples, diagnostics and backdoor learning, among other areas, could be highly relevant to reducing the chance of an AI-related catastrophe.

      We’re also excited by experimental work to develop safety standards that AI companies might adhere to in the future — for example, the work being carried out by METR.

      To learn more about the sorts of research taking place at labs focused on empirical AI safety, take a look at:

While programming is central to all empirical work, research lead roles will generally be less focused on it; leads instead need stronger research taste and theoretical understanding. In comparison, research contributors need to be very good at programming and software engineering.

      What does work in the theoretical AI safety path involve?

      Theoretical AI safety is much more heavily conceptual and mathematical. Often it involves careful reasoning about the hypothetical behaviour of future systems.

      Generally, the aim is to come up with properties that it would be useful for safe ML algorithms to have. Once you have some useful properties, you can try to develop algorithms with these properties (bearing in mind that to be practically useful these algorithms will have to end up being adopted by industry). Alternatively, you could develop ways of checking whether systems have these properties. These checks could, for example, help hold future AI products to high safety standards.

      Many people working in theoretical AI safety will spend much of their time proving theorems or developing new mathematical frameworks. More conceptual approaches also exist, although they still tend to make heavy use of formal frameworks.

      Some examples of research in theoretical AI safety include:

      There are generally fewer roles available in theoretical AI safety work, especially as research contributors. Theoretical research contributor roles exist at nonprofits (primarily the Alignment Research Center), as well as at some labs (for example, Anthropic’s work on conditioning predictive models and the Causal Incentives Working Group at Google DeepMind). Most contributor roles in theoretical AI safety probably exist in academia (for example, PhD students in teams working on projects relevant to theoretical AI safety).

      Some exciting approaches to AI safety

      There are lots of technical approaches to AI safety currently being pursued. Here are just a few of them:

      It’s worth noting that there are many approaches to AI safety, and people in the field strongly disagree on what will or won’t work.

      This means that, once you’re working in the field, it can be worth being charitable and careful not to assume that others’ work is unhelpful just because it seemed so on a quick skim. You should probably be uncertain about your own research agenda as well.

      What’s more, as we mentioned earlier, lots of relevant work across all these areas isn’t explicitly labelled ‘safety.’

      So it’s important to think carefully about how or whether any particular research helps reduce the risks that AI systems might pose.

      What are the downsides of this career path?

      AI safety technical research is not the only way to make progress on reducing the risks that future AI systems might pose. Also, there are many other pressing problems in the world that aren’t the possibility of an AI-related catastrophe, and lots of careers that can help with them. If you’d be a better fit working on something else, you should probably do that.

      Beyond personal fit, there are a few other downsides to the career path:

      • It can be very competitive to enter (although once you’re in, the jobs are well paid, and there are lots of backup options).
      • You need quantitative skills — and probably programming skills.
      • The work is geographically concentrated in just a few places (mainly the California Bay Area and London, but there are also opportunities in places with top universities such as Oxford, New York, Pittsburgh, and Boston). That said, remote work is increasingly possible at many research labs.
• It might not be very tractable to find good technical ways of reducing the risk. Although assessments of its difficulty vary, and while making progress is almost certainly possible, it may be quite hard to do so. This reduces the impact that you could have working in the field. (That said, if you start out in technical work, you might be able to transition to governance work, since that often benefits from technical training and experience with the industry, which most people do not have.)
      • Relatedly, there’s lots of disagreement in the field about what could work; you’ll probably be able to find at least some people who think what you’re working on is useless, whatever you end up doing.
      • Most importantly, there’s some risk of doing harm. While gaining career capital, and while working on the research itself, you’ll have to make difficult decisions and judgement calls about whether you’re working on something beneficial (see our anonymous advice about working in roles that advance AI capabilities). There’s huge disagreement on which technical approaches to AI safety might work — and sometimes this disagreement takes the form of thinking that a strategy will actively increase existential risks from AI.

      Finally, we’ve written more about the best arguments against AI being pressing in our problem profile on preventing an AI-related catastrophe. If those are right, maybe you could have more impact working on a different issue.

      How much do AI safety technical researchers earn?

      Many technical researchers work at companies or small startups that pay wages competitive with the Bay Area and Silicon Valley tech industry, and even smaller organisations and nonprofits will pay competitive wages to attract top talent. The median compensation for a software engineer in the San Francisco Bay area was $222,000 per year in 2020.3 (Read more about software engineering salaries).

      This $222,000 median may be an underestimate, as AI roles, especially in top AI labs that are rapidly scaling up their work in AI, often pay better than other tech jobs, and the same applies to safety researchers — even those in nonprofits.

      However, academia has lower salaries than industry in general, and we’d guess that AI safety research roles in academia pay less than commercial labs and nonprofits.

      Examples of people pursuing this path

      How to predict your fit in advance

      You’ll generally need a quantitative background (although not necessarily a background in computer science or machine learning) to enter this career path.

      There are two main approaches you can take to predict your fit, and it’s helpful to do both:

• Try it out: try out the first few steps in the section below on learning the basics. If you haven’t yet, try learning some Python, as well as taking courses in linear algebra, calculus, and probability. And if you’ve done that, try learning a bit about deep learning and AI safety. Finally, the best way to try this out for many people would be to actually get a job as a (non-safety) ML engineer (see more in the section on how to enter).
      • Talk to people about whether it would be a good fit for you: If you want to become a technical researcher, our team probably wants to talk to you. We can give you 1-1 advice, for free. If you know anyone working in the area (or something similar), discuss this career path with them and ask for their honest opinion. You may be able to meet people through our community. Our advisors can also help make connections.

      It can take some time to build expertise, and enjoyment can follow expertise — so be prepared to take some time to learn and practice before you decide to switch to something else entirely.

      If you’re not sure what roles you might aim for longer term, here are a few rough ways you could make a guess about what to aim for, and whether you might be a good fit for various roles on this path:

      • Testing your fit as an empirical research contributor: In a blog post about hiring for safety researchers, the Google DeepMind team said “as a rough test for the Research Engineer role, if you can reproduce a typical ML paper in a few hundred hours and your interests align with ours, we’re probably interested in interviewing you.”
        • Looking specifically at software engineering, one hiring manager at Anthropic said that if you could, with a few weeks’ work, write a complex new feature or fix a very serious bug in a major ML library, they’d want to interview you straight away. (Read more.)
      • Testing your fit for theoretical research: If you could have got into a top 10 maths or theoretical computer science PhD programme if you’d optimised your undergrad to do so, that’s a decent indication of your fit (and many researchers in fact have these PhDs). The Alignment Research Center (one of the few organisations that hires for theoretical research contributors, as of 2023) said that they were open to hiring people without any research background. They gave four tests of fit: creativity (e.g. you may have ideas for solving open problems in the field, like Eliciting Latent Knowledge); experience designing algorithms, proving theorems, or formalising concepts; broad knowledge of maths and computer science; and having thought a lot about the AI alignment problem in particular.
      • Testing your fit as a research lead (or for a PhD): The vast majority of research leads have a PhD. Also, many (but definitely not all) AI safety technical research roles will require a PhD — and if they don’t, having a PhD (or being the sort of person that could get one) would definitely help show that you’re a good fit for the work. To get into a top 20 machine learning PhD programme, you’d probably need to publish something like a first author workshop paper, as well as a third author conference paper at a major ML conference (like NeurIPS or ICML). (Read more about whether you should do a PhD).

      Read our article on personal fit to learn more about how to assess your fit for the career paths you want to pursue.

      How to enter

      You might be able to apply for roles right away — especially if you meet, or are near meeting, the tests we just looked at — but it also might take you some time, possibly several years, to skill up first.

      So, in this section, we’ll give you a guide to entering technical AI safety research. We’ll go through four key questions:

      1. How to learn the basics
      2. Whether you should do a PhD
      3. How to get a job in empirical research
      4. How to get a job in theoretical research

      Hopefully, by the end of the section, you’ll have everything you need to get going.

      Learning the basics

      To get anywhere in the world of AI safety technical research, you’ll likely need a background knowledge of coding, maths, and deep learning.

      You might also want to practice enough to become a decent ML engineer (although this is generally more useful for empirical research), and learn a bit about safety techniques in particular (although this is generally more useful for empirical research leads and theoretical researchers).

      We’ll go through each of these in turn.

      Learning to program

      You’ll probably want to learn to code in python, because it’s the most widely used language in ML engineering.

      The first step is probably just trying it out. As a complete beginner, you can write a Python program in less than 20 minutes that reminds you to take a break every two hours. Don’t be discouraged if your code doesn’t work the first time — that’s what normally happens when people code!
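To give a concrete sense of how small that first program can be, here’s a minimal sketch (our own illustration, not from any particular course) of a script that nags you to take a break every two hours:

```python
# A tiny break reminder: prints a message every two hours until you stop it.
import time

BREAK_INTERVAL_SECONDS = 2 * 60 * 60  # two hours

while True:
    time.sleep(BREAK_INTERVAL_SECONDS)
    print("Time to take a break! Stand up and stretch.")
```

Running something like this from a terminal, then tweaking the interval or the message, is exactly the kind of small experiment that helps the basics stick.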

      Once you’ve done that, you have a few options:

      You can read more about learning to program — and how to get your first job in software engineering (if that’s the route you want to take) — in our career review on software engineering.

      Learning the maths

      The maths of deep learning relies heavily on calculus and linear algebra, and statistics can be useful too — although generally learning the maths is much less important than programming and basic, practical ML.
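As a rough illustration of where these subjects show up (this example is our own, not taken from any particular course): even fitting a single linear layer by gradient descent involves both. With a squared-error loss, the update is

```latex
L(W) = \tfrac{1}{2}\lVert Wx - y \rVert^2, \qquad
\nabla_W L = (Wx - y)\,x^{\top}, \qquad
W \leftarrow W - \eta \, \nabla_W L
```

The loss and its gradient are calculus, the matrix-vector products are linear algebra, and statistics enters as soon as x and y are samples from a dataset rather than fixed vectors.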

      We’d generally recommend studying a quantitative degree (like maths, computer science or engineering), most of which will cover all three areas pretty well.

      If you want to actually get good at maths, you have to be solving problems. So, generally, the most useful thing that textbooks and online courses provide isn’t their explanations — it’s a set of exercises to try to solve, in order, with some help if you get stuck.

      If you want to self-study (especially if you don’t have a quantitative degree) here are some possible resources:

      You might be able to find resources that cover all these areas, like Imperial College’s Mathematics for Machine Learning.

      Learning basic machine learning

      You’ll likely need to have a decent understanding of how AI systems are currently being developed. This will involve learning about machine learning and neural networks, before diving into any specific subfields of deep learning.

      Again, there’s the option of covering this at university. If you’re currently at college, it’s worth checking if you can take an ML course even if you’re not majoring in computer science.

      There’s one important caveat here: you’ll learn a huge amount on the job, and the amount you’ll need to know in advance for any role or course will vary hugely! Not even top academics know everything about their fields. It’s worth trying to find out how much you’ll need to know for the role you want to do before you invest hundreds of hours into learning about ML.

      With that caveat in mind, here are some suggestions of places you might start if you want to self-study the basics:

      PyTorch is a very common package used for implementing neural networks, and probably worth learning! When I was first learning about ML, my first neural network was a 3-layer convolutional neural network with L2 regularisation classifying characters from the MNIST database. This is a pretty common first challenge, and a good way to learn PyTorch.
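To make that concrete, here’s a compressed sketch of a small PyTorch convolutional network for MNIST, with L2 regularisation applied through the optimiser’s weight_decay parameter. The architecture and hyperparameters are illustrative choices of ours rather than a recommendation:

```python
# A small convolutional network for MNIST in PyTorch.
# L2 regularisation is applied via the optimiser's weight_decay argument.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)   # keeps 28x28
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)  # keeps 14x14
        self.fc = nn.Linear(32 * 7 * 7, 10)                       # 10 digit classes

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # -> 16 x 14 x 14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # -> 32 x 7 x 7
        return self.fc(x.flatten(1))

def main():
    train_data = datasets.MNIST("data", train=True, download=True,
                                transform=transforms.ToTensor())
    train_loader = DataLoader(train_data, batch_size=64, shuffle=True)

    model = SmallCNN()
    # weight_decay adds an L2 penalty on the weights during optimisation.
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

    model.train()
    for epoch in range(2):
        for images, labels in train_loader:
            optimiser.zero_grad()
            loss = F.cross_entropy(model(images), labels)
            loss.backward()
            optimiser.step()
        print(f"epoch {epoch}: last batch loss {loss.item():.3f}")

if __name__ == "__main__":
    main()
```

Getting a model like this to train, and then experimenting with what helps or hurts accuracy, is a good way to learn the practical side of PyTorch.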

      Learning about AI safety

      If you’re going to work as an AI safety researcher, it usually helps to know about AI safety.

      This isn’t always true — some engineering roles won’t require much knowledge of AI safety. But even then, knowing the basics will probably help land you a position, and can also help with things like making difficult judgement calls and avoiding doing harm. And if you want to be able to identify and do useful work, you’ll need to learn about the field eventually.

      Because the field is still so new, there probably aren’t (yet) university courses you can take. So you’ll need to do some self-study. Here are some places you might start:

      For more suggestions — especially when it comes to reading about the nature of the risks we might face from AI systems — take a look at the top resources to learn more from our problem profile.

      Should you do a PhD?

      Some technical research roles will require a PhD — but many won’t, and PhDs aren’t the best option for everyone.

      The main benefit of doing a PhD is probably practising setting and carrying out your own research agenda. As a result, getting a PhD is practically the default if you want to be a research lead.

      That said, you can also become a research lead without a PhD — in particular, by transitioning from a role as a research contributor. At some large labs, the boundary between being a contributor and a lead is increasingly blurry.

      Many people find PhDs very difficult. They can be isolating and frustrating, and take a very long time (4–6 years). What’s more, both your quality of life and the amount you’ll learn will depend on your supervisor — and it can be really difficult to figure out in advance whether you’re making a good choice.

      So, if you’re considering doing a PhD, here are some things to consider:

      • Your long-term vision: If you’re aiming to be a research lead, that suggests you might want to do a PhD — the vast majority of research leads have PhDs. If you mainly want to be a contributor (e.g. an ML or software engineer), that suggests you might not. If you’re unsure, you should try doing something to test your fit for each, like trying a project or internship. You might try a pre-doctoral research assistant role — if the research you do is relevant to your future career, these can be good career capital, whether or not you do a PhD.
      • The topic of your research: It’s easy to let yourself become tied down to a PhD topic you’re not confident in. If the PhD you’re considering would let you work on something that seems useful for AI safety, it’s probably — all else equal — better for your career, and the research itself might have a positive impact as well.
      • Mentorship: What are the supervisors or managers like at the opportunities open to you? You might be able to find ML engineering or research roles in industry where you could learn much more than you would in a PhD — or vice versa. When picking a supervisor, try reaching out to the current or former students of a prospective supervisor and asking them some frank questions. (Also, see this article on how to choose a PhD supervisor.)
      • Your fit for the work environment: Doing a PhD means working on your own with very little supervision or feedback for long periods of time. Some people thrive in these conditions! But some really don’t and find PhDs extremely difficult.

      Read more in our more detailed (but less up-to-date) review of machine learning PhDs.

      It’s worth remembering that most jobs don’t need a PhD. And for some jobs, especially empirical research contributor roles, even if a PhD would be helpful, there are often better ways of getting the career capital you’d need (for example, working as a software or ML engineer). We’ve interviewed two ML engineers who have had hugely successful careers without doing a PhD.

      Whether you should do a PhD doesn’t depend (much) on timelines

      We think it’s plausible that we will develop AI that could be hugely transformative for society by the end of the 2030s.

      All else equal, that possibility could argue for trying to have an impact right away, rather than spending five (or more) years doing a PhD.

      Ultimately, though, how well you, in particular, are suited to a particular PhD is probably a much more important factor than when AI will be developed.

      That is to say, we think the increase in impact caused by choosing a path that’s a good fit for you is probably larger than any decrease in impact caused by delaying your work. This is in part because the spread in impact caused by the specific roles available to you, as well as your personal fit for them, is usually very large. Some roles (especially research lead roles) will just require having a PhD, and others (especially more engineering-heavy roles) won’t — and people’s fit for these paths varies quite a bit.

      We’re also highly uncertain about estimates about when we might develop transformative AI. This uncertainty reduces the expected cost of any delay.

      Most importantly, we think PhDs shouldn’t be thought of as a pure delay to your impact. You can do useful work in a PhD, and generally, the first couple of years in any career path will involve a lot of learning the basics and getting up to speed. So if you have a good mentor, work environment, and choice of topic, your PhD work could be as good as, or possibly better than, the work you’d do if you went to work elsewhere early in your career. And if you suddenly receive evidence that we have less time than you thought, it’s relatively easy to drop out.

      There are lots of other considerations here — for a rough overview, and some discussion, see this post by 80,000 Hours advisor Alex Lawsen, as well as the comments.

      Overall, we’d suggest that instead of worrying about a delay to your impact, think instead about which longer-term path you want to pursue, and how the specific opportunities in front of you will get you there.

      How to get into a PhD

ML PhDs can be very competitive. To get in, you’ll probably need a few publications (as we said above, something like a first author workshop paper, as well as a third author conference paper at a major ML conference like NeurIPS or ICML), and references, probably from ML academics. (Publications also look good whatever path you end up going down!)

      To end up at that stage, you’ll need a fair bit of luck, and you’ll also need to find ways to get some research experience.

One option is to do a master’s degree in ML, although make sure it’s a research master’s — most ML master’s degrees primarily focus on preparation for industry.

      Even better, try getting an internship in an ML research group. Opportunities include RISS at Carnegie Mellon University, UROP at Imperial College London, the Aalto Science Institute international summer research programme, the Data Science Summer Institute, the Toyota Technological Institute intern programme and MILA. You can also try doing an internship specifically in AI safety, for example at CHAI. However, there are sometimes disadvantages to doing internships specifically in AI safety directly — in general, it may be harder to publish and mentorship might be more limited.

      Another way of getting research experience is by asking whether you can work with researchers. If you’re already at a top university, it can be easiest to reach out to people working at the university you’re studying at.

      PhD students or post-docs can be more responsive than professors, but eventually, you’ll want a few professors you’ve worked with to provide references, so you’ll need to get in touch. Professors tend to get lots of cold emails, so try to get their attention! You can try:

      • Getting an introduction, for example from a professor who’s taught you
      • Mentioning things you’ve done (your grades, relevant courses you’ve taken, your GitHub, any ML research papers you’ve attempted to replicate as practice)
• Reading some of their papers and the main papers in the field, and mentioning them in the email
      • Applying for funding that’s available to students who want to work in AI safety, and letting people know you’ve got funding to work with them

      Ideally, you’ll find someone who supervises you well and has time to work with you (that doesn’t necessarily mean the most famous professor — although it helps a lot if they’re regularly publishing at top conferences). That way, they’ll get to know you, you can impress them, and they’ll provide an amazing reference when you apply for PhDs.

      It’s very possible that, to get the publications and references you’ll need to get into a PhD, you’ll need to spend a year or two working as a research assistant, although these positions can also be quite competitive.

      This guide by Adam Gleave also goes into more detail on how to get a PhD, including where to apply and tips on the application process itself. We discuss ML PhDs in more detail in our career review on ML PhDs (though it’s outdated compared to this career review).

      Getting a job in empirical AI safety research

      Ultimately, the best way of learning to do empirical research — especially in contributor and engineering-focused roles — is to work somewhere that does both high-quality engineering and cutting-edge research.

      The top three labs are probably Google DeepMind (who offer internships to students), OpenAI (who have a 6-month residency programme) and Anthropic. (Working at a leading AI lab carries with it some risk of doing harm, so it’s important to think carefully about your options. We’ve written a separate article going through the major relevant considerations.)

      To end up working in an empirical research role, you’ll probably need to build some career capital.

      Whether you want to be a research lead or a contributor, it’s going to help to become a really good software engineer. The best ways of doing this usually involve getting a job as a software engineer at a big tech company or at a promising startup. (We’ve written an entire article about becoming a software engineer.)

      Many roles will require you to be a good ML engineer, which means going further than just the basics we looked at above. The best way to become a good ML engineer is to get a job doing ML engineering — and the best places for that are probably leading AI labs.

      For roles as a research lead, you’ll need relatively more research experience. You’ll either want to become a research contributor first, or enter through academia (for example by doing a PhD).

      All that said, it’s important to remember that you don’t need to know everything to start applying, as you’ll inevitably learn loads on the job — so do try to find out what you’ll need to learn to land the specific roles you’re considering.

      How much experience do you need to get a job? It’s worth reiterating the tests we looked at above for contributor roles:

      • In a blog post about hiring for safety researchers, the DeepMind team said “as a rough test for the Research Engineer role, if you can reproduce a typical ML paper in a few hundred hours and your interests align with ours, we’re probably interested in interviewing you.”
      • Looking specifically at software engineering, one hiring manager at Anthropic said that if you could, with a few weeks’ work, write a new feature or fix a serious bug in a major ML library, they’d want to interview you straight away. (Read more.)

      In the process of getting this experience, you might end up working in roles that advance AI capabilities. There are a variety of views on whether this might be harmful — so we’d suggest reading our article about working at leading AI labs and our article containing anonymous advice from experts about working in roles that advance capabilities. It’s also worth talking to our team about any specific opportunities you have.

      If you’re doing another job, or a degree, or think you need to learn some more before trying to change careers, there are a few good ways of getting more experience doing ML engineering that go beyond the basics we’ve already covered:

      • Getting some experience in software / ML engineering. For example, if you're doing a degree, you might try an internship as a software engineer during the summer. DeepMind offer internships for students with at least two years of study in a technical subject.
      • Replicating papers. One great way of getting experience doing ML engineering is to replicate some papers in whatever sub-field you might want to work in (see the minimal sketch after this list). Richard Ngo, an AI governance researcher at OpenAI, has written some advice on replicating papers. But bear in mind that replicating papers can be quite hard — take a look at Amid Fish’s blog on what he learned replicating a deep RL paper. Finally, Rogers-Smith has some suggestions on papers to replicate. If you do spend some time replicating papers, remember that when you get to applying for roles, it will be really useful to be able to prove you’ve done the work. So try uploading your work to GitHub, or writing a blog on your progress. And if you’re thinking about spending a long time on this (say, over 100 hours), try to get some feedback on the papers you might replicate before you start — you could even reach out to a lab you want to work for.
      • Taking or following a more in-depth course in empirical AI safety research. Redwood Research ran the MLAB bootcamp, and you can apply for access to their curriculum here. You could also take a look at this Deep Learning Curriculum by Jacob Hilton, a researcher at the Alignment Research Center — although it’s probably very challenging without mentorship. The Alignment Research Engineer Accelerator is a program that uses this curriculum. Some mentors on the SERI ML Alignment Theory Scholars Program focus on empirical research.
      • Learning about a sub-field of deep learning. In particular, we’d suggest natural language processing (especially transformers — see this lecture as a starting point) and reinforcement learning (take a look at Pong from Pixels by Andrej Karpathy, and OpenAI’s Spinning up in Deep RL). Try to get to the point where you know about the most important recent advances.
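
      To make the paper-replication suggestion above a bit more concrete, here's a minimal sketch (in Python, assuming PyTorch is installed) of the scaffolding you typically end up writing when reproducing a result: a fixed random seed, a data pipeline, a model, a training loop, and per-epoch logging you can compare against the paper's reported curves. The dataset, architecture, and hyperparameters below are placeholders rather than details from any real paper.

      # Hypothetical skeleton for replicating an ML paper result.
      # The data, model, and hyperparameters are placeholders — swap in whatever
      # the paper you're replicating actually specifies.
      import torch
      from torch import nn
      from torch.utils.data import DataLoader, TensorDataset
      torch.manual_seed(0)  # papers usually report results over one or more fixed seeds
      # Placeholder data: replace with the paper's dataset and preprocessing.
      X = torch.randn(1024, 32)
      y = (X.sum(dim=1, keepdim=True) > 0).float()
      train_loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)
      # Placeholder model: replace with the architecture described in the paper.
      model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
      optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.BCEWithLogitsLoss()
      for epoch in range(5):
          total_loss = 0.0
          for xb, yb in train_loader:
              optimiser.zero_grad()
              loss = loss_fn(model(xb), yb)
              loss.backward()
              optimiser.step()
              total_loss += loss.item() * xb.size(0)
          # Log a metric each epoch so you can compare training curves with the paper's figures.
          print(f"epoch {epoch}: train loss {total_loss / len(train_loader.dataset):.4f}")

      In a real replication, most of the effort usually goes into matching the paper's data preprocessing, hyperparameters, and evaluation protocol exactly — the training loop itself is typically the easy part, and documenting the mismatches you find is exactly the kind of work that's worth showing on GitHub or in a blog post.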

      Finally, Athena is an AI alignment mentorship program for women with a technical background looking to get jobs in the alignment field.

      Getting a job in theoretical AI safety research

      There are fewer jobs available in theoretical AI safety research, so it’s harder to give concrete advice. Having a maths or theoretical computer science PhD isn’t always necessary, but is fairly common among researchers in industry, and is pretty much required to be an academic.

      If you do a PhD, ideally it’d be in an area at least somewhat related to theoretical AI safety research. For example, it could be in probability theory as applied to AI, or in theoretical CS (look for researchers who publish in COLT or FOCS).

      Alternatively, one path is to become an empirical research lead before moving into theoretical research.

      Compared to empirical research, you’ll need to know relatively less about engineering, and relatively more about AI safety as a field.

      Once you’ve done the basics, one possible next step you could try is reading papers from a particular researcher, or on a particular topic, and summarising what you’ve found.

      You could also try spending some time (maybe 10–100 hours) reading about a topic and then some more time (maybe another 10–100 hours) trying to come up with some new ideas on that topic. For example, you could try coming up with proposals to solve the problem of eliciting latent knowledge. Alternatively, if you wanted to focus on the more mathematical side, you could try having a go at the assignment at the end of this lecture by Michael Cohen, a grad student at the University of Oxford.

      If you want to enter academia, reading a ton of papers seems particularly important. Maybe try writing a survey paper on a certain topic in your spare time: it’s a great way to master a topic, spot gaps, and generate research ideas. When applying to grad school or jobs, a survey paper is also a fantastic way to show you love research so much you do it for fun.

      There are some research programmes aimed at people new to the field, such as the SERI ML Alignment Theory Scholars Program, to which you could apply.

      Other ways to get more concrete experience include doing research internships, working as a research assistant, or doing a PhD, all of which we’ve written about above, in the section on whether and how you can get into a PhD programme.

      One note is that a lot of people we talk to try to learn independently. This can be a great idea for some people, but is fairly tough for many, because there’s substantially less structure and mentorship.

      AI labs in industry that have empirical technical safety teams, or are focused entirely on safety:

      • Anthropic is an AI safety company working on building interpretable and safe AI systems. They focus on empirical AI safety research. Anthropic cofounders Daniela and Dario Amodei gave an interview about the lab on the Future of Life Institute podcast. On our podcast, we spoke to Chris Olah, who leads Anthropic’s research into interpretability, and Nova DasSarma, who works on systems infrastructure at Anthropic.
      • METR works on assessing whether cutting-edge AI systems could pose catastrophic risks to civilisation. This includes early-stage, experimental work to develop assessment techniques, as well as evaluating systems produced by Anthropic and OpenAI.
      • The Center for AI Safety is a nonprofit that does technical research and promotion of safety in the wider machine learning community.
      • FAR AI is a research nonprofit that incubates and accelerates research agendas that are too resource-intensive for academia but not yet ready for commercialisation by industry, including research in adversarial robustness, interpretability and preference learning.
      • Google DeepMind is probably the largest and most well-known research group developing artificial general intelligence, and is famous for its work creating AlphaGo, AlphaZero, and AlphaFold. It is not principally focused on safety, but has two teams focused on AI safety: the Scalable Alignment Team, which works on aligning existing state-of-the-art systems, and the Alignment Team, which focuses on research bets for aligning future systems.
      • OpenAI, founded in 2015, is a lab that is trying to build artificial general intelligence that is safe and benefits all of humanity. OpenAI is well known for its language models like GPT-4. Like DeepMind, it is not principally focused on safety, but has a safety team and a governance team. Jan Leike (co-lead of the superalignment team) has some blog posts on how he thinks about AI alignment, and has spoken on our podcast about the sorts of people he’d like to hire for his team.
      • Ought is a machine learning lab building Elicit, an AI research assistant. Their aim is to align open-ended reasoning by learning human reasoning steps, and to direct AI progress towards helping with evaluating evidence and arguments.
      • Redwood Research is an AI safety research organisation, whose first big project attempted to make sure language models (like GPT-3) produce output following certain rules with very high probability, in order to address failure modes too rare to show up in standard training.

      Theoretical / conceptual AI safety labs:

      • The Alignment Research Center (ARC) is attempting to produce alignment strategies that could be adopted in industry today while also being able to scale to future systems. They focus on conceptual work, developing strategies that could work for alignment and which may be promising directions for empirical work, rather than doing empirical AI work themselves. Their first project was releasing a report on Eliciting Latent Knowledge, the problem of getting advanced AI systems to honestly tell you what they believe (or ‘believe’) about the world. On our podcast, we interviewed ARC founder Paul Christiano about his research (before he founded ARC).
      • The Center on Long-Term Risk works to address worst-case risks from advanced AI. They focus on conflict between AI systems.
      • The Machine Intelligence Research Institute was one of the first groups to become concerned about the risks from machine intelligence in the early 2000s, and its team has published a number of papers on safety issues and how to resolve them.
      • Some teams in commercial labs also do some more theoretical and conceptual work on alignment, such as Anthropic’s work on conditioning predictive models and the Causal Incentives Working Group at Google DeepMind.

      AI safety in academia (a very non-comprehensive list; while the number of academics explicitly and publicly focused on AI safety is small, it’s possible to do relevant work at a much wider set of places):

      Want one-on-one advice on pursuing this path?

      We think that the risks posed by the development of AI may be the most pressing problem the world currently faces. If you think you might be a good fit for any of the above career paths that contribute to solving this problem, we’d be especially excited to advise you on next steps, one-on-one.

      We can help you consider your options, make connections with others working on reducing risks from AI, and possibly even help you find jobs or funding opportunities — all for free.

      APPLY TO SPEAK WITH OUR TEAM

      Find a job in this path

      If you think you might be a good fit for this path and you’re ready to start looking at job opportunities that are currently accepting applications, see our curated list of opportunities for this path:

        View all opportunities

        Learn more about AI safety technical research

        Top recommendations

        Further recommendations

        Here are some suggestions about where you could learn more:

        Read next:  Learn about other high-impact careers

        Want to consider more paths? See our list of the highest-impact career paths according to our research.

        Plus, join our newsletter and we’ll mail you a free book

        Join our newsletter and we’ll send you a free copy of The Precipice — a book by philosopher Toby Ord about how to tackle the greatest threats facing humanity. T&Cs here.

        The post AI safety technical research appeared first on 80,000 Hours.

        Why we’re adding information security to our list of priority career paths https://80000hours.org/2023/04/why-were-adding-information-security-to-our-list-of-priority-career-paths/ Fri, 28 Apr 2023 16:26:47 +0000 https://80000hours.org/?p=81597 The post Why we’re adding information security to our list of priority career paths appeared first on 80,000 Hours.

        Information security could be a top option for people looking to have a high-impact career.

        This might be a surprising claim — information security is a relatively niche field, and it doesn’t typically appear on canonical lists of do-gooder careers.

        But we think there’s an unusually strong case that information security skills (which allow you to protect against unauthorised use, hacking, leaks, and tampering) will be key to addressing problems that are extremely important, neglected, and tractable. We now rank this career among the highest-impact paths we’ve researched.

        This blog post was first released to our newsletter subscribers.

        Join over 350,000 newsletter subscribers who get content like this in their inboxes weekly — and we’ll also send you a free ebook!

        In the introduction to our recently updated career review of information security, we discuss how poor information security decisions may have played a decisive role in the 2016 US presidential campaign. If an organisation is big and influential, it needs good information security to ensure that it functions as intended. This is true whether it’s a political campaign, a major corporation, a biolab, or an AI company.

        These last two cases could be quite important. We rank the risks from pandemic viruses and the chances of an AI-related catastrophe among the most pressing problems in the world — and information security is likely a key part of reducing these dangers.

        That’s because hackers and cyberattacks — from a range of actors with varying motives — could try to steal crucial information, such as instructions for making a super-virus or the details of an extremely powerful AI model.

        This means that even if the people developing advanced AI or biotechnology are sufficiently careful with how they use their inventions, those inventions may still fall into the hands of far more reckless or dangerous people, increasing the chances of misuse. It’s also plausible that poor information security could contribute to unhealthy technological race dynamics. Strong information security makes both of these outcomes less likely.

        So these careers have the potential to be really impactful. But we also think they’re underrated — which means that each new talented person to enter the field can potentially add a lot of value.

        And these skills are likely to be in demand for some time into the future. The US Bureau of Labor Statistics projects that employment of information security analysts will grow by 35% from 2021 to 2031 — much faster than the average across all occupations.

        We’ve also spoken to a lot of people in key organisations, particularly AI labs, who often say that these roles are very important but difficult to hire for.

        A lot of these organisations want to hire people who care about doing good and reducing the risk of catastrophe, so having more altruistically motivated people enter the field could be really beneficial.

        Another reason we feel comfortable recommending information security as a top path is that even if someone follows this advice but doesn’t find a job at a highly impactful organisation, they will have gained valuable skills to use on the job market. In other words, it has limited personal downsides as a career path.

        Of course — as we discuss on the site — there’s no career path that’s right for everyone. You need to at least have some technical knack to enter this field, and you shouldn’t enter if you wouldn’t be happy doing the work!

        For more details on what the career involves:

        And if it’s not a good fit for you, you could share it with a friend who might be well-suited to the work.

        Learn more:

        The post Why we’re adding information security to our list of priority career paths appeared first on 80,000 Hours.

        Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities? https://80000hours.org/articles/ai-capabilities/ Mon, 03 Oct 2022 15:49:01 +0000 https://80000hours.org/?post_type=article&p=79254 The post Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities? appeared first on 80,000 Hours.

        We’ve argued that preventing an AI-related catastrophe may be the world’s most pressing problem, and that while progress in AI over the next few decades could have enormous benefits, it could also pose severe, possibly existential risks. As a result, we think that working on some technical AI research — research related to AI safety — may be a particularly high-impact career path.

        But there are many ways of approaching this path that involve researching or otherwise advancing AI capabilities — meaning making AI systems better at some specific skills — rather than only doing things that are purely in the domain of safety. In short, this is because:

        • Capabilities work and some forms of safety work are intertwined.
        • Many available ways of learning enough about AI to contribute to safety are via capabilities-enhancing roles.

        So if you want to help prevent an AI-related catastrophe, should you be open to roles that also advance AI capabilities, or steer clear of them?

        We think this is a hard question! Capabilities-enhancing roles could be beneficial or harmful. For any role, there are a range of considerations — and reasonable people disagree on whether, and in what cases, the risks outweigh the benefits.

        So we asked the 22 people we thought would be most informed about this issue — and who we knew had a range of views — to write a summary of their takes on the question. We received 11 really interesting responses, and think that these are likely a reasonable representation of the range of views held by the broader set of people.

        We hope that these responses will help inform people making difficult decisions about working in roles that might advance AI capabilities. We also used these responses to help write our review of working at leading AI labs.

        If you can’t follow some of the below, don’t worry! Check out our problem profile on preventing an AI-related catastrophe for an introduction to the terms, concepts, and arguments referenced here.

        The following are written by people whose work we respect and who would like to remain anonymous. These quotes don’t represent the views of 80,000 Hours, and in some cases, individual pieces of advice may explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

        We’ve included the responses from these 11 experts in full. We’ve only made minor edits for clarity and ease of reading.

        Advice from 11 anonymous experts

        Expert 1: Right now I wish AI labs would slow down on the margin, but…

        Right now I wish AI labs would slow down on the margin, but I don’t think it’s obvious that capabilities work is net negative and reasonable people can disagree on this point. What I say below would probably change if I were highly confident it’s extremely bad to advance capabilities (especially if I believed that advancing capabilities by a month is much more bad than advancing alignment research by a month is good). With that said:

        • AI labs are a great place to pick up good skills, especially for people doing technical roles (ML engineer, ML researcher) at those labs. If you’re early career and can get into one and think it’d be more interesting and a better personal fit for you than other roles you’re considering, you should probably go for it — on a pretty wide distribution of views (including those that think capabilities enhancement is pretty net negative), the investment in your human capital probably creates more good than your contribution to capabilities (at a junior level) creates harm.
        • It’s worth specifically asking about working on the safety teams of capabilities organizations and not assuming you have to choose between a pure capabilities role and no role; this won’t always work but I’d expect a reasonable fraction of the time you can tilt your role toward more safety projects (and this will probably — though not always — make your on-the-job learning a little more relevant/useful to future safety projects you might do).
        • There are some roles (usually more senior ones) that seem super leveraged for increasing overall capabilities and don’t really teach you skills that are highly transferable to safety-only projects — for example, fundraising for an AI lab or doing comms for an AI lab that involves creating hype around their capabilities results. These seem likely to be bad unless you really back the lab and feel aligned with it on views about safety and values.
        • You should always assume that you are psychologically affected by the environment you work in. In my experience, people who work at capabilities labs tend to systematically have or develop views that AI alignment is going to be fairly easy and I’d guess that this is in significant part due to motivated reasoning and social conformity effects. I think I’d be most excited by an AI alignment researcher who spends some time at AI labs and some time outside those environments (either at a safety-only company like ARC or Redwood, a safety-focused academic group, or doing independent research). It seems like you get important perspective from both environments, and it’s worth fighting the inertia of staying in a capabilities role that’s comfortable, continually getting promoted, and never quite getting around to working on the most core safety stuff.

        Expert 2: There are some portions of AGI safety that are closely tied to capabilities…

        There are some portions of AGI safety that are closely tied to capabilities, in particular amplification and variants, and I think these are worth pursuing on net (though without high confidence). That is, a common message expressed these days is that EA folk should only work on purer areas of safety, but I think that’s suboptimal given that a complete solution involves the capability-mixed areas as well.

        Expert 3: If humanity gets artificial general intelligence well before it knows how to aim it…

        If humanity gets artificial general intelligence well before it knows how to aim it, then humanity is likely to kill itself with it, because humanity can’t coordinate well enough to prevent the most-erroneously-optimistic people from unleashing a non-friendly superintelligence before anyone knows how to build a friendly one. As such, in the current environment — where capabilities are outpacing alignment — the first-order effect of working on AI capabilities is to hasten the destruction of everything (or, well, of the light-cone originating at Earth in the near future). This first-order effect seems to me to completely dominate various positive second-order effects (such as having more insight into the current state of capabilities research, and being able to culturally influence AI capabilities researchers). (There are also negative second-order effects to working on capabilities in spite of the negative first-order effects, like how it undermines the positive cultural effect that would come from all the conscientious people refusing to work on capabilities in lieu of a better alignment story.)

        That said, the case isn’t quite so cut-and-dry. Sometimes, research in pursuit of alignment naturally advances capabilities. And many people can’t be persuaded off of capabilities research regardless of the state of alignment research. As such, I’ll add that prosocial capabilities research is possible, so long as it is done strictly in private, among a team of researchers that understands what sort of dangers they’re toying with, and which is well capable of refraining from deploying unsafe systems. (Note that if there are multiple teams that believe they have this property, the forward light-cone gets destroyed by the most erroneously optimistic among them. The team needs to not only be trying to align their AIs, but capable of noticing when the deployed system would not be friendly; the latter is a much more difficult task.) By default, the teams that claim they will be private when it matters will happily publish a trail of breadcrumbs that lead anyone who’s paying attention straight to their capabilities insights (justified, perhaps, by arguments like “if we don’t publish then we won’t be able to hire the best people”), and will change over to privacy only when it’s already too late. But if you find a team that’s heavily focused on alignment, and that’s already refusing to publish, that’s somewhat better, in my estimation.

        My own guess is still that the people saying “we can’t do real alignment work until we have more capabilities” are, while not entirely wrong, burning through a scarce and necessary resource. Namely: yes, there is alignment work that will become much easier once we have real AGIs on our hands. But there are also predictable hurdles that will remain even then, that require serial time to solve. If humanity can last 5 years after inventing AGI before someone destroys the universe, and there’s a problem that takes 20 years to solve without an AGI in front of you and 10 years to solve with an AGI in front of you, then we’d better get started on that now, and speeding up capabilities isn’t helping. So even if the team is very private, my guess is that capabilities advancements are burning time that we need. The only place where it’s clearly good, in my book, to advance capabilities, is when those capabilities advances follow necessarily from advancing our knowledge of AI alignment in the most serially-bottlenecked ways, and where those advancements are kept private. But few can tell where the serial bottlenecks are, and so I think a good rule of thumb is: don’t advance capabilities; and if you have to, make sure it’s done in private.

        Expert 4: There are lots of considerations in both directions of comparable magnitudes…

        There are lots of considerations in both directions of comparable magnitudes (according to me), so I think you shouldn’t be confident in any particular answer. To name a few: (1) more aligned people gaining relevant skills who may later work directly on reducing x-risk increases the expected quality of x-risk reduction work (good) (2) shorter timelines mean less time to work on alignment and governance (bad), (3) shorter timelines mean fewer actors building AGI during crunch time (good), (4) more aligned people in relevant organizations can help build the political will in those organizations to address safety issues (good), (5) shorter timelines mean less time for aligned people to “climb the ladder” to positions of influence in relevant organizations (bad), (6) shorter timelines mean less time for geopolitics to change (sign unclear).

        My main piece of advice is not to underestimate the possibility of value drift. I would not be surprised to hear a story of someone who went to OpenAI or DeepMind to skill up in ML capabilities, built up a friend group of AI researchers, developed an appreciation of the AI systems we can build, and ultimately ended up professing agreement with some or the other reason to think AI risk is overblown, without ever encountering an argument for that conclusion that they would endorse from their starting point. If you are going to work in a capabilities-enhancing role, I want you to ensure that your social life continues to have “AI x-risk worriers” in it, and to continue reading ongoing work on AI x-risk.

        If I had to recommend a decision without knowing anything else about you, I’d guess that I’d be (a) in favor of a capabilities-enhancing role for skilling up if you took the precautions above (and against if you don’t), (b) in favor of a capabilities-enhancing role where you will lobby for work on AI x-risk if it is a very senior role (and against if it is junior).

        Expert 5: There isn’t at present any plan for not having AGI destroy the world…

        There isn’t at present any plan for not having AGI destroy the world. It’s been justly and validly compared to “Don’t Look Up,” but there’s companies pulling in the asteroid in the hope they can turn a profit, except they don’t even really have a plan for not everyone dying. Under those circumstances, I don’t think it’s acceptable — consequentialistically, deontologically, or as a human being — to “burn the capabilities commons” by publishing capabilities advances, opening models, opening source code, calling attention to techniques that you used to make closed capabilities advances, showing off really exciting capabilities that get other people excited and entering the field, or visibly making it look like AI companies are going to be super profitable and fundable and everyone else should start one too.

        There’s job openings in AI all over, unfortunately. Take a job with a company that isn’t going to push the edge of capabilities, isn’t going to open-source anything, isn’t going to excite more entrants to the ML field; ideally, get a clear statement from them that their research will always be closed, and make sure you won’t be working with other researchers that are going to be sad about not being able to publish exciting papers for great prestige.

        Closed is cooperating. Open is defecting. Make sure your work isn’t contributing to humanity’s destruction of humanity, or don’t work.

        Try not to fall for paper-thin excuses about far-flung dreams of alignment relevance either.

        Expert 6: I think that the simple take of “capabilities work brings AGI closer which is bad because of AI x-risk” is probably…

        I think that the simple take of “capabilities work brings AGI closer which is bad because of AI x-risk” is probably directionally correct on average, but such a vast oversimplification that it’s barely useful as a heuristic.

        There are many different ways in which capabilities work can have both positive and negative effects, and these can vary a lot depending both on what the work is and how it is used and disclosed. Here are some questions I would want to consider when trying to judge the net effect of capabilities work:

        • What is the direct effect on AGI timelines? Capabilities work that directly chips away at plausible bottlenecks for AGI (I’ll call such work “AGI-bottleneck” work) is likely to make AGI arrive sooner. The biggest category I see here is work that improves the efficiency of training large models that have world understanding, whether via architectural improvements, optimizer improvements, improved reduced-precision training, improved hardware, etc. Some work of this kind may have less of a counterfactual impact: for example, the work may be hard to build upon because it is idiosyncratic to today’s hardware, software or models, or it may be very similar to work being done by others anyway.
        • What is the effect on acceleration? Capabilities work can have an indirect effect on AGI timelines by encouraging others to either (a) invest more in AGI-bottleneck capabilities work, or (b) spend more on training large models, leading to an accelerated spending timeline that eventually results in AGI. At the same time, some capabilities work might encourage others to work on alignment, perhaps depending on how it is presented.
        • What is the effect on takeoff speeds? Spending more on training large models now could lead to a slower rate of growth of spending around the time of AGI, by reducing the “spending overhang”. This could improve outcomes by giving the world longer with near-AGI models, having which would increase the attention on AI alignment, make it more empirically tractable, and make it easier for institutions to adapt. Of course, spending more on training large models likely involves some AGI-bottleneck capabilities work, and the benefits are limited by the fact that not all alignment research requires the most capable models and that the AI alignment community is growing at least somewhat independently of capability advancements.
        • What is the effect on misalignment risk? Some capabilities work can make models more useful without increasing misalignment risk. Indeed, aligning large language models makes them more useful (and so can be considered capabilities work), but doesn’t give the base model a non-trivially better understanding of the world, which is generally seen as a key driver of misalignment risk. This kind of work should directly reduce misalignment risk by improving our ability to achieve things (including outcompeting misaligned AI, conducting further alignment research, and implementing other mitigations) before and during the period of highest risk. It’s also worth considering the effects on other risks such as misuse risk, though they are generally considered less existentially severe.
        • What is the effect on alignment research? Some capabilities work would enable new alignment work to be done, including work on outer alignment schemes that involve AI-assisted evaluation such as debate, and empirical study of inner misalignment (though it’s hotly debated how far in advance of AGI we should expect the latter to be possible). Other capabilities work may enable models to assist or conduct alignment research. In fact, a lot of AGI-bottleneck work may fall into this category. Of course, a lot of current alignment research isn’t especially bottlenecked on model capabilities, including theoretical work and interpretability.
        • How will the work be used and disclosed? The potential downsides of capabilities work can often be mitigated, perhaps entirely, by using or disclosing the work in a certain way or not at all. However, such mitigations can be brittle, and they can also reduce the alignment upsides.

        Overall, I don’t think whether a project can be labeled “capabilities” at a glance tells you much about whether it is good or bad. I do think that publicly disclosed AGI-bottleneck work is probably net harmful, but not obviously so. Since this view is so sensitive to difficult judgment calls, e.g. about the relative value of empirical versus theoretical alignment work, my overall advice would be to be somewhat cautious about such work:

        • Avoid AGI-bottleneck work that doesn’t have either a clear alignment upside or careful mitigation, even if there is a learning or career upside. Note that I wouldn’t consider most academic ML work AGI-bottleneck work, since it’s not focused on things like improving training efficiency of large models that have world understanding.
        • For AGI-related work that targets alignment but also impacts AGI bottlenecks, it’s worth discussing the project with people in advance to check that it is worthwhile overall. I’d expect the correct outcome of most such discussions to be to go ahead with the project, simply because the effect on a single project that is not optimizing for something is likely very small compared to a large number of projects that are optimizing for that thing. But the stakes are high enough that it is worth going through the object-level considerations.
        • Work that is only tangentially AGI-related, such as an ML theory project or applying ML to some real-world problem, deserves less scrutiny from an AGI perspective, even if it can be labeled “capabilities”. The effect of such a project is probably dominated by its impact on the real-world problem and on your learning, career, etc.
        • Students: don’t sweat it. The vast majority of student projects don’t end up mattering very much, so you should probably choose a project you’ll learn the most from (though of course you’re more likely to learn about alignment if the project is alignment-related).

        Expert 7: Timelines are short, we are racing to the precipice…

        Timelines are short, we are racing to the precipice, and some capabilities-advancing research is worth it if it brings a big payoff in some other way, but you should by default be skeptical.

        Expert 8: Overall, I think there is a lot of value for people who are concerned about AI extreme/existential risks to…

        Overall, I think there is a lot of value for people who are concerned about AI extreme/existential risks to work in areas that may look like they primarily advance AI capabilities, if there are other good reasons for them to go in that direction. This is because: (1) the distinction between capabilities and safety is so fuzzy as to often not be useful; (2) I anticipate safety-relevant insight to come from areas that today might be coded as capabilities, and so recommend a much more diversified portfolio for where AI risk-concerned individuals skill-up; (3) there are significant other benefits from having AI risk-aware individuals prominent throughout AI organizations and the fields of ML; (4) capabilities work is and will be highly incentivized far in excess of the marginal boost from 80k individuals, in my view.

        1. The distinction between capabilities and safety is one that makes sense in the abstract, and is worth attending to. Labs that try to differentially work on and publish safety work, over capabilities work, should be commended. Philanthropic funders and other big actors should be thoughtful about how their investments might differentially boost safety/alignment, relative to capabilities, or not. That being said, in my view when one takes a sophisticated assessment, in practice it is very hard to clearly draw a distinction between safety and capabilities, and so often this distinction shouldn’t be used to guide action. Interpretability, robustness, alignment of near-term models, and out-of-sample generalization are each important areas which plausibly advance safety as well as capabilities. There are many circumstances where even a pure gain in safety can be endangering, such as if it hides evidence of later alignment risk, or incentivizes actors to deploy models that were otherwise too risky to deploy.

        2. In my judgment, the field of AI/AGI safety has gone through a process of expansion to include ever more approaches which were previously seen as too far away from the most extreme risks. Mechanistic interpretability or aligning existing deep learning models are today regarded by many as a valuable long-term safety bet, whereas several years ago were much more on the periphery of consideration. I expect in the future we will come to believe that expertise in areas that may look today like capabilities (robustness, out of sample generalization, security, other forms of interpretability, modularity, human-AI interaction, continuous learning, intrinsic motivations) are a critical component of our AGI safety portfolio. At the least it can be useful to have more “post-doctoral” level work throughout the space of ML, to then bring insights and skills into the most valuable bets in AI safety.

        3. Developing a career in other areas that may code as “capabilities” could lead to individuals being prominent in various fields of machine learning, and to having important and influential roles within AI organizations. I believe there is a lot of value to having the community of those concerned about AI risks to have broad understanding of the field of ML and of AI organizations, and broad ability to shape norms. In my view, much of the benefit of “AI safety researchers” does not come from the work they do, but their normative and organizational influence within ML science and the organizations where they work. I expect critical safety insights will have to be absorbed and implemented by “non-safety” fields, and so it is valuable to have safety-aware individuals in those fields. Given this view, it makes sense to diversify the specialisms which AI risk concerned individuals pursue, and to upweight those directions which are especially exciting scientifically or valuable to the AI organizations likely to build powerful AI systems.

        4. Capabilities work is already highly incentivized to the tune of billions of dollars and will become more so in the future, so I don’t think on the margin AI risk motivated individuals working in these spaces would boost capabilities much. To try to quantify things, there were around 6,000 authors attending NeurIPS in 2021. Increasing that number by 1 represents an increase of 1/6,000. By contrast, I think the above benefits to safety of having an individual learn from other fields, potentially be a leader of a new critical area in AI safety, and otherwise be in a potentially better position to shape norms and organizational decisions, are likely to be much larger. (A relevant belief in my thinking is that I don’t believe shortening timelines today costs us that much safety, relative to getting us in a better position closer to the critical period.) Note that this last argument doesn’t apply to big actors, like significant labs or funders.

        Expert 9: At the current speed of progress in AI capabilities compared to our advances on alignment…

        At the current speed of progress in AI capabilities compared to our advances on alignment, it’s unlikely that alignment will be solved on time before the first AGI is deployed. If you believe that alignment is unlikely by default, this is a pretty bad state of affairs.

        Given the current situation, any marginal slowdown of capabilities advancements and any marginal acceleration of work on alignment is important if we hope to solve the problem on time.

        For this reason, individuals concerned about AI safety should be very careful before deciding to work on capabilities, and should strongly consider working on alignment and AI safety directly whenever possible. This is especially the case as the AI field is small and has an extreme concentration of talent: top ML researchers and engineers single handedly contribute to large amounts of total progress.

        Therefore, it’s particularly important for very talented people to choose wisely on what they work: each talented individual choosing to work on AI safety over capabilities has double the amount of impact, simultaneously buying more time before AGI while also speeding up alignment work.

        A crucial thing to consider is not only the organization, but the team and type of work. Some organizations that are often criticized for their work on capabilities in the community have teams that genuinely care about alignment, and work there is probably helpful. Conversely, some organizations that are very vocal about their focus on safety have large teams focusing on accelerating capabilities, privately or publicly, and work in those teams is probably harmful.

        The relationship between capabilities and valuable alignment work is not binary, and much of the most promising alignment work also has capabilities implications, but the reverse is rarely true, and only accidentally so.

        Some organizations and individuals, including some closely affiliated to EA, take the view that speeding up progress now is fine as the alignment problem is primarily an engineering, empirical problem, and more advanced models would allow us to do better empirical research on how to control AGI.

        Another common view is that speeding up progress for “friendly” actors, such as those that claim to care more about safety, have ties to EA, and are located in non-authoritarian countries, is necessary, as we would rather have the most safety-minded actors get to AGI first.

        Those acting upon these views are being extremely irresponsible, and individuals looking to work on AI should be defiant of these arguments as an excuse to accelerate capabilities.

        Expert 10: I think the EA community seems to generally overfocus on the badness of speeding capabilities…

        I think the EA community seems to generally overfocus on the badness of speeding capabilities. I do think speeding capabilities is bad (all else equal), but the marginal impact of an engineer or researcher is usually small, and it doesn’t seem hard to outweigh it with benefits including empowering an organization to do better safety research, be more influential, etc.; gaining career capital of all kinds for yourself (understanding of AI, connections, accomplishments, etc.)

        However, if you are in this category, I would make an extra effort to:

        • Be an employee who pays attention to the actions of the company you’re working for, asks that people help you understand the thinking behind them, and speaks up when you’re unhappy or uncomfortable. I think you should spend 95%+ of your work time focused on doing your job well, and criticism is far more powerful coming from a high performer (if you’re not performing well I would focus exclusively on that, and/or leave, rather than spend time/energy debating company strategy and decision making). But I think the remaining 5% can be important — employees are part of the “conscience” of an organization.
        • Avoid being in a financial or psychological situation where it’s overly hard for you to switch jobs into something more exclusively focused on doing good; constantly ask yourself whether you’d be able to make that switch, and whether you’re making decisions that could make it harder to do so in the future.

        Expert 11: In my expectation, the primary human-affectable determinant of existential risk from AI is…

        In my expectation, the primary human-affectable determinant of existential risk from AI is the degree to which the first 2-5 major AI labs to develop transformative AI will be able to interact with each other and the public in a good-faith manner, enough to agree on and enforce norms preventing the formation of many smaller AI labs that might do something foolish or rash with their tech (including foolish or rash attempts to reduce x-risk). For me, this leads to the following three points:

        1. Working on AI capabilities for a large lab, in a way that fosters good-faith relationships with that lab and other labs and the public, is probably a net positive in my opinion. Caveat: If you happen to have a stroke of genius insight that enables the development of AGI 6 months sooner than it otherwise would have been developed, then it’s probably a net negative to reveal that insight to your employer, but also you’d have some discretion in deciding whether to reveal it, such that you having an AI-capabilities-oriented job at a major (top 5) lab is probably worth it if you’re able to contribute significantly to good-faith relations between that lab and other labs and the public. I’d count a ‘significant positive contribution’ as something like “without deception, causing Major Lab A to lower by 1% its subjective probability that Major Lab B will defect against Major Lab A if Major Lab B gets transformative AI first.” Let’s call that a “1% good-faith-contribution”. I think a 0.1% good-faith-contribution might be too small to justify working on capabilities, and a 10% good-faith-contribution is more than enough.

        2. If you feel your ability to model social situations is not adequate to determine whether you are making a significant positive contribution to the good-faith-ness of the relationships between major (top 5) AI labs and the public, my suggestion is that you should probably just not try to work on AI capabilities research in any setting, because you will not be well positioned to judge whether the lab where you work is developing the capabilities in a way that increases good faith amongst and around AI labs.

        3. Work on rudimentary AI capabilities for a small lab is probably fine as long as you’re not pushing forward the state of the art, and as long as you’re not participating in large defections against major labs who are trying to prevent the spread of societally harmful tech. For instance, I think you should not attempt to reproduce GPT-4 and release or deploy it in ways that circumvent all the hard work the OpenAI team will have done to ensure their version of the model is being used ethically.

        Speak to our team one-on-one

        If you’re considering taking a role that might advance AI capabilities or are generally thinking through this question in relation to your own career, our advising team might be able to give you personal advice. (It’s free.) We’re excited about supporting anyone who wants to make reducing existential risks posed by AI a focus of their career. Our team can help you compare your options, make connections with others working on this issue, and possibly even help you find jobs or funding opportunities.

        SPEAK WITH OUR TEAM

        Learn more

        If you want to learn much more about risks from AI, here are a few general sources (rather than specific articles) that you might want to explore:

        • The AI Alignment Forum, which is aimed at researchers working in technical AI safety.
        • AI Impacts, a project that aims to improve society’s understanding of the likely impacts of human-level artificial intelligence.
        • The Alignment Newsletter, a weekly publication with thousands of subscribers that covers recent content relevant to AI alignment.
        • Import AI, a weekly newsletter about artificial intelligence by Jack Clark (cofounder of Anthropic), read by more than 10,000 experts.

        The post Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities? appeared first on 80,000 Hours.

        Data collection for AI alignment https://80000hours.org/career-reviews/alignment-data-expert/ Wed, 11 May 2022 21:52:52 +0000 https://80000hours.org/?post_type=career_profile&p=77035 The post Data collection for AI alignment appeared first on 80,000 Hours.


        In a nutshell:

        To reduce the risks posed by the rise of artificial intelligence, we need to figure out how to make sure that powerful AI systems do what we want. Many potential solutions to this problem will require a lot of high-quality data from humans to train machine learning models. Building excellent pipelines so that this data can be collected more easily could be an important way to support technical research into AI alignment, as well as lay the foundation for actually building aligned AIs in the future. If not handled correctly, this work risks making things worse, so this path needs people who can and will change directions if needed.

        Sometimes recommended — personal fit dependent

        This career will be some people's highest-impact option if their personal fit is especially good.

        Review status

        Based on a shallow investigation 

        Why might becoming an expert in data collection for AI alignment be high impact?

        We think it’s crucial that we work to positively shape the development of AI, including through technical research on how to ensure that any potentially transformative AI we develop does what we want it to do (known as the alignment problem). If we don’t find ways to align AI with our values and goals — or worse, don’t find ways to prevent AI from actively harming us or otherwise working against our values — the development of AI could pose an existential threat to humanity.

        There are lots of different proposals for building aligned AI, and it’s unclear which (if any) of these approaches will work. A sizeable subset of these approaches requires humans to give data to machine learning models, including AI safety via debate, microscope AI, and iterated amplification.

        These proposals involve collecting human data on tasks like:

        • Evaluating whether a critique of an argument was good
        • Breaking a difficult question into easier subquestions
        • Examining the outputs of tools that interpret deep neural networks
        • Using one model as a tool to make a judgement on how good or bad the outputs of another model are
        • Finding ways to make models behave badly (e.g. generating adversarial examples by hand)

        Collecting this data — ideally by setting up scalable systems both to contract people to carry out these sorts of tasks and to collect and communicate the results — could be a valuable way to support alignment researchers who use it in their experiments.

        But also, once we have good alignment techniques, we may need AI companies around the world to have the capacity to implement them. That means developing systems and pipelines for the collection of this data now could make it easier to implement alignment solutions that require this data in the future. And if it’s easier, it’s more likely to actually happen.

        What does this path involve?

        Human data collection mostly involves hiring contractors to answer relevant questions and then creating well-designed systems to collect high-quality data from them.

        This includes:

        • Figuring out who will be good at actually generating this data (i.e. doing the sorts of tasks that we listed earlier, like evaluating arguments), as well as how to find and hire these people
        • Designing training materials, processes, pay levels, and incentivisation structures for contractors
        • Ensuring good communication between researchers and contractors, for example by translating researcher needs into clear instructions for contractors (as well as being able to predict and prevent people misinterpreting these instructions)
        • Designing user interfaces to make it easy for contractors to complete their tasks as well as for alignment researchers to design and update tasks for contractors to carry out
        • Scheduling workloads among contractors, for example making sure that when data needs to be moved in sequence among contractors, the entire data collection can happen reasonably quickly
        • Assessing data quality, including developing ways of rapidly detecting problems with your data or using hierarchical schemes of more and less trusted contractors (see the sketch after this list)
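
        As one concrete illustration of that last point, here's a minimal sketch (in Python) of a hierarchical quality check: a trusted reviewer labels a sample of tasks that overlap with contractors' work, and contractors whose agreement with the reviewer drops below a threshold get flagged for review. The data format, the "trusted_reviewer" identifier, and the 80% threshold are all illustrative assumptions rather than a description of any real pipeline.

        # Hypothetical quality check: compare each contractor's labels against a
        # trusted reviewer's labels on overlapping tasks.
        from collections import defaultdict
        labels = [
            # (task_id, worker_id, label) — in practice, loaded from your data store
            ("t1", "contractor_a", "good_critique"), ("t1", "trusted_reviewer", "good_critique"),
            ("t2", "contractor_a", "bad_critique"), ("t2", "trusted_reviewer", "good_critique"),
            ("t3", "contractor_b", "good_critique"), ("t3", "trusted_reviewer", "good_critique"),
        ]
        # Group labels by task so each contractor can be compared with the reviewer.
        by_task = defaultdict(dict)
        for task_id, worker, label in labels:
            by_task[task_id][worker] = label
        agreement = defaultdict(lambda: [0, 0])  # worker -> [matches, comparisons]
        for task_labels in by_task.values():
            reference = task_labels.get("trusted_reviewer")
            if reference is None:
                continue  # no trusted label for this task, so nothing to compare against
            for worker, label in task_labels.items():
                if worker != "trusted_reviewer":
                    agreement[worker][0] += int(label == reference)
                    agreement[worker][1] += 1
        # Flag contractors whose agreement rate falls below an (arbitrary) 80% threshold.
        for worker, (matches, total) in agreement.items():
            rate = matches / total
            print(f"{worker}: {matches}/{total} agreement ({rate:.0%}) -> {'ok' if rate >= 0.8 else 'review'}")

        Real pipelines typically track agreement over time, weight tasks by difficulty, and use chance-corrected statistics such as Cohen's kappa rather than raw agreement — but the basic structure of overlap, compare, and flag tends to look like this.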

        Being able to do all these things well is a rare and distinctive skill set (similar to entrepreneurship or operations), so if you’re a good fit for this type of work, it could be the most impactful thing you could do.

        Avoiding harm

        If you follow this path, it’s particularly important to make sure that you are able to exercise excellent judgement about when not to provide these services.

        We think it’s extremely difficult to make accurate calls about when research into AI capabilities could be harmful.

        For example, it sounds pretty likely to us that work that helps make current AI systems safe and useful will be fairly different from work that is useful for making transformative AI (when we’re able to build it) safe and useful. You’ll need to be able to make judgements about whether the work you are doing is good for this future task.

        We’ve written an article about whether working at a leading AI lab might cause harm, and how to avoid it.

        If you think you might be a good fit for this career path, but aren’t sure how to avoid doing harm, our advising team may be able to help you decide what to do.

        Example people

        How to predict your fit in advance

        The best experts at human data collection will have:

        • Experience designing surveys and social science experiments
        • Ability to analyse the data collected from experiments
        • Some familiarity with the field of AI alignment
        • Enough knowledge about machine learning to understand what sorts of data are useful to collect and the machine learning research process
        • At least some front-end software engineering knowledge
        • Some aptitude for entrepreneurship or operations

        Data collection is often considered somewhat less glamorous than research, making it especially hard to find good people. So if you have three or more of these skills, you’re likely a better candidate than most!

        How to enter

        If you already have experience in this area, there are two main ways you might get a job as a human data expert:

        If you don’t have enough experience to work directly on this now, you can gain experience in a few ways:

        • Do academic research, for example in psychology, sociology, economics, or another social science.
        • Work in human-computer interaction or software crowdsourcing.
        • Work for machine learning companies in labelling teams — and because these roles are less popular, they can be a great way to rapidly gain experience and promotions in machine learning organisations.

        The Effective Altruism Long-Term Future Fund and the Survival and Flourishing Fund may provide funding for promising individuals to learn skills relevant to helping future generations — including human data collection. As a way of learning the necessary skills (and directly helping at the same time), you could apply for a grant to build a dataset that you think could be useful for AI alignment. The Machine Intelligence Research Institute has put up a bounty for such a dataset.
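
        If you do apply for a grant to build a dataset, most of the work is in designing the collection process rather than in code, but the end product is often something as simple as a file of structured examples. Here's a minimal, hypothetical sketch (in Python) of writing human-labelled examples to a JSONL file — a common interchange format for ML datasets. The field names and contents are purely illustrative, not a required schema.

        # Hypothetical example: saving human-labelled examples as JSONL,
        # one JSON object per line. Field names and contents are illustrative only.
        import json
        examples = [
            {"task": "evaluate_critique", "input": "Critique: the argument assumes correlation implies causation.",
             "label": "valid_critique", "annotator": "contractor_a"},
            {"task": "decompose_question", "input": "Will this policy reduce emissions by 2030?",
             "label": "What does the policy mandate? | What are current emission trends?", "annotator": "contractor_b"},
        ]
        with open("alignment_dataset.jsonl", "w") as f:
            for example in examples:
                f.write(json.dumps(example) + "\n")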

        Find a job in this path

        If you think you might be a good fit for this path and you’re ready to start looking at job opportunities, you may find relevant roles on our job board:

          View all opportunities

          Want one-on-one advice on pursuing this path?

          If you think this path might be a great option for you, but you need help deciding or thinking about what to do next, our team might be able to help.

          We can help you compare options, make connections, and possibly even help you find jobs or funding opportunities.

          APPLY TO SPEAK WITH OUR TEAM

          Learn more about data collection for AI alignment

          Read next:  Learn about other high-impact careers

          Want to consider more paths? See our list of the highest-impact career paths according to our research.

          Plus, join our newsletter and we’ll mail you a free book

          Join our newsletter and we’ll send you a free copy of The Precipice — a book by philosopher Toby Ord about how to tackle the greatest threats facing humanity. T&Cs here.

          Software engineering https://80000hours.org/career-reviews/software-engineering/ Fri, 04 Feb 2022 09:00:51 +0000 https://80000hours.org/?post_type=career_profile&p=75831 The post Software engineering appeared first on 80,000 Hours.

          On December 31, 2021, the most valuable company on Earth was Apple, worth around $3 trillion. After that came Microsoft, at $2.5 trillion, then Google (officially Alphabet) at $1.9 trillion, then Amazon at $1.5 trillion.

          On December 31, 2020, the four most valuable companies were: Apple, Microsoft, Amazon, and Google.

          On December 31, 2019, the four most valuable companies were: Apple, Microsoft, Google, and Amazon.

          And on December 31, 2018, the four most valuable companies were: Microsoft, Apple, Amazon, and Google.

          If you’re anything like me, you’re starting to spot a pattern here.

          Revenue in software has grown from $400 billion in 2016 to $500 billion in 2021, and is projected to reach $800 billion by 2026.

          Software has an increasing and overwhelming importance in our economy — and everything else in our society. High demand and low supply makes software engineering well-paid, and often enjoyable.

          But we also think that, if you’re trying to make the world a better place, software engineering could be a particularly good way to help.

          In a nutshell:

          Software engineering could be a great option for having a direct impact on the world’s most pressing problems. If you have good analytical skills (even if you have a humanities background), you might consider testing it. Basic programming skills can be easy to learn and extremely useful even if you decide not to go into software engineering, which means trying this out could be particularly low cost.

          Pros

          • Gain a flexible skill set.
          • Make a significant direct impact, either by working on AI safety, or in otherwise particularly effective organisations.
          • Have excellent working conditions, high pay, and good job security.

          Cons

          • Late-stage earnings are often lower than in many other professional jobs (especially high-paying roles such as quantitative trading), unless you help found a successful startup.
          • Likely only a small proportion of exceptional programmers will have a highly significant impact.
          • Initially, it could be relatively challenging to gain skills quickly compared to some other jobs, as you need a particular concrete skill set.

          Key facts on fit

          Willingness to teach yourself, ability to break problems down into logical parts and generate and test hypotheses, willingness to try out many different solutions, high attention to detail, quantitative degree useful but not required.

          Sometimes recommended — personal fit dependent

          This career will be some people's highest-impact option if their personal fit is especially good.

          Review status

          Based on an in-depth investigation 

          This review owes a lot to helpful discussions with (and comments from) Andy Jones, Ozzie Gooen, Jeff Kaufman, Sasha Cooper, Ben Kuhn, Nova DasSarma, Kamal Ndousse, Ethan Alley, Ben West, Ben Mann, Tom Conerly, Zac Hatfield-Dodds, and George McGowan. Special thanks go to Roman Duda for our previous review of software engineering, on which this was based.

          Why might software engineering be high impact?

          Software engineers are in a position to meaningfully contribute directly to solving a wide variety of the world’s most pressing problems.

          In particular, there is a shortage of software engineers at the cutting edge of research into AI safety.

          We’ve also found that software engineers can contribute greatly to work aiming at preventing pandemics and other global catastrophic biological risks.

          Aside from direct work on these crucial problems, while working for startups or larger tech companies you can gain excellent career capital (especially technical skills), and, if you choose, earn and donate substantial amounts to the world’s best charities.

          How to do good as a software engineer

          Even for skilled engineers who could command high salaries, we think that working directly on a problem will probably be more impactful than earning to give.

          Some examples of projects where software engineering is central to their impactful work:

          Most organisations, even ones that don’t focus on developing large software products, need software engineers to manage computer systems, apps, and websites. For example:

          Many people we’ve spoken to at these and other organisations have said that they have real difficulty hiring extremely talented software engineers. Many nonprofits want to hire people who believe in their missions (just as they do with operations staff), which indicates that talented, altruistic-minded software engineers are sorely needed and could do huge amounts of good.

          Smaller organisations that don’t focus on engineering often only have one or two software engineers. And because things at small organisations can change rapidly, they need unusually adaptable and flexible people who are able to maintain software with very little help from the wider team.1

          It seems likely that, as the community of people working on helping future generations grows, there will be more opportunities for practical software development efforts to help. This means that even if you don’t currently have any experience with programming, it could be valuable to begin developing expertise in software engineering now.

          Software engineers can help with AI safety

          We’ve argued before that artificial intelligence could have a deeply transformative impact on our society. There are huge opportunities associated with this ongoing transformation, but also extreme risks — potentially even threatening humanity’s survival.

          With the rise of machine learning, and the huge success of deep learning models like GPT-3, many experts now think it’s reasonably likely that our current machine learning methods could be used to create transformative artificial intelligence.

          This has led to an explosion in empirical AI safety research, where teams work directly with deep neural networks to identify risks and develop frameworks for mitigating them. Examples of organisations working in empirical AI safety research include Redwood Research, DeepMind, OpenAI, and Anthropic.

          These organisations are doing research directly with extremely large neural networks, so each experiment can cost millions of dollars to run. As a result, even small improvements to the efficiency of each experiment can be hugely beneficial.

          There’s also often overlap between experimental results that will help further AI safety and results that could accelerate the development of unsafe AI, so it’s also important that the results of these experiments are kept secure.

          As a result, it’s likely to remain incredibly valuable to have talented engineers working on ensuring that these experiments are as efficient and safe as possible. Experts we spoke to expect this to remain a key bottleneck in AI safety research for many years.

          However, there is a serious risk associated with this route: it seems possible for engineers to accidentally increase risks from AI by generally accelerating the technical development of the field. We’re not sure of the more precise contours of this risk (e.g. exactly what kinds of projects you should avoid), but think it’s important to watch out for. That said, there are many more junior non-safety roles out there than roles focused specifically on safety, and experts we’ve spoken to expect that most non-safety projects aren’t likely to be causing harm. If you’re uncertain about taking a job for this reason, our team may be able to help you decide.

          Software engineer salaries mean you can earn to give

          In general, if you can find a job you can do well, you’ll have a bigger impact working on a problem directly than you would by earning money and donating. However, earning to give can still be a high-impact option, especially if you focus on donating to the most effective projects that could use the extra funds.

          If you’re skilled enough to work at top companies, software engineering is a well-paid career. In the US, entry-level software engineer salaries start at around $110,000. Engineers at Microsoft start at $150,000, and engineers at Google start at around $180,000 (including stock and bonuses). If you’re successful, after a few years on the job you could be earning over $500,000 a year.

          Pay is generally much lower in other countries. Median salaries in Australia are around 20% lower than salaries in the US (approximately US$80,000), and around 40% lower in the UK, Germany, Canada, and Japan (approximately US$60,000). While much of your earnings as a software engineer come from bonuses and equity, rather than just your salary, these are also lower outside the US.

          If you do want to make a positive difference through donating part of your income as a software engineer, you may be able to increase your impact by using donation-matching programmes, which are common at large tech companies (although these are often capped at around US$10,000 per year).

          You can read more about salaries at large tech companies below.

          It’s important to note that many nonprofit organisations, including those focusing on AI safety, will offer salaries and benefits that compete with those at for-profit firms.

          If you work at or found a startup, your earnings will be highly variable. However, the expected value of your earnings — especially as a cofounder — could be extremely high. For this reason, if you’re a particularly good fit, founding a tech startup and donating your earnings could be hugely impactful, as you could earn and donate extraordinary amounts.

          What does a software engineering career involve?

          Ultimately, the best ways to have an impact with software engineering are probably things like working at an AI lab or a particularly effective nonprofit.

          To get there, there are two broad paths that you could follow to build software engineering skills (and, given the high salaries in software engineering, you can earn to give along the way):

          1. Working for a large, stable company (e.g. Microsoft, Google, Amazon)
          2. Working for a small, fast-growing startup

          In general, you will gain broadly transferable skills through either of these options. To gain experience as quickly and effectively as possible, look for roles that offer good management and mentorship opportunities. You should also make sure you gain a really deep understanding of the basics of software development.

          Working at a top-tier tech company also holds comparable prestige to working in finance or consulting, and gives you the opportunity to make connections with wealthy and influential people, many of whom are impact-minded and interested in doing good.

          You’ll need different skills, and work at different jobs, depending on whether you want to be a front-end, back-end (including machine learning), or full-stack developer.

          Working for a large software company

          The best way to develop software skills is to practise writing code and building software through years of experience. Direct one-on-one mentorship is extremely valuable when developing skills, and this is often provided through software engineering jobs at large tech companies.

          Top firms (e.g. Microsoft, Google, Amazon) are particularly good at providing training to develop particular skill sets, such as management and information security. After talking with people who have experience in training at both tech giants and elsewhere, we think that this internal training is likely the best way to develop knowledge in software engineering (other than on-the-job practice), and will be better than training provided outside of these big tech companies.

          However, it’s important to ensure that your role provides you with a variety of experiences: five years of software development experience is not the same as having the same year of experience five times over.

          For example, it can be harder to gain full-stack or transferable front-end development experience at a large company. Many large mature products have a large front-end team making many small tweaks and analysing their performance in experiments. This provides good training in experiment design and analysis, but often isn’t very transferable to the sorts of front-end work you’d do at smaller companies or nonprofits, where you’ll often be working in a much smaller team with a focus on developing the experience as a whole rather than running experiments on small changes.

          It generally takes around two years for new starters at big tech companies to have the experience they need to independently work on software, and another two years to reach a position where they are able to give advice and support to others in the company and manage projects.

          Key career stages at large tech companies

          First you’ll need some basic experience. You can get this from a relevant degree, from working at a smaller, less prestigious company, or from a bootcamp (see how to enter below for more).

          New graduates, and other people with a couple of years of relevant experience, will start out as junior engineers. As a junior engineer, you’d complete small, clearly specified tasks and gain a preliminary understanding of the software development lifecycle. You’ll generally be given lots of guidance and support from more experienced engineers. You usually stay in this role for around three years, gradually expanding your scope. In the US, you’d receive entry-level compensation of $100,000 to $200,000 (as of early 2022).

          Once you’ve successfully demonstrated that you can work on projects without needing much support, you’ll be given more responsibility. For a couple of years, you’ll work on more complex projects (often in one or two languages in which you’ve specialised), and with less support from others.

          After five to eight years2, you’ll generally progress to a senior engineer position. As a senior engineer, you write complex applications and have a deep understanding of the entire software lifecycle. You may lead small teams or projects, and you’ll be expected to provide mentorship and guidance to junior engineers. You can stay in this role for much of your career, though it becomes harder to compete with younger talent as you get older. Compensation in 2022 at this level is around $300,000 to $400,000 in the US.

          At this point you may have the skills to leave and become a technical founder or CTO of a startup. This is a highly variable option (since most startups fail), but it could be one of the highest expected value ways to earn to give, given the chance of wild success.

          Progressing past senior engineer, you’re typically responsible for defining your job as well as doing it. You may go into management positions, or could become a staff engineer. Staff engineers, while still building software, also set technical direction, provide mentorship, bring an engineering perspective to organisational decisions, and do exploratory work. At this level, at top firms in the US, you can earn upwards of $500,000 and sometimes more than $1,000,000 a year.

          Software engineering is unusual in that you can have a senior position without having to do management, and many see this as a unique benefit of the career. (To learn more about post-senior roles, we recommend The Staff Engineer’s Path by Tanya Reilly and the StaffEng website.)

          Working for a startup as a software engineer

          Working for a startup can give you a much broader range of experience, including problem-solving, project management, and other ‘soft’ skills — because unlike in large companies, there is no one else at the organisation to do these things for you. You can gain a strong understanding of the entire development process as well as general software engineering principles.

          Startups often have a culture that encourages creative thinking and resourcefulness. This can be particularly good experience for working in small software-focused nonprofits later in your career.

          However, the experience of working in small organisations varies wildly. You’ll be less likely to have many very senior experienced engineers around to give you the feedback you need to improve. At very small startups, the technical cofounder may be the only experienced engineer, and they are unlikely to provide the level of mentorship provided at big tech companies (in part because there’s so much else they will need to be doing). That said, we’ve spoken to some people who have had great mentorship at small startups.

          You also gain responsibility much faster at a fast-growing startup, as there is a desperate need for employees to take on new projects and gain the skills required. This can make startups a very fertile learning ground, if you can teach yourself what you need to know.

          Pay at startups is very variable, as you will likely be paid (in large part) in equity, and so your earnings will be heavily tied to the success of the organisation. However, the expected value of your earnings may be comparable to, and in some cases higher than, earnings at large companies.

          Many startups exit by selling to large tech companies. If this happens, you may end up working for a large company anyway.

          Take a look at our list of places to find startup roles.

          Moving to a direct impact software engineering role

          Working in AI safety

          If you are looking to work in an engineering role in an AI safety or other research organisation, you will probably want to focus on back-end software development (although there are also front-end roles, particularly those focusing on gathering data from humans on which models can be trained and tested). There are recurring opportunities for software engineers with a range of technical skills (to see examples, take a look at our job board).

          If you have the opportunity to choose areas in which you could gain expertise, the experienced engineers we spoke to suggested focusing on:

          • Distributed systems
          • Numerical systems
          • Security

          In general, it helps to have expertise in any specific, hard-to-find skill sets.

          This work uses a range of programming languages, including Python, Rust, C++ and JavaScript. Functional languages such as Haskell are also common.

          We’ve previously written about how to move into a machine learning career for AI safety. We now think it is easier than we previously thought to move into an AI-safety-related software engineering role without explicit machine learning experience.

          The Effective Altruism Long-Term Future Fund and the Survival and Flourishing Fund may provide funding for promising individuals to learn skills relevant to helping future generations, including new technologies such as machine learning. If you already have software engineering experience, but would benefit from explicit machine learning or AI safety experience, this could be a good option for you.

          If you think you could, with a few weeks’ work, write a new feature or fix a bug in a major machine learning library, then you could probably apply directly for engineering roles at top AI safety labs (such as Redwood Research, DeepMind, OpenAI, and Anthropic), without needing to spend more time building experience in software engineering. These top labs offer pay that is comparable to pay at large tech firms. (Read more about whether you should take a job at a top AI lab.)

          If you are considering joining an AI safety lab in the near future, our team may be able to help.

          Working on reducing global catastrophic biological risks

          Reducing global catastrophic biological risks — for example, research into screening for novel pathogens to prevent future pandemics — is likely to be one of the most important ways to help solve the world’s most pressing problems.

          Through organisations like Telis Bioscience and SecureDNA (and other projects that might be founded in the future), there are significant opportunities for software engineers to contribute to reducing these risks.

          Anyone with a good understanding of how to build software can be useful in these small organisations, even if they don’t have much experience. However, if you want to work in this space, you’ll need to be comfortable getting your hands dirty and doing whatever needs to be done, even when the work isn’t the most intellectually challenging. For this reason, it could be particularly useful to have experience working in a software-based startup.

          Much of the work in biosecurity is related to handling and processing large amounts of data, so knowledge of how to work with distributed systems is in demand. Expertise in adjacent fields such as data science could also be helpful.

          There is also a big focus on security, particularly at organisations like SecureDNA.

          Most code in biosecurity is written in Python.

          If you’re interested in working on biosecurity and pandemic preparedness as a software engineer, you can find open positions on our job board.

          Other important direct work

          Nonprofit organisations and altruistic-minded startups often have very few team members. And no matter what an organisation does, they almost always have some need for engineers (for example, 80,000 Hours is not a software organisation, but we employ two developers). So if you find an organisation you think is doing something really useful, working as a software engineer for them might be an excellent way to support that work.

          Engineering for a small organisation likely means doing work across the development process, since there are few other engineers.

          Often these organisations are focused on front-end development, with jobs ranging from application development and web development to data science and project management roles. There are often also opportunities for full-stack developers with a broad range of experience.

          Founding an organisation yourself is more challenging, but can be even more impactful. And if you’ve worked in a small organisation or a startup before, you might have the broad skills and entrepreneurialism that’s required to succeed. See our profile on founding new high-impact projects for more.

          Reasons not to go into software engineering

          We think that most people with good general intelligence will be able to do well at software engineering. And because it’s very easy to test out (see the section on how to predict your fit in advance), you’ll be able to tell early on whether you’re likely to be a good fit.

          However, there are lots of other paths that seem like particularly promising ways to help solve the world’s most pressing problems, and it’s worth looking into them. If you find programming difficult, or unenjoyable, your personal fit for other career paths may be higher. And even if you enjoy it and you’re good at it, we think that will be true for lots of people, so that’s not a good reason to think you won’t be even better at something else!

          As a result, it’s important to test your fit for a variety of options. Try taking a look at our other career reviews to find out more.

          How much do software engineers earn?

          It’s difficult to make claims about software engineer earnings in general.

          For a start, almost all of the official (especially government) data on this is on salaries rather than total compensation. By the time you’re a senior engineer, less than half of what you earn will be from your salary — the rest will be from bonuses, stock, and other benefits.

          Most government data also reports median salaries, but as we saw when looking at progression in big tech firms, very senior software engineers can earn seven-figure compensation. So we should expect the distribution of total compensation to be positively skewed, or possibly even bimodal.

          As a result, you should think of the figures below as representing salaries for early- to mid-career software developers.

          Even given all these caveats, the figures we present here are instructive for understanding the relative salary levels (e.g. between locations), even if the absolute values given aren’t perfect.

          More data is available at Levels.fyi, which collects data from people self-reporting their total compensation, and also has data on the distribution of what people earn, rather than just averages.

          Software engineering salaries in the US

          Here are the median US salaries for software developers, from the US Bureau of Labor Statistics:

          Median US salaries for software engineers in 2020 (excluding bonuses)3

          Occupation | Mean | Median
          Computer programmers | $95,640 | $89,190
          Software developers and software quality assurance analysts and testers | $114,270 | $110,140
          Web developers and digital interface designers | $85,490 | $77,200

          Here are the median salaries at different levels of progression, both in the US as a whole and in Mountain View and Palo Alto (i.e. Silicon Valley).4 In general, salaries rise quite rapidly in the early stages of the career, but then level off and grow by only a few percent per year after around a decade. However, this is probably offset by increases in other forms of compensation.

          Median US salaries for software engineers in 2020 at different levels of progression

          Stage | Usual experience required | US (median salary + bonus) | Mountain View and Palo Alto, CA (median salary + bonus)
          Software engineer I (entry level) | 0-2 years | $75,000 | $94,000
          Software engineer II | 2-4 years | $95,000 | $120,000
          Software engineer III | 4-6 years | $120,000 | $150,000
          Software engineer IV | 6-8 years | $147,000 | $185,000
          Software engineer V | 8-10 years | $168,000 | $211,000
          Software engineering manager | 10+ years | $155,000 | $195,000
          Software engineer director | 10+ years | $226,000 | $284,000
          Software engineer director | 15+ years | $303,000 | $380,000

          For figures on total compensation, especially at top companies, we can again look at Levels.fyi. These figures are far higher. Entry-level compensation is around $150,000, rising to $300,000 to $400,000 for senior engineers, and above $500,000 for late-career engineers. The top compensation levels reported are over $1,000,000.

          Salaries also vary by location within the US; they are generally significantly higher in California (although web developers are best paid in Seattle).

          Mean salary by US region in 20205

          Occupation | National | Top-paying state | Top-paying metro area
          Computer programmers | $95,640 | $107,300 (CA) | $125,420 (San Francisco)
          Software developers and software quality assurance analysts and testers | $114,270 | $137,620 (CA) | $157,480 (Silicon Valley)
          Web developers and digital interface designers | $85,490 | $94,960 (WA) | $138,070 (Seattle)

          These data are supported by Levels.fyi data on various locations in the US (e.g. Atlanta, New York City, Seattle, and the Bay Area).

          Notably, the differences between locations are much larger at the 90th percentile than at the median.

          Compensation by US region in 20206

          Location | Median | 90th percentile
          Atlanta | $131,000 | $216,000
          New York City | $182,000 | $365,000
          Seattle | $218,000 | $430,000
          San Francisco Bay Area | $222,000 | $426,000

          It’s worth noting, however, that the cost of living in Silicon Valley is higher than in other parts of the US (Silicon Valley’s cost of living is 1.5 times the US national average7), reducing disposable income. (In general, data on average cost of living is particularly representative of the costs you’d expect to pay if you have a family or want to own a house.)
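As a rough illustration of how higher living costs eat into higher pay, here's a tiny back-of-the-envelope sketch. The tax rates and living costs below are hypothetical placeholders (only the salary figures loosely echo the tables above), so substitute your own numbers rather than reading anything into the output.

```python
# Back-of-the-envelope comparison of disposable income (all numbers hypothetical).
def disposable_income(total_comp, effective_tax_rate, annual_living_costs):
    """Very rough: take-home pay minus living costs."""
    return total_comp * (1 - effective_tax_rate) - annual_living_costs

# Hypothetical figures only -- substitute your own offer, tax rate, and costs.
bay_area = disposable_income(220_000, 0.35, 60_000)   # higher pay, higher costs
atlanta = disposable_income(130_000, 0.30, 35_000)    # lower pay, lower costs
print(f"Bay Area: ${bay_area:,.0f}  Atlanta: ${atlanta:,.0f}")
```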

          If you want to estimate your own disposable income given different scenarios, you can try these tools:

          Software engineering pay in other countries

          Software engineers are paid significantly less outside the US. The UK Office for National Statistics found that the mean salary for “programmers and software development professionals” in 2020 was £46,000 (US$59,000 in 2020).8 Even when looking at full compensation, we see similar trends across the world.

          Software engineer compensation outside the US6

          Country | Median | 90th percentile
          Australia | A$166,000 (US$123,000) | A$270,000 (US$200,000)
          Canada | C$143,000 (US$115,000) | C$270,000 (US$218,000)
          Germany | €86,000 (US$98,000) | €145,000 (US$165,000)
          India | ₹3,123,000 (US$42,000) | ₹7,435,000 (US$100,000)
          Ireland | €101,000 (US$115,000) | €188,000 (US$214,000)
          Israel | ₪533,000 (US$165,000) | ₪866,000 (US$268,000)
          Netherlands | €108,000 (US$123,000) | €174,000 (US$198,000)
          Russia | ₽2,991,000 (US$42,000) | ₽6,410,000 (US$90,000)
          Singapore | S$143,000 (US$106,000) | S$263,000 (US$195,000)
          Switzerland | CHF 177,000 (US$190,000) | CHF 355,000 (US$382,000)
          Taiwan | NT$1,819,000 (US$65,000) | NT$3,387,000 (US$121,000)
          United Kingdom | £90,000 (US$123,000) | £166,000 (US$228,000)

          The only countries with earnings as high as the US are Israel and Switzerland, and no countries have earnings as high as Seattle or the San Francisco Bay Area. The cost of living in major cities in Israel and Switzerland is around 20% higher than in Silicon Valley.9

          Compensation across the world is often higher if you work from a major city.

          Software engineer compensation in major cities outside the US6

          City | Median | 90th percentile
          Bangalore, India | ₹3,569,000 (US$48,000) | ₹7,583,000 (US$102,000)
          Dublin, Ireland | €106,000 (US$120,000) | €189,000 (US$215,000)
          London, UK | £95,000 (US$130,000) | £170,000 (US$233,000)
          Toronto, Canada | C$149,000 (US$120,000) | C$273,000 (US$220,000)
          Vancouver, Canada | C$156,000 (US$126,000) | C$306,000 (US$247,000)

          It can be difficult to get a visa to work in the US. For example, US immigration law caps the number of new H-1B visas (one of the most common types for software engineers) at 65,000 a year. Also, because of the cost of flying candidates out for interviews, there is often a higher bar for international applicants to pass phone interviews.

          There are some things that can make it easier to get a visa:

          • Having a degree in computer science or other field related to your job
          • Applying to companies with enough capital and flexibility to bear the time and financial costs of the visa process
          • Having a specific unusual skill set that may be hard to find in the US

          Take a look at this blog to find out more.

          Despite all of this, remote work in software development is becoming far more common. A small but growing number of companies hire globally for remote roles and pay US-market compensation. If you land one of those roles, you can earn a lot from anywhere.

          Software engineering job outlook

          The future demand for software engineers is promising. The US Bureau of Labor Statistics projects 22% growth in US employment of software engineers from 2020–30, which is much higher than the growth rate for all occupations (8%). The main reason given for this growth is a large projected increase in the demand for software for mobile technology, the healthcare industry, and computer security.

          Software engineering job outlook according to the US Bureau of Labor Statistics

          The number of web development jobs is projected to grow by 13% from 2020–2030. The main reasons for this are the expected growth of e-commerce and an increase in mobile devices that access the web.

          What does this mean for future salaries? Strong growth in demand provides the potential for salary growth, but it also depends on how easily the supply of engineers can keep up with demand.

          Web development job outlook according to the US Bureau of Labor Statistics

          Software engineering job satisfaction

          The same high demand for software engineers that leads to high pay also leads to high bargaining power. As a result, job satisfaction among software engineers is high.

          Many software engineers we have spoken to say the work is engaging, often citing the puzzles and problems involved with programming, and being able to enter a state of flow (which is one of the biggest predictors of job satisfaction). On the other hand, working with large existing codebases and fixing bugs are often less pleasant. Read our five interviews with software engineers for more details.

          Work-life balance in software engineering is generally better than in jobs with higher or comparable pay. According to one survey, software engineers work 8.6 hours per day (though hours are likely to be longer in higher-paid roles and at startups).

          Tech companies tend to have progressive working practices, often including flexible hours, convenient perks, remote working, and a results-driven culture. The best companies are widely regarded as among the best places to work in the world.

          Examples of people pursuing this path

          How to predict your fit in advance

          The best way to gauge your fit is to try it out. You don’t need a computer science degree to do this. We recommend that you:

          1. Try out writing code — as a complete beginner, you can write a Python program in less than 20 minutes that reminds you to take a break every two hours (see the sketch after this list). Once you know the fundamentals, try taking an intro to computer science and programming class, or work through free resources. If you’re in college, you could try taking CS 101 (or an equivalent course outside the US).
          2. Do a project with other people — this lets you test out writing programs in a team and working with larger codebases. It’s easy to come up with programming projects to do with friends — you can see some examples here. Contributing to open-source projects in particular lets you work with very large existing codebases.
          3. Take an internship or do a coding bootcamp.
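As an example of step 1, here's roughly what that two-hour break reminder could look like: a minimal sketch using only the Python standard library, printing to the terminal rather than sending a desktop notification.

```python
# Minimal break-reminder sketch: prints a reminder every two hours.
import time
from datetime import datetime

BREAK_INTERVAL_HOURS = 2  # how often to remind yourself

def remind_forever():
    while True:
        time.sleep(BREAK_INTERVAL_HOURS * 60 * 60)  # wait two hours
        print(f"[{datetime.now():%H:%M}] Time to take a break!")

if __name__ == "__main__":
    remind_forever()
```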

          It seems likely that a small number of software engineers are significantly better than average. These very best software engineers are often people who spend huge amounts of time practising. This means that if you enjoy coding enough to want to do it both as a job and in your spare time, you are likely to be a good fit.

          How to enter this field

          While a degree in computer science or a quantitative subject is often helpful, many entry-level jobs don’t require one, meaning that software engineering is open to people with backgrounds in humanities and social sciences.

          To enter, you need some basic programming skills and to be able to demonstrate a strong interest in software engineering. We’ve seen many people with humanities and social science degrees get junior software engineer jobs with high salaries, just through learning on their own or through coding bootcamps.

          Learning to program

          Basic computer programming skills can be extremely useful whatever you end up doing. You’ll find ways to automate tasks or analyse data throughout your career. This means that spending a little time learning to code is a very robustly useful option.

          • Learning on your own. There are many great introductory computer science and programming courses online, including: Udacity’s Intro to Computer Science, MIT’s Introduction to Computer Science and Programming, and Stanford’s Programming Methodology. Don’t be discouraged if your code doesn’t work the first time — that’s what normally happens when people code!
          • Attending a coding bootcamp. We’ve advised many people who managed to get junior software engineer jobs in less than a year through going to a bootcamp. Coding bootcamps are focused on taking people with little knowledge of programming to as highly paid a job as possible within a couple of months. This is a great entry route if you don’t already have much background, though some claim the long-term prospects are not as good because you lack a deep understanding of computer science. Course Report is a great guide to choosing a bootcamp. Be careful to avoid low-quality bootcamps. To find out more, read our interview with an App Academy instructor.
          • Studying computer science at university (or another subject involving lots of programming). If you’re in university, this is a great option because it allows you to learn programming while the opportunity cost of your time is lower. It will also give you a better theoretical understanding of computing than a bootcamp will (which can be useful for getting the most highly paid and intellectually interesting jobs), a good network, some prestige, and a better understanding of lower-level languages like C. Having a CS degree also makes it easier to get a US work visa if you’re not from the US.
          • Doing internships. If you can find internships, ideally at your target employers (whether big tech companies or nonprofits), you’ll gain practical experience and the key skills you otherwise wouldn’t pick up from academic degrees (e.g. using version control systems and powerful text editors). Take a look at our list of software engineering (and machine learning) internships at top companies.

          Getting your first job in software engineering

          Larger companies will broadly advertise entry-level roles. For smaller companies, you may have to reach out directly and through your network. You can find startup positions on job boards such as AngelList, and many top venture capital firms have job boards for their portfolio companies.

          Large software firms can have long and in-depth interview processes. You’ll be asked about general software knowledge, and later rounds are likely to involve coding and algorithm problems, which you’ll be expected to work through collaboratively with the interviewer.

          It’s worth practising software engineering interview questions in advance; often this means applying first to companies you’re less keen to work at, and using those applications to get used to the process. This can be stressful (partly because you might face some early rejections, and partly because it’s tricky to navigate applying for jobs you don’t really want), so it’s important to take care of your mental health throughout.

          It will also probably help to study the most popular interview guide, Cracking the Coding Interview. You can also practise by doing TopCoder problems.

          We think that this guide to getting a software engineering job is particularly helpful. There are six rough steps:

          1. Send a company your resume. Make it as specific as possible to the job you’re applying for, and proofread it carefully. If you can get a referral from a friend, that will significantly increase your chances of success.
          2. Speak to a recruiter. Read up about the company in advance, and make sure you have questions. Be nice — it’s going to help if the recruiter is on your side.
          3. Have a technical phone interview. You’ll solve some problems together. Make sure you ask questions to clarify the problem, and strategise about the best possible approach before you start writing code. Finish by checking for bugs and make sure you’re handling errors correctly. When you’re done, ask the interviewer some questions!
          4. Have a three- to six-hour on-site interview. It’s key to talk out loud as you work through a problem. And again, ask your interviewer some questions about them and the company.
          5. Get an offer from the recruiter. You should make sure they think you are seriously considering the company or you may not get an offer. If you don’t get an offer, ask for feedback (though it’s not always possible for companies to give detailed feedback). If you need more time to think (or to apply elsewhere), tell them in advance, and they may choose to wait to give you details when you’re more ready to go through with an offer.
          6. Accept the offer!

          Want one-on-one advice on pursuing this path?

          If you think software engineering might be a great option for you, but you need help deciding or thinking about what to do next, our team might be able to help.

          We can help you compare options, make connections, and possibly even help you find jobs or funding opportunities.

          APPLY TO SPEAK WITH OUR TEAM

          Learn more

          Top recommendations

          Further recommendations

          Find a job in this path

          If you think you might be a good fit for this path and you’re ready to start looking for jobs, see our curated list of opportunities:

            View all opportunities

            Audrey Tang on what we can learn from Taiwan’s experiments with how to do democracy https://80000hours.org/podcast/episodes/audrey-tang-what-we-can-learn-from-taiwan/ Wed, 02 Feb 2022 22:43:27 +0000 https://80000hours.org/?post_type=podcast&p=75963 The post Audrey Tang on what we can learn from Taiwan’s experiments with how to do democracy appeared first on 80,000 Hours.

            Brian Christian on the alignment problem https://80000hours.org/podcast/episodes/brian-christian-the-alignment-problem/ Fri, 05 Mar 2021 20:55:49 +0000 https://80000hours.org/?post_type=podcast&p=71879 The post Brian Christian on the alignment problem appeared first on 80,000 Hours.

            ML engineering for AI safety & robustness: a Google Brain engineer’s guide to entering the field https://80000hours.org/articles/ml-engineering-career-transition-guide/ Fri, 02 Nov 2018 12:41:46 +0000 https://80000hours.org/?post_type=article&p=43501 The post ML engineering for AI safety & robustness: a Google Brain engineer’s guide to entering the field appeared first on 80,000 Hours.

            Technical AI safety is a multifaceted area of research, with many sub-questions in areas such as reward learning, robustness, and interpretability. These will all need to be answered in order to make sure AI development will go well for humanity as systems become more and more powerful.

            Not all of these questions are best tackled with abstract mathematics research; some can be approached with concrete coding experiments and machine learning (ML) prototypes. As a result, some AI safety research teams are looking to hire a growing number of Software Engineers and ML Research Engineers.

            Additionally, some research teams that may not think of themselves as focussed on ‘AI Safety’ per se nonetheless work on related problems, like verification of neural nets or learning from human feedback, and are often hiring engineers.

            Note that this guide was written in November 2018 to complement an in-depth conversation on the 80,000 Hours Podcast with Catherine Olsson and Daniel Ziegler on how to transition from computer science and software engineering in general into ML engineering, with a focus on alignment and safety. If you like this guide, we’d strongly encourage you to check out the podcast episode where we discuss some of the instructions here, and other relevant advice.

            Update Feb 2022: The need for software engineers in AI safety seems even greater today than when this post was written (e.g. see this post by Andy Jones). You also don’t need as much knowledge of AI safety to enter the field as this guide implies.

            What are the necessary qualifications for these positions?

            Software Engineering: Some engineering roles on AI safety teams do not require ML experience. You might already be prepared to apply to these positions if you have the following qualifications:

            • BSc/BEng degree in computer science or another technical field (or comparable experience)
            • Strong knowledge of software engineering (as a benchmark: could pass a Google software engineering interview)
            • Interest in working on AI safety
            • (usually) Willingness to move to London or the San Francisco Bay Area

            If you’re a software engineer with an interest in these roles, you may not need any additional preparation, and may be ready to apply right away.

            ML Engineering and/or Research Engineering: Some roles require experience implementing and debugging machine learning algorithms. If you don’t yet have ML implementation experience, you may be able to learn the necessary skills quickly, so long as you’re willing to spend a few months studying. Before deciding to do this, you should check that you meet all the following criteria:

            • BSc/BEng degree in computer science or another technical field (or comparable experience)
            • Strong knowledge of software engineering (as a benchmark: could pass a Google software engineering interview)
            • Interest in working on AI safety
            • (usually) Willingness to move to London or the San Francisco Bay Area

            How can I best learn Machine Learning engineering skills if I don’t yet have the necessary experience?

            Initial investigation

            Implementing and debugging ML algorithms is different from traditional software engineering. The following can help you determine whether you’ll like the day-to-day work:

            ML basics

            If you don’t have any experience in machine learning, start by familiarizing yourself with the basics. If you have some experience, but haven’t done a hands-on machine learning project recently, it’s also probably a good idea to brush up on the latest tools (writing TensorFlow, starting a virtual machine with a GPU, etc).
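To give a sense of what this 101-level, hands-on work looks like, here's a minimal supervised-learning sketch using TensorFlow's Keras API on the built-in MNIST digits dataset. It's a standard beginner exercise; the layer sizes and number of epochs are arbitrary choices, not recommendations.

```python
import tensorflow as tf

# Load and normalise the MNIST handwritten-digit images.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),  # logits for the 10 digit classes
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```

If this kind of loop (load data, define a model, fit, evaluate) feels comfortable, you're probably ready to move on to implementing algorithms from papers.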

            Although it can be difficult to find time for self-study if you’re already employed full-time or have other responsibilities, it’s far from impossible. Here are some ideas of how you might get started:

            • Consider spending a few hours a week on an online course. We recommend either of these two:
            • If you’re employed full-time in a software engineering role, you might be able to learn ML basics without leaving your current job:
              • If you’re at a large tech company, take advantage of internal trainings, including full-time ML rotation programs.
              • Ask your manager if you can incorporate machine learning into your current role: for example, to spend 20% of your time learning ML, to see if it could improve one of the projects you work on.

            For simple ML problems, you can get pretty far just on CPU on your laptop, but for larger problems it’s useful to buy a GPU and/or rent some cloud GPUs. You can often get some cloud computing credits through a free trial, educational credits for students, or asking a friend with a startup.
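If you do rent cloud GPUs, it's worth confirming your framework can actually see the GPU before launching a long run. Assuming TensorFlow 2.x, a quick check looks something like this:

```python
import tensorflow as tf

# Lists the GPUs TensorFlow can see; an empty list means you're running on CPU only.
gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs visible to TensorFlow: {gpus}")

# Optional: confirm a simple op actually executes on the GPU (if one is available).
if gpus:
    with tf.device("/GPU:0"):
        x = tf.random.normal((1000, 1000))
        print((x @ tf.transpose(x)).device)
```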

            Learn ML implementation and debugging, and speak with the team you want to join

            Once you know the 101-level basics of ML, the next thing to learn is how to implement and debug ML algorithms. (Based on the experiences of others in the community who have taken this path, we expect this to take at minimum 200 hours of focused work, and likely more if you are starting out with less experience).

            Breadth of experience is not important here: you don’t need to read all the latest papers, or master an extensive reading list. You also don’t need to do novel research or come up with new algorithms. Nor do you need to focus on safety at this stage; in fact, focusing on well-known and established ML algorithms is probably better for your learning.

            What you do need is to get your hands dirty implementing and debugging ML algorithms, and to build evidence for job interviews that you have some experience doing this.

            You should strongly consider contacting the teams you’re interested in at this stage. Send them an email with the specifics of what you’re planning on spending your time on to get feedback on it. The manager of the team may suggest specific resources to use, and can help you avoid wasting time on extraneous skills you don’t need for the role.

            The most straightforward way to gain this experience is to choose a subfield of ML relevant to a lab you’re interested in. Then read a few dozen of the subfield’s key papers, and reimplement a few of the foundational algorithms that the papers are based on or reference most frequently. Potential sub-fields include the following:

            • Deep reinforcement learning
            • Defenses against adversarial examples
            • Verification and robustness proofs for neural nets
            • Interpretability & visualization

            If it isn’t clear how to get started – for example, if you don’t have access to a GPU, or don’t know how to write TensorFlow – many of the resources in the “basics” section above have useful tips.

            If you need to quit your job to make time for learning in this phase, but don’t have enough runway to self-fund your studies, consider applying for an EA grant when it next opens – they are open to funding career transitions such as this one.

            Case study: Daniel Ziegler’s ML self-study experience

            In January 2018, Daniel had strong software engineering skills but only basic ML knowledge. He decided that he wanted to work on an AI safety team as a research engineer, so he talked to Dario Amodei (the OpenAI Safety team lead). Based on Dario’s advice, Daniel spent around six full-time weeks diving into deep reinforcement learning together with a housemate. He also spent a little time reviewing basic ML and doing supervised learning on images and text. Daniel then interviewed and became an ML engineer on the safety team.

            Daniel and his housemate used Josh Achiam’s Key Papers in Deep RL list to guide their efforts. They got through about 20-30 of those papers, spending maybe 1.5 hours independently reading and half an hour discussing each paper.

            More importantly, they implemented a handful of the key algorithms in TensorFlow:

            • Q-learning: DQN and some of its extensions, including prioritized replay and double DQN
            • Policy gradients: A2C, PPO, DDPG

            They applied these algorithms to try to solve various OpenAI Gym environments, from the simple ‘Cartpole-v0’ to Atari games like ‘Breakout-v4’.

            They spent 2-10 days on each algorithm (in parallel as experiments ran), depending on how in-depth they wanted to go. For some, they only got far enough to have a more-or-less-working implementation. For one (PPO), they tried to fix bugs and tune things for long enough to come close to the performance of the OpenAI Baselines implementation.

            For each algorithm, they would first test on very easy environments, and then move to more difficult environments. Note that an easy environment for one algorithm may not be easy for another: for example, despite its simplicity, the Cartpole environment has a long time horizon, which can be challenging for some algorithms.

            Once the algorithm was partially working, they improved performance by looking for remaining bugs, both by reviewing the code carefully and by collecting metrics such as average policy entropy as sanity checks, rather than just tuning hyperparameters. Finally, when they wanted to match the performance of Baselines, they scrutinized the Baselines implementations for small but important details, such as exactly how to preprocess and normalize observations.

            By the end of six weeks, Daniel was able to talk fluently about the key ideas in RL and the tradeoffs between different algorithms. Most importantly, he was able to implement and debug ML algorithms, going from math in a paper to running code. In retrospect, Daniel reports wishing he had spent a little more time on ML conceptual & mathematical fundamentals, but that overall this process prepared Daniel well for the interview and the role, and was particularly well-suited for OpenAI’s focus on reinforcement learning.
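The algorithms Daniel implemented are deep-learning-scale, but the same implement-run-debug loop can be practised at toy scale first. Below is a self-contained, hypothetical sketch of tabular Q-learning on a five-state corridor (numpy only, no Gym and no neural networks). It's a much simpler relative of DQN rather than anything Daniel actually wrote, but going from the update rule on paper to working code is the same kind of exercise.

```python
# Toy sketch of tabular Q-learning on a five-state corridor. State 4 is the
# goal and gives reward 1; every other step gives reward 0.
import numpy as np

N_STATES = 5                  # states 0..4; an episode ends at state 4
ACTIONS = [-1, +1]            # move left or move right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

rng = np.random.default_rng(0)
q = np.zeros((N_STATES, len(ACTIONS)))

def pick_action(state):
    # Epsilon-greedy with random tie-breaking.
    if rng.random() < EPS:
        return int(rng.integers(len(ACTIONS)))
    best = np.flatnonzero(q[state] == q[state].max())
    return int(rng.choice(best))

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        a = pick_action(s)
        s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # The Q-learning update: move q[s, a] towards r + gamma * max_a' q[s', a'].
        q[s, a] += ALPHA * (r + GAMMA * q[s_next].max() - q[s, a])
        s = s_next

print(np.argmax(q, axis=1))   # greedy action per state; should be mostly 1 (move right)
```

If the printed policy isn't mostly "move right", that's the cue to start the same kind of bug hunt described above: check the update rule, the reward, and the exploration logic before touching hyperparameters.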

            Now apply for jobs

            These positions will eventually be filled, but you can find a constantly updated list of some of the most promising positions on the 80,000 Hours job board.

            The following example job postings for software engineers on AI safety research teams specify that machine learning experience is not required:

            • OpenAI’s safety team is currently hiring a software engineer for a range of projects, including interfaces for human-in-the-loop AI training and collecting data for larger language models. (Update: this job posting is now closed.)
            • MIRI is hiring software engineers.
            • Ought is hiring research engineers with a focus on candidates who are excited by functional programming, compilers, program analysis, and related topics.

            The following example job postings do expect experience with machine learning implementation:

            • DeepMind is hiring research engineers for their Technical AGI Safety team, Safe and Robust AI team – which works on neural net verification and robustness – and potentially others as well.
            • Google AI is hiring research software engineers in locations worldwide. Although Google AI does not have an “AI Safety” team, there are research efforts focused on robustness, security, interpretability, and learning from human feedback.
            • OpenAI’s safety team is hiring machine learning engineers to work on alignment and interpretability.
            • The Center for Human Compatible AI at Berkeley is hiring machine learning research engineers for 1-2 year visiting scholar positions to test alignment ideas for deep reinforcement learning systems.

            When you apply to a larger organization that has multiple areas of research, specify in your application which of them you are most interested in working on. Investigate the company’s research areas in advance, in order to make sure that the areas you list are in fact ones that the company works on. For example, don’t specify “value alignment” on an application to a company that does not have any researchers working on value alignment.

            If you find that you cannot get a role contributing to safety research right now, you might look for a role in which you can gain relevant experience, and transition to a safety position later.

            Non-safety-related research engineering positions are also available at other industry AI labs, though these are likely to be more competitive than roles on AGI safety teams.

            Finally, you could consider applying to a 1-year fellowship/residency program at Google, OpenAI, Facebook, Uber, or Microsoft.

            Learn more
