Research skills
https://80000hours.org/skills/research/

Norman Borlaug was an agricultural scientist. Through years of research, he developed new, high-yielding, disease-resistant varieties of wheat.

It might not sound like much, but as a result of Borlaug’s research, wheat production in India and Pakistan almost doubled between 1965 and 1970, and formerly famine-stricken countries across the world were suddenly able to produce enough food for their entire populations. These developments have been credited with saving up to a billion people from famine,1 and in 1970, Borlaug was awarded the Nobel Peace Prize.

Many of the highest-impact people in history, whether well-known or completely obscure, have been researchers.

In a nutshell: Talented researchers are a key bottleneck facing many of the world’s most pressing problems. That doesn’t mean you need to become an academic. While that’s one option (and academia is often a good place to start), lots of the most valuable research happens elsewhere. It’s often cheap to try out developing research skills while at university, and if it’s a good fit for you, research could be your highest impact option.

Key facts on fit

You might be a great fit if you have the potential to become obsessed with high-impact questions, have high levels of grit and self-motivation, are open to new ideas, are intelligent, and have a high degree of intellectual curiosity. You’ll also need to be a good fit for the particular area you’re researching (e.g. you might need quantitative ability).

Why are research skills valuable?

Not everyone can be a Norman Borlaug, and not every discovery gets adopted. Nevertheless, we think research can often be one of the most valuable skill sets to build — if you’re a good fit.

We’ll argue that:

  • Research seems to have been extremely high-impact historically
  • There are good theoretical reasons to think that research will be high-impact
  • Research skills seem extremely useful to the problems we think are most pressing
  • If you’re a good fit, you can have much more impact than the average researcher
  • Depending on which subject you focus on, you may have good backup options

Together, this suggests that research skills could be particularly useful for having an impact.

Later, we’ll look at:

  • What building research skills typically involves
  • How to evaluate your fit
  • How to get started building research skills
  • Choosing a research field
  • How to apply these skills to have an impact

Research seems to have been extremely high-impact historically

If we think about what has most improved the modern world, much can be traced back to research: advances in medicine such as the development of vaccines against infectious diseases, developments in physics and chemistry that led to steam power and the industrial revolution, and the invention of the modern computer, an idea which was first proposed by Alan Turing in his seminal 1936 paper On Computable Numbers.2

Many of these ideas were discovered by a relatively small number of researchers — but they changed all of society. This suggests that these researchers may have had particularly large individual impacts.

Dr Nalin helped to invent oral rehydration therapy
Dr. Nalin helped to save millions of lives with a simple innovation: giving patients with diarrhoea water mixed with salt and sugar.

That said, research today is probably lower-impact than in the past. Research is much less neglected than it used to be: there are nearly 25 times as many researchers today as there were in 1930.3 It also turns out that more and more effort is required to discover new ideas, so each additional researcher probably has less impact than those that came before.4

However, even today, a relatively small fraction of people are engaged in research. As an approximation, only 0.1% of the population are academics,5 and only about 2.5% of GDP is spent on research and development. If a small number of people account for a large fraction of progress, then on average each person’s efforts are significant.

Moreover, we still think there’s a good case to be made for research being impactful on average today, which we cover in the next two sections.

There are good theoretical reasons to think that research will be high-impact

There’s little commercial incentive to focus on the most socially valuable research, and most researchers don’t get rich, even if their discoveries are extremely valuable. Alan Turing made no money from his invention of the computer, even though computing is now a multibillion-dollar industry. This is because the benefits of research often arrive a long time in the future and usually can’t be protected by patents. So if you care more about social impact than profit, you have an edge.

Research is also a route to leverage. When new ideas are discovered, they can be spread incredibly cheaply, so it’s a way that a single person can change a field. And innovations are cumulative — once an idea has been discovered, it’s added to our stock of knowledge and, in the ideal case, becomes available to everyone. Even ideas that become outdated often speed up the important future discoveries that supersede them.

Research skills seem extremely useful to the problems we think are most pressing

When you look at our list of the world’s most pressing problems — like preventing future pandemics or reducing risks from AI systems — expert researchers seem like a key bottleneck.

For example, to reduce the risk posed by engineered pandemics, we need people who are talented at research to identify the biggest biosecurity risks and to develop better vaccines and treatments.

To ensure that developments in AI are implemented safely and for the benefit of humanity, we need technical experts thinking hard about how to design machine learning systems safely and policy researchers to think about how governments and other institutions should respond. (See this list of relevant research questions.)

And to decide which global priorities we should spend our limited resources on, we need economists, mathematicians, and philosophers to do global priorities research. For example, see the research agenda of the Global Priorities Institute at Oxford.

We’re not sure why so many of the most promising ways to make progress on the problems we think are most pressing involve research, but it may well be due to the reasons in the section above — research offers huge opportunities for leverage, so if you take a hits-based approach to finding the best solutions to social problems, it’ll often be most attractive.

In addition, our focus on neglected problems often means we focus on smaller and less developed areas, and it’s often unclear what the best solutions are in these areas. This means that research is required to figure this out.

For more examples, and to get a sense of what you might be able to work on in different fields, see this list of potentially high-impact research questions, organised by discipline.

If you’re a good fit, you can have much more impact than the average

The sections above give reasons why research can be expected to be impactful in general. But as we’ll show below, the productivity of individual researchers probably varies a great deal (and more than in most other careers). This means that if you have reason to think your degree of fit is better than average, your expected impact could be much higher than the average.

Depending on which subject you focus on, you may have good backup options

Pursuing research helps you develop deep expertise on a topic, as well as problem-solving and writing skills. These can be useful in many other career paths. For example:

  • Many research areas can lead to opportunities in policymaking, since relevant technical expertise is valued in some of these positions. You might also have opportunities to advise policymakers and the public as an expert.
  • The expertise and credibility you can develop by focusing on research (especially in academia) can put you in a good position to switch your focus to communicating important ideas, especially those related to your speciality, either to the general public, policymakers, or your students.
  • If you specialise in an applied quantitative subject, it can open up certain high-paying jobs, such as quantitative trading or data science, which offer good opportunities for earning to give.

Some research areas will have much better backup options than others — lots of jobs value applied quantitative skills, so if your research is quantitative you may be able to transition into work in effective nonprofits or government. A history academic, by contrast, has many fewer clear backup options outside of academia.

What does building research skills typically involve?

By ‘research skills’ we broadly mean the ability to make progress solving difficult intellectual problems.

We find it especially useful to roughly divide research skills into three forms:

  • Academic research
  • Practical but big picture research
  • Applied research

Academic research

Building academic research skills is the most predefined route. The focus is on answering relatively fundamental questions which are considered valuable by a specific academic discipline. This can be impactful either through generally advancing a field of research that’s valuable to society or finding opportunities to work on socially important questions within that field.

Turing was an academic. He didn’t just invent the computer — during World War II he developed code-breaking machines that allowed the Allies to be far more effective against Nazi U-boats. Some historians estimate this enabled D-Day to happen a year earlier than it would have otherwise.6 Since World War II resulted in around 10 million deaths per year, shortening the war by a year means Turing may have saved about 10 million lives.

Alan Turing aged 16
Turing was instrumental in developing the computer. Sadly, he was prosecuted for being gay, perhaps contributing to his suicide in 1954.

We’re particularly excited about academic research in subfields of machine learning relevant to reducing risks from AI, subfields of biology relevant to preventing catastrophic pandemics, and economics — we discuss which fields you should enter below.

Academic careers are also excellent for developing credibility, leading to many of the backup options we looked at above, especially options in communicating important ideas or policymaking.

Academia is unusual in how flexibly you can use your time. This can be a big advantage — you really get time to think deeply and carefully about things — but it can also be a hindrance, depending on your work style.

See more about what academia involves in our career review on academia.

Practical but big picture research

Academia rewards a focus on questions that can be decisively answered with the methods of the field. However, the most important questions can rarely be answered rigorously — the best we can do is look at many weak forms of evidence and come to a reasonable overall judgement. This means that while some of this research happens in academia, it can be hard to do it there.

Instead, this kind of research is often done in nonprofit research institutes, e.g. the Centre for the Governance of AI or Our World in Data, or independently.

Your focus should be on answering the questions that seem most important (given your view of which global problems most matter) through whatever means are most effective.

Some examples of questions in this category that we’re especially interested in include:

  • How likely is a pandemic worse than COVID-19 in the next 10 years?
  • How difficult is the AI alignment problem going to be to solve?
  • Which global problems are most pressing?
  • Is the world getting better or worse over time?
  • What can we learn from the history of philanthropy about which forms of philanthropy might be most effective?

You can see a longer list of ideas in this article.

Someone we know who’s had a big impact with research skills is Ajeya Cotra. Ajeya initially studied electrical engineering and computer science at UC Berkeley. In 2016, she joined Open Philanthropy as a grantmaker.7 Since then she’s worked on a framework for estimating when transformative AI might be developed, how worldview diversification could be applied to allocating philanthropic budgets, and how we might accidentally teach AI models to deceive us.

Ajeya Cotra
Ajeya was moved by many of the conclusions of effective altruism, which eventually led to her researching the transformative effects of AI.

Applied research

Then there’s applied research. This is often done within companies or nonprofits, like think tanks (although again, there’s also plenty of applied research happening in academia). Here the focus is on solving a more immediate practical problem (and, if pursued by a company, one where it might be possible to profit from the solution) — and there’s lots of overlap with engineering skills. For example:

  • Developing new vaccines
  • Creating new types of solar cells or nuclear reactors
  • Developing meat substitutes

Neel was doing an undergraduate degree in maths when he decided that he wanted to work in AI safety. Our team was able to introduce Neel to researchers in the field and helped him secure internships in academic and industry research groups. Neel didn’t feel like he was a great fit for academia — he hates writing papers — so he applied to roles in commercial AI research labs. He’s now a research engineer at DeepMind. He works on mechanistic interpretability research, which he thinks could be used in the future to help identify potentially dangerous AI systems before they can cause harm.

Neel Nanda
Neel’s machine learning research is heavily mathematical — but has clear applications to reducing the risks from advanced AI.

We also see “policy research” — which aims to develop better ideas for public policy — as a form of applied research.

Stages of progression through building and using research skills

These different forms of research blur into each other, and it’s often possible to switch between them during a career. In particular, it’s common to begin in academic research and then switch to more applied research later.

However, while the skill sets share a common core, someone who can excel in academic research might not be well suited to practical big picture or applied research.

The typical stages in an academic career involve the following steps:

  1. Pick a field. This should be heavily based on personal fit (where you expect to be most successful and enjoy your work the most), though it’s also useful to think about which fields offer the best opportunities to help tackle the problems you think are most pressing, or give you expertise that’s especially useful for those problems, and to use that at least as a tie-breaker. (Read more about choosing a field.)
  2. Earn a PhD.
  3. Learn your craft and establish your career — find somewhere you can get great mentorship and publish a lot of impressive papers. This usually means doing a postdoc with a good group, followed by temporary academic positions.
  4. Secure tenure.
  5. Focus on the research you think is most socially valuable (or otherwise move your focus towards communicating ideas or policy).

Academia is usually seen as the most prestigious path…within academia. But non-academic positions can be just as impactful — and often more so since you can avoid some of the dysfunctions and distractions of academia, such as racing to get publications.

At any point after your PhD (and sometimes with only a master’s), it’s usually possible to switch to applied research in industry, policy, nonprofits, and so on, though typically you’ll still focus on getting mentorship and learning for at least a couple of years. And you may also need to take some steps to establish your career enough to turn your attention to topics that seem more impactful.

Note that from within academia, the incentives to continue with academia are strong, so people often continue longer than they should!

If you’re focused on practical big picture research, then there’s less of an established pathway, and a PhD isn’t required.

Besides academia, you could attempt to build these skills in any job that involves making difficult, messy intellectual judgement calls, such as investigative journalism, certain forms of consulting, buy-side research in finance, think tanks, or any form of forecasting.

Personal fit is perhaps more important for research than other skills

The most talented researchers seem to have far more impact than typical researchers, across a wide variety of metrics and according to the opinions of other researchers.

For instance, when we surveyed biomedical researchers, they said that very good researchers were rare, and they’d be willing to turn down large amounts of money if they could get a good researcher for their lab.8 Professor John Todd, who works on medical genetics at Cambridge, told us:

The best people are the biggest struggle. The funding isn’t a problem. It’s getting really special people[…] One good person can cover the ground of five, and I’m not exaggerating.

This makes sense if you think the distribution of research output is very wide — that the very best researchers have a much greater output than the average researcher.

How much do researchers differ in productivity?

It’s hard to know exactly how spread out the distribution is, but there are several strands of evidence that suggest the variability is very high.

Firstly, most academic papers get very few citations, while a few get hundreds or even thousands. An analysis of citation counts in science journals found that ~47% of papers had never been cited, more than 80% had been cited 10 times or fewer, and the top 0.1% had been cited more than 1,000 times. A similar pattern seems to hold across individual researchers, meaning that only a few dominate — at least in terms of the recognition their papers receive.

Citation count is a highly imperfect measure of research quality, so these figures shouldn’t be taken at face value. For instance, which papers get cited the most may depend at least partly on random factors, academic fashions, and “winner takes all” effects — papers that get noticed early end up being cited by everyone to back up a certain claim, even if they don’t actually represent the research that most advanced the field.

However, there are other reasons to think the distribution of output is highly skewed.

William Shockley, who won the Nobel Prize for the invention of the transistor, gathered statistics on all the research employees in national labs, university departments, and other research units, and found that productivity (as measured by total number of publications, rate of publication, and number of patents) was highly skewed, following a log-normal distribution.

Shockley suggests that researcher output is the product of several (normally distributed) random variables — such as the ability to think of a good question to ask, figure out how to tackle the question, recognise when a worthwhile result has been found, write adequately, respond well to feedback, and so on. This would explain the skewed distribution: if research output depends on eight different factors and their contribution is multiplicative, then a person who is 50% above average in each of the eight areas will in expectation be 26 times more productive than average.9
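
To make the multiplicative model concrete, here’s a minimal simulation sketch in Python (our own illustration, not Shockley’s data; the number of factors and their spread are assumptions chosen for the example). Because output is a product of modest factors, the resulting distribution comes out approximately log-normal and heavy-tailed:

    import numpy as np

    rng = np.random.default_rng(seed=0)

    n_researchers = 100_000
    n_factors = 8  # e.g. question choice, method, judgement, writing, responding to feedback

    # Each factor is a modest multiplier centred on 1 (normally distributed,
    # floored just above zero so a product of factors can't go negative).
    factors = np.clip(rng.normal(loc=1.0, scale=0.3, size=(n_researchers, n_factors)), 0.05, None)

    # A researcher's output is the *product* of their factors, as in Shockley's model,
    # which yields an approximately log-normal, heavy-tailed distribution.
    output = factors.prod(axis=1)

    top_1pct_cutoff = np.quantile(output, 0.99)
    top_1pct_share = output[output >= top_1pct_cutoff].sum() / output.sum()

    print(f"median output: {np.median(output):.2f}")
    print(f"mean output:   {output.mean():.2f}")  # mean well above median indicates skew
    print(f"share of total output from the top 1%: {top_1pct_share:.0%}")

    # The worked example from the text: 50% above average on each of 8 factors
    print(f"1.5 ** 8 = {1.5 ** 8:.1f}")  # roughly 26 times the baseline

The exact numbers depend on the assumed spread of each factor, but the qualitative pattern (a long right tail, with a small fraction of researchers accounting for a disproportionate share of output) is robust to the details.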

When we looked at up-to-date data on how productivity differs across many different areas, we found very similar results. The bottom line is that research is perhaps the area where we have the best evidence for output being heavy-tailed.

Interestingly, while there’s a huge spread in productivity, the most productive academic researchers are rarely paid 10 times more than the median, since they’re on fixed university pay-scales. This means that the most productive researchers yield a large “excess” value to their field. For instance, if a productive researcher adds 10 times more value to the field than average, but is paid the same as average, they will be producing at least nine times as much net benefit to society. This suggests that top researchers are underpaid relative to their contribution, discouraging them from pursuing research and making research skills undersupplied compared to what would be ideal.

Can you predict these differences in advance?

Practically, the important question isn’t how big the spread is, but whether you could — early on in your career — identify whether or not you’ll be among the very best researchers.

There’s good news here! At least in scientific research, these differences also seem to be at least somewhat predictable ahead of time, which means the people entering research with the best fit could have many times more expected impact.

In a study, two IMF economists looked at maths professors’ scores in the International Mathematical Olympiad — a prestigious maths competition for high school students. They concluded that each additional point scored on the International Mathematics Olympiad “is associated with a 2.6 percent increase in mathematics publications and a 4.5 percent increase in mathematics citations.”

We looked at a range of data on how predictable productivity differences are in various areas and found that they’re much more predictable in research.

What does this mean for building research skills?

The large spread in productivity makes building strong research skills a lot more promising if you’re a better fit than average. And if you’re a great fit, research can easily become your best option.

And while these differences in output are not fully predictable at the start of a career, the spread is so large that it’s likely still possible to predict differences in productivity with some reliability.

This also means you should mainly be evaluating your long-term expected impact in terms of your chances of having a really big success.

That said, don’t rule yourself out too early. Firstly, many people systematically underestimate their skills. (Though others overestimate them!) Also, the impact of research can be so large that it’s often worth trying it out, even if you don’t expect you’ll succeed. This is especially true because the early steps of a research career often give you good career capital for many other paths.

How to evaluate your fit

How to predict your fit in advance

It’s hard to predict success in advance, so we encourage an empirical approach: see if you can try it out and look at your track record.

You probably have some track record in research: many of our readers have some experience in academia from doing a degree, whether or not they intended to go into academic research. Standard academic success can also point towards being a good fit (though it’s nowhere near sufficient!):

  • Did you get top grades at undergraduate level (a 1st in the UK or a GPA over 3.5 in the US)?
  • If you do a graduate degree, what’s your class rank (if you can find that out)? If you do a PhD, did you manage to author an article in a top journal (although note that this is easier in some disciplines than others)?

Ultimately, though, your academic track record isn’t going to tell you anywhere near as much as actually trying out research. So it’s worth looking for ways to cheaply try out research (which can be easy if you’re at college). For example, try doing a summer research project and see how it goes.

Some of the key traits that suggest you might be a good fit for building research skills seem to be:

  • Intelligence (Read more about whether intelligence is important for research.)
  • The potential to become obsessed with a topic (Becoming an expert in anything can take decades of focused practice, so you need to be able to stick with it.)
  • Relatedly, high levels of grit, self-motivation, and — especially for independent big picture research, but also for research in academia — the ability to learn and work productively without a traditional manager or many externally imposed deadlines
  • Openness to new ideas and intellectual curiosity
  • Good research taste, i.e. noticing when a research question matters a lot for solving a pressing problem

There are a number of other cheap ways you might try to test your fit.

Something you can do at any stage is practice research and research-based writing. One way to get started is to try learning by writing.

You could also try:

  • Finding out what the prerequisites/normal backgrounds of people who go into a research area are to compare your skills and experience to them
  • Reading key research in your area, trying to contribute to discussions with other researchers (e.g. via a blog or Twitter), and getting feedback on your ideas
  • Talking to successful researchers in a field and asking what they look for in new researchers

How to tell if you’re on track

Here are some broad milestones you could aim for while becoming a researcher:

  • You’re successfully devoting time to building your research skills and communicating your findings to others. (This can often be the hardest milestone to hit for many — it can be hard to simply sustain motivation and productivity given how self-directed research often needs to be.)
  • In your own judgement, you feel you have made and explained multiple novel, valid, nontrivially important (though not necessarily earth-shattering) points about important topics in your area.
  • You’ve had enough feedback (comments, formal reviews, personal communication) to feel that at least several other people (whose judgement you respect and who have put serious time into thinking about your area) agree, and (as a result) feel they’ve learned something from your work. For example, lots of this feedback could come from an academic supervisor. Make sure you’re asking people in a way that makes it easy for them to tell you if you’re not doing well.
  • You’re making meaningful connections with others interested in your area — connections that seem likely to lead to further funding and/or job opportunities. This could be from the organisations most devoted to your topics of interest; but there could also be a “dissident” dynamic, in which these organisations seem uninterested or defensive while others notice this and offer help.

If you’re finding it hard to make progress in a research environment, it’s very possible that this is the result of that particular environment, rather than the research itself. So it can be worth testing out multiple different research jobs before deciding this skill set isn’t for you.

Within academic research

Academia has clearly defined stages, so you can see how you’re performing at each of these.

Very roughly, you can try asking “How quickly and impressively is my career advancing, by the standards of my institution and field?” (Be careful to consider the field as a whole, rather than just your immediate peers, who might be very different from average.) Academics with more experience than you may be able to help give you a clear idea of how things are going.

We go through this in detail in our review of academic research careers.

Within independent research

As a very rough guideline, people who are an excellent fit for independent research can often reach the broad milestones above with a year of full-time effort purely focusing on building a research skill set, or 2–3 years of 20%-time independent effort (i.e. one day per week).

Within research in industry or policy

The stages here can look more like an organisation-building career, and you can also assess your fit by looking at your rate of progression through the organisation.

How to get started building research skills

As we mentioned above, if you’ve done an undergraduate degree, one obvious pathway into research is to go to graduate school (read our advice on choosing a graduate programme) and then attempt to enter academia before deciding whether to continue or pursue positions outside of academia later in your career.

If you take the academic path, then the next steps are relatively clear. You’ll want to try to get excellent grades in your undergraduate degree and master’s, ideally gain some kind of research experience in your summers, and then enter the best PhD programme you can. From there, focus on learning your craft by working under the best researcher you can find as a mentor and working in a top hub for your field. Try to publish as many papers as possible, since that’s usually required to land an academic position.

It’s also not necessary to go to graduate school to become a great researcher (though this depends a lot on the field), especially if you’re very talented.
For instance, we interviewed Chris Olah, who is working on AI research without even an undergraduate degree.

You can enter many non-academic research jobs without a background in academia. So one starting point for building up research skills would be getting a job at an organisation specifically focused on the type of question you’re interested in. For examples, take a look at our list of recommended organisations, many of which conduct non-academic research in areas relevant to pressing problems.

More generally, you can learn research skills in any job that heavily features making difficult intellectual judgement calls and bets, preferably on topics that are related to the questions you’re interested in researching. These might include jobs in finance, political analysis, or even nonprofits.

Another common route — depending on your field — is to develop software and tech skills and then apply them at research organisations. For instance, here’s a guide to how to transition from software engineering into AI safety research.

If you’re interested in doing practical big-picture research (especially outside academia), it’s also possible to establish your career through self-study and independent work — during your free time or on scholarships designed for this (such as EA Long-Term Future Fund grants and Open Philanthropy support for individuals working on relevant topics).

Some example approaches you might take to self-study:

  • Closely and critically review some pieces of writing and argumentation on relevant topics. Explain the parts you agree with as clearly as you can and/or explain one or more of your key disagreements.
  • Pick a relevant question and write up your current view and reasoning on it. Alternatively, write up your current view and reasoning on some sub-question that comes up as you’re thinking about it.
  • Then get feedback, ideally from professional researchers or those who use similar kinds of research in their jobs.

It could also be beneficial to start with some easier versions of this sort of exercise, such as:

  • Explaining or critiquing interesting arguments made on any topic you find motivating to write about
  • Writing fact posts
  • Reviewing the academic literature on any topic of interest and trying to reach and explain a bottom-line conclusion

In general, it’s not necessary to obsess over being “original” or having some new insight at the beginning. You can learn a lot just by trying to write up your current understanding.

Choosing a research field

When you’re getting started building research skills, there are three factors to consider in choosing a field:

  1. Personal fit — what are your chances of being a top researcher in the area? Even if you work on an important question, you won’t make much difference if you’re not particularly good at it or motivated to work on the problem.
  2. Impact — how likely is it that research in your field will contribute to solving pressing problems?
  3. Back-up options — how will the skills you build open up other options if you decide to change fields (or leave research altogether)?

One way to go about making a decision is to roughly narrow down fields by relevance and back-up options and then pick among your shortlist based on personal fit.

We’ve found that, especially when they’re getting started building research skills, people sometimes think too narrowly about what they can be good at and enjoy. As a result, they end up pigeonholing themselves in a specific area (for example, being restricted by the field of their undergraduate degree). This can be harmful because it means people who could contribute to highly important research don’t even consider it. This increases the importance of writing a broad list of possible areas to research.

Given our list of the world’s most pressing problems, we think some of the most promising fields to do research within are as follows:

  • Fields relevant to artificial intelligence, especially machine learning, but also computer science more broadly. This is mainly to work on AI safety directly, though there are also many opportunities to apply machine learning to other problems (as well as many back-up options).
  • Biology, particularly synthetic biology, virology, public health, and epidemiology. This is mainly for biosecurity.
  • Economics. This is for global priorities research, development economics, or policy research relevant to any cause area, especially global catastrophic risks.
  • Engineering — read about developing and using engineering skills to have an impact.
  • International relations/political science, including security studies and public policy — these enable you to do research into policy approaches to mitigating catastrophic risks and are also a good route into careers in government and policy more broadly.
  • Mathematics, including applied maths or statistics (or even physics). This may be a good choice if you’re very uncertain, as it teaches you skills that can be applied to a whole range of different problems — and lets you move into most of the other fields we list. It’s relatively easy to move from a mathematical PhD into machine learning, economics, biology, or political science, and there are opportunities to apply quantitative methods to a wide range of other fields. They also offer good back-up options outside of research.
  • There are many important topics in philosophy and history, but these fields are unusually hard to advance within, and don’t have as good back-up options. (We do know lots of people with philosophy PhDs who have gone on to do other great, non-philosophy work!)

However, many different kinds of research skills can play a role in tackling pressing global problems.

Choosing a sub-field can sometimes be almost as important as choosing a field. For example, in some sciences the particular lab you join will determine your research agenda — and this can shape your entire career.

And as we’ve covered, personal fit is especially important in research. This can mean it’s easily worth going into a field that seems less relevant on average if you are an excellent fit. (This is due both to the value of the research you might produce and the excellent career capital that comes from becoming top of an academic field.)

For instance, while we most often recommend the fields above, we’d be excited to see some of our readers go into history, psychology, neuroscience, and a whole number of other fields. And if you have a different view of global priorities from us, there might be many other highly relevant fields.

Once you have these skills, how can you best apply them to have an impact?

Richard Hamming used to annoy his colleagues by asking them “What’s the most important question in your field?”, and then after they’d explained, following up with “And why aren’t you working on it?”

You don’t always need to work on the very most important question in your field, but Hamming has a point. Researchers often drift into a narrow speciality and can get detached from the questions that really matter.

Now let’s suppose you’ve chosen a field, learned your craft, and are established enough that you have some freedom about where to focus. Which research questions should you focus on?

Which research topics are the highest-impact?

Charles Darwin travelled the oceans to carefully document different species of birds on a small collection of islands — documentation which later became fuel for the theory of evolution. This illustrates how hard it is to predict which research will be most impactful.

What’s more, we can’t know what we’re going to discover until we’ve discovered it, so research has an inherent degree of unpredictability. There’s certainly an argument for curiosity-driven research without a clear agenda.

That said, we think it’s also possible to increase your chances of working on something relevant, and the best approach is to try to find topics that both personally motivate you and seem more likely than average to matter. Here are some approaches to doing that.

Using the problem framework

One approach is to ask yourself which global problems you think are most pressing, and then try to identify research questions that are:

  • Important to making progress on those problems (i.e. if this question were answered, it would lead to more progress on these problems)
  • Neglected by other researchers (e.g. because they’re at the intersection of two fields, unpopular for bad reasons, or new)
  • Tractable (i.e. you can see a path to making progress)

The best research questions will score at least moderately well on all parts of this framework. Building a perpetual motion machine is extremely important — if we could do it, then we’d solve our energy problems — but we have good reason to think it’s impossible, so it’s not worth working on. Similarly, a problem can be important but already have the attention of many extremely talented researchers, meaning your extra efforts won’t go very far.
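
As a toy illustration of how these criteria combine (our own sketch, not an official 80,000 Hours tool; the example questions and scores below are made up), you could rate candidate questions on each factor and multiply the ratings, so that a score near zero on any one factor rules the question out:

    from dataclasses import dataclass

    @dataclass
    class ResearchQuestion:
        name: str
        importance: float     # 0-10: answering it would drive progress on a pressing problem
        neglectedness: float  # 0-10: few other researchers are already working on it
        tractability: float   # 0-10: you can see a path to making progress

        def score(self) -> float:
            # Multiplying (rather than adding) means a near-zero on any single factor
            # sinks the question, matching "score at least moderately well on all parts."
            return self.importance * self.neglectedness * self.tractability

    candidates = [
        ResearchQuestion("Perpetual motion machine", importance=10, neglectedness=9, tractability=0),
        ResearchQuestion("Crowded mainstream topic", importance=7, neglectedness=1, tractability=8),
        ResearchQuestion("Important but overlooked, answerable question", importance=7, neglectedness=6, tractability=5),
    ]

    for question in sorted(candidates, key=lambda q: q.score(), reverse=True):
        print(f"{question.name}: {question.score():.0f}")

In practice you wouldn’t put much weight on the precise numbers; the point is the structure: a promising question has to clear a bar on all three factors at once.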

Finding these questions, however, is difficult. Often, the only way to identify a particularly promising research question is to be an expert in that field! That’s because, when researchers are doing their jobs, they will already be taking the most obvious opportunities.

However, the incentives within research rarely line up perfectly with the questions that most matter (especially if you have unusual values, like more concern for future generations or animals). This means that some questions often get unfairly neglected. If you’re someone who cares a lot about positive impact and has some slack, you can have a greater-than-average impact by looking for them.

Below are some more ways of finding those questions (which you can use in addition to directly applying the framework above).

Rules of thumb for finding unfairly neglected questions

  • There’s little money in answering the question. This can be because the problem mostly affects poorer people, people who are in the future, or non-humans, or because it involves public goods. This means there’s little incentive for businesses to do research on this question.
  • The political incentives to answer the question are missing. This can happen when the problem hurts poorer or otherwise marginalised people, people who tend not to organise politically, people in countries outside the one where the research is most likely to get done, people who are in the future, or non-humans. This means there’s no incentive for governments or other public actors to research this question.
  • It’s new, doesn’t already have an established discipline, or is at the intersection of two disciplines. The first researchers in an area tend to take any low hanging fruit, and it gets harder and harder from there to make big discoveries. For example, the rate of progress within machine learning is far higher than the rate of progress within theoretical physics. At the same time, the structure of academia means most researchers stay stuck within the field they start in, and it can be hard to get funding to branch out into other areas. This means that new fields or questions at the intersection of two disciplines often get unfairly neglected and therefore provide opportunities for outsized impact.
  • There is some aspect of human irrationality that means people don’t correctly prioritise the issue. For instance, some issues are easy to visualise, which makes them more motivating to work on. People are scope blind, which means they’re likely to neglect the issues with the very biggest scale. They’re also bad at reasoning about issues with low probability, which can make them either over-invest or under-invest in them.
  • Working on the question is low status. In academia, research that’s intellectually interesting and fits the research standards of the discipline is high status. Also, mathematical and theoretical work tends to be seen as higher status (and therefore helps to progress your career). But these factors don’t correlate that well with the social value of the question.
  • You’re bringing new skills or a new perspective to an established area. Progress often comes in science from bringing the techniques and insights of one field into another. For instance, Kahneman started a revolution in economics by applying findings from psychology. Cross-over is an obvious approach but is rarely used because researchers tend to be immersed in their own particular subject.

If you think you’ve found a research question that’s short on talent, it’s worth checking whether the question is answerable. People might be avoiding the question because it’s just extremely difficult to find an answer. Or perhaps progress isn’t possible at all. Ask yourself, “If there were progress on this question, how would we know?”

Finally, as we’ve discussed, personal fit is particularly important in research. So position yourself to work on questions where you maximise your chances of producing top work.

Find jobs that use research skills

If you already have these skills or are developing them, and you’re ready to start looking at job opportunities that are currently accepting applications, see our curated list of opportunities for this skill set:

    View all opportunities

    Career paths we’ve reviewed that use these skills

    Learn more about research

    See all our articles and podcasts on research careers.

    Read next:  Explore other useful skills

    Want to learn more about the most useful skills for solving global problems, according to our research? See our list.

    AI governance and coordination
    https://80000hours.org/career-reviews/ai-policy-and-strategy/

    As advancing AI capabilities gained widespread attention in late 2022 and 2023 — particularly after the release of OpenAI’s ChatGPT and Microsoft’s Bing chatbot — interest in governing and regulating these systems has grown. Discussion of the potential catastrophic risks of misaligned or uncontrollable AI also became more prominent, potentially opening up opportunities for policy that could mitigate the threats.

    There’s still a lot of uncertainty about which strategies for AI governance and coordination would be best, though parts of the community of people working on this subject may be coalescing around some ideas. See, for example, a list of potential policy ideas from Luke Muehlhauser of Open Philanthropy1 and a survey of expert opinion on best practices in AI safety and governance.

    But there’s no roadmap here. There’s plenty of room for debate about which policies and proposals are needed.

    We may not have found the best ideas yet in this space, and many of the existing policy ideas haven’t yet been developed into concrete, public proposals that could actually be implemented. We hope to see more people enter this field to develop expertise and skills that will contribute to risk-reducing AI governance and coordination.

    In a nutshell: Advanced AI systems could have massive impacts on humanity and potentially pose global catastrophic risks. There are opportunities in AI governance and coordination around these threats to shape how society responds to and prepares for the challenges posed by the technology.

    Given the high stakes, pursuing this career path could be many people’s highest-impact option. But they should be very careful not to accidentally exacerbate the threats rather than mitigate them.

    Recommended

    If you are well suited to this career, it may be the best way for you to have a social impact.

    Review status

    Based on an in-depth investigation 

    “What you’re doing has enormous potential and enormous danger.” — US President Joe Biden, to the leaders of the top AI labs

    Why this could be a high-impact career path

    Artificial intelligence has advanced rapidly. In 2022 and 2023, new language and image generation models gained widespread attention for their abilities, blowing past previous benchmarks the technology had met.

    And the applications of these models are still new; with more tweaking and integration into society, the existing AI systems may become easier to use and more ubiquitous in our lives.

    We don’t know where all these developments will lead us. There’s reason to be optimistic that AI will eventually help us solve many of the world’s problems, raising living standards and helping us build a more flourishing society.

    But there are also substantial risks. AI can be used for both good and ill. And we have concerns that the technology could, without the proper controls, accidentally lead to a major catastrophe — and perhaps even cause human extinction. We discuss the arguments that these risks exist in our in-depth problem profile.

    Because of these risks, we encourage people to work on finding ways to reduce these risks through technical research and engineering.

    But a range of strategies for risk reduction will likely be needed. Government policy and corporate governance interventions in particular may be necessary to ensure that AI is developed to be as broadly beneficial as possible and without unacceptable risk.

    Governance generally refers to the processes, structures, and systems that carry out decision making for organisations and societies at a high level. In the case of AI, we expect the governance structures that matter most to be national governments and organisations developing AI — as well as some international organisations and perhaps subnational governments.

    Some aims of AI governance work could include:

    • Preventing the deployment of any AI systems that pose a significant and direct threat of catastrophe
    • Mitigating the negative impact of AI technology on other catastrophic risks, such as nuclear weapons and biotechnology
    • Guiding the integration of AI technology into our society and economy with limited harms and to the advantage of all
    • Reducing the risk of an “AI arms race,” between nations or between companies, in which competition leads to technological advancement without the necessary safeguards and caution
    • Ensuring that those creating the most advanced AI models are incentivised to be cooperative and concerned about safety
    • Slowing down the development and deployment of new systems if the advancements are likely to outpace our ability to keep them safe and under control

    We need a community of experts who understand the intersection of modern AI systems and policy, as well as the severe threats and potential solutions. This field is still young, and many of the paths within it aren’t clear and are not sure to pan out. But there are relevant professional paths that will provide you with valuable career capital for a variety of positions and types of roles.

    The rest of this article explains what work in this area might involve, how you can develop career capital and test your fit, and where some promising places to work might be.

    What kinds of work might contribute to AI governance?

    What should governance-related work on AI actually involve? There are a variety of ways to pursue AI governance strategies, and as the field becomes more mature, the paths are likely to become clearer and more established.

    We generally don’t think people early in their careers should be aiming for a specific job that they think would be high-impact. They should instead aim to develop skills, experience, knowledge, judgement, networks, and credentials — what we call career capital — that they can later use when an opportunity to have a positive impact is ripe.

    This may involve following a pretty standard career trajectory, or it may involve bouncing around in different kinds of roles. Sometimes, you just have to apply to a bunch of different roles and test your fit for various types of work before you know what you’ll be good at. The main thing to keep in mind is that you should try to get excellent at something for which you have strong personal fit and that will let you contribute to solving pressing problems.

    In the AI governance and coordination space, we see at least six large categories of work that we expect to be important:

    There aren’t necessarily openings in all these categories at the moment for careers in AI governance, but they represent a range of sectors in which impactful work may potentially be done in the coming years and decades. Thinking about the different skills and forms of career capital that will be useful for the categories of work you could see yourself doing in the future can help you figure out what your immediate next steps should be. (We discuss how to assess your fit and enter this field below.)

    You may want to — and indeed it may be advantageous to — move between these different categories of work at different points in your career. You can also test out your fit for various roles by taking internships, fellowships, entry-level jobs, temporary placements, or even doing independent research, all of which can serve as career capital for a range of paths.

    We have also reviewed career paths in AI technical safety research and engineering and information security, which may be crucial to reducing risks from AI, and which may play a significant role in an effective governance agenda. People serious about pursuing a career in AI governance should familiarise themselves with these fields as well.

    Government work

    Taking a role within government could lead to playing an important role in the development, enactment, and enforcement of AI policy.

    Note that we generally expect that the US federal government will be the most significant player in AI governance for the foreseeable future. This is because of its global influence and its jurisdiction over much of the AI industry, including the top three AI labs training state-of-the-art, general-purpose models (Anthropic, OpenAI, and Google DeepMind) and key parts of the chip supply chain. Much of this article focuses on US policy and government.2

    But other governments and international institutions may also end up having important roles to play in certain scenarios. For example, the UK government, the European Union, China, and potentially others, may all present opportunities for impactful AI governance work. Some US state-level governments, such as California, may also offer opportunities for impact and gaining career capital.

    What would this work involve? Sections below discuss how to enter US policy work and which areas of the government you might aim for.

    But at the broadest level, people interested in positively shaping AI policy should aim to gain the skills and experience to work in areas of government with some connection to AI or emerging technology policy.

    This can include roles in: legislative branches, domestic regulation, national security, diplomacy, appropriations and budgeting, and other policy areas.

    If you can get a role out of the gate that is already working directly on this issue, such as a staff position with a lawmaker who is focused on AI, that could be a great opportunity.

    Otherwise, you should seek to learn as much as you can about how policy works and which government roles might allow you to have the most impact, while establishing yourself as someone who’s knowledgeable about the AI policy landscape. Having almost any significant government role that touches on some aspect of AI, or having some impressive AI-related credential, may be enough to get you quite far.

    One way to advance your career in government on a specific topic is what some call “getting visibility” — that is, using your position to learn about the landscape and connect with the actors and institutions that affect the policy area you care about. You’ll want to be invited to meetings with other officials and agencies, be asked for input on decisions, and engage socially with others who work in the policy area. If you can establish yourself as a well-regarded expert on an important but neglected aspect of the issue, you’ll have a better shot at being included in key discussions and events.

    Career trajectories within government can be broken down roughly as follows:

    • Standard government track: This involves entering government at a relatively low level and building up your career capital on the inside by climbing the seniority ladder. For the highest impact, you’d ideally end up reaching senior levels by sticking around, gaining skills and experience, and getting promoted. You may move between agencies, departments, or branches.
    • Specialisation career capital: You can also move in and out of government throughout your career. People on this trajectory will also work at nonprofits, think tanks, industry labs, political parties, academia, and other organisations. But they will primarily focus on becoming an expert in a topic — such as AI. It can be harder to get seniority this way, but the value of expertise and experience can sometimes outweigh seniority.
    • Direct-impact work: Some people move into government jobs without a longer plan to build career capital because they see an opportunity for direct, immediate impact. This might look like getting tapped to lead an important commission or providing valuable input on an urgent project. We don’t generally recommend planning on this kind of strategy for your career, but it’s good to be aware of it as an opportunity that might be worth taking at some point.

    Research on AI policy and strategy

    There’s still a lot of research to be done on the most important avenues for AI governance approaches. While there are some promising proposals for a system of regulatory and strategic steps that can help reduce the risk of an AI catastrophe, there aren’t many concrete and publicly available policy proposals ready for adoption.

    The world needs more concrete proposals for AI policies that would really start to tackle the biggest threats; developing such policies, and deepening our understanding of the strategic needs of the AI governance space, should be high priorities.

    Other relevant research could involve surveys of public opinion that could inform communication strategies, legal research about the feasibility of proposed policies, technical research on issues like compute governance, and even higher-level theoretical research into questions about the societal implications of advanced AI. Some research, such as that done by Epoch AI, focuses on forecasting the future course of AI developments, which can influence AI governance decisions.

    However, several experts we’ve talked to warn that a lot of research on AI governance may prove to be useless, so it’s important to be reflective and seek input from others in the field — both from experienced policy practitioners and technical experts — about what kind of contribution you can make. We list several research organisations below that we think would be good to work at in order to pursue promising research on this topic.

    One potentially useful approach for testing your fit for this work — especially when starting out in this research — is to write up analyses and responses to existing work on AI policy or investigate some questions in this area that haven’t been the subject of much attention. You can then share your work widely, send it out for feedback from people in the field, and evaluate how much you enjoy the work and whether you might productively contribute to this research longer term.

    But it’s possible to spend too long testing your fit without making much progress, and some people find that they’re best able to contribute when they’re working on a team. So don’t overweight or over-invest in independent work, especially if there are few signs it’s working out especially well for you. This kind of project can make sense for maybe a month or a bit longer — but it’s unlikely to be a good idea to spend much more than that without meaningful funding or some really encouraging feedback from people working in the field.

If you have the experience to be hired as a researcher, you can work on AI governance in academia, nonprofit organisations, and think tanks. Some government agencies and committees also perform valuable research.

    Note that universities and academia have their own priorities and incentives that often aren’t aligned with producing the most impactful work. If you’re already an established researcher with tenure, it may be highly valuable to pivot into work on AI governance — this position may even give you a credible platform from which to advocate for important ideas.

    But if you’re just starting out a research career and want to focus on this issue, you should carefully consider whether your work will be best supported inside or outside of academia. For example, if you know of a specific programme with particular mentors who will help you pursue answers to critical questions in this field, it might be worth doing. We’re less inclined to encourage people to pursue generic academic-track roles with the vague hope that one day they can do important research on this topic.

    Advanced degrees in policy or relevant technical fields may well be valuable, though — see more discussion of this in the section on how to assess your fit and get started.

    Industry work

    While government policy is likely to play a key role in coordinating various actors interested in reducing the risks from advanced AI, internal policy and corporate governance at the largest AI labs themselves is also a powerful tool. We think people who care about reducing risk can potentially do valuable work internally at industry labs. (Read our career review of non-technical roles at AI labs.)

    At the highest level, deciding who sits on corporate boards, what kind of influence those boards have, and to what extent the organisation is structured to seek profit and shareholder value as opposed to other aims, can end up having a major impact on the direction a company takes. If you might be able to get a leadership role at a company developing frontier AI models, such as a management position or a seat on the board, it could potentially be a very impactful position.

    If you’re able to join a policy team at a major lab, you can model threats and help develop, implement, and evaluate promising proposals internally to reduce risks. And you can build consensus around best practices, such as strong information security policies, using outside evaluators to find vulnerabilities and dangerous behaviours in AI systems (red teaming), and testing out the latest techniques from the field of AI safety.

    And if, as we expect, AI labs face increasing government oversight, industry governance and policy work can ensure compliance with any relevant laws and regulations that get put in place. Interfacing with government actors and facilitating coordination over risk reduction approaches could be impactful work.

    In general, the more cooperative AI labs are with each other3 and outside groups seeking to minimise catastrophic risks from AI, the better. And this doesn’t seem to be an outlandish hope — many industry leaders have expressed concern about extinction risks and have even called for regulation of the frontier technology they’re creating.

    That said, we can expect this cooperation to take substantial work — it would be surprising if the best policies for reducing risks were totally uncontroversial in industry, since labs also face huge commercial incentives to build more powerful systems, which can carry more risk. The more everyone’s able to communicate and align their incentives, the better things seem likely to go.

    Advocacy and lobbying

    People outside of government or AI labs can influence the shape of public policy and corporate governance via advocacy and lobbying.

    As of this writing, there has not yet been a large public movement in favour of regulating or otherwise trying to reduce risks from AI, so there aren’t many openings that we know about in this category. But we expect growing interest in this area to open up new opportunities to press for political action and policy changes at AI labs, and it could make sense to start building career capital and testing your fit now for different kinds of roles that would fall into this category down the line.

    If you believe AI labs may be disposed to advocate for generally beneficial regulation, you might want to try to work for them, or become a lobbyist for the industry as a whole, to push the government to adopt specific policies. It’s plausible that AI labs will have by far the best understanding of the underlying technology, as well as the risks, failure modes, and safest paths forward.

On the other hand, it could be the case that AI labs have too much of a vested interest in the shape of regulations to reliably advocate for broadly beneficial policies. If that's right, it may be better to join or create advocacy organisations independent of the industry — supported by donations or philanthropic foundations — that can take stances opposed to the labs' commercial interests.

    For example, it could be the case that the best approach from a totally impartial perspective would be at some point to deliberately slow down or halt the development of increasingly powerful AI models. Advocates could make this demand of the labs themselves or of the government to slow down AI progress. It may be difficult to come to this conclusion or advocate for it if you have strong connections to the companies creating these systems.

    It’s also possible that the best outcomes will be achieved with a balance of industry lobbyists and outside lobbyists and advocates making the case for their preferred policies — as both bring important perspectives.

    We expect there will be increasing public interest in AI policy as the technological advancements have ripple effects in the economy and wider society. And if there’s increasing awareness of the impact of AI on people’s lives, the risks the technology poses may become more salient to the public, which will give policymakers strong incentives to take the problem seriously. It may also bring new allies into the cause of ensuring that the development of advanced AI goes well.

    Advocacy can also:

    • Highlight neglected but promising approaches to governance that have been uncovered in research
    • Facilitate the work of policymakers by showcasing the public’s support for governance measures
    • Build bridges between researchers, policymakers, the media, and the public by communicating complicated ideas in an accessible way to many audiences
    • Pressure corporations themselves to proceed more cautiously
    • Change public sentiment around AI and discourage irresponsible behaviour by individual actors, such as the spreading of powerful open-source models

However, note that advocacy can sometimes backfire. Predicting how information will be received is far from straightforward. Drawing attention to a cause area can sometimes trigger a backlash; presenting problems with certain styles of rhetoric can alienate people or polarise public opinion; and spreading misleading or mistaken messages can discredit you and your fellow advocates. It's important that you are aware of the risks, consult with others (particularly those you respect but may disagree with on tactics), and commit to educating yourself deeply about the topic before expounding on it in public.

    You can read more in the section about doing harm below. We also recommend reading our article on ways people trying to do good accidentally make things worse and how to avoid them.

    Case study: the Future of Life Institute open letter

    In March 2023, the Future of Life Institute published an open letter calling for a pause of at least six months on training any new models more “powerful” than OpenAI’s GPT-4 — which had been released about a week earlier. GPT-4 is a state-of-the-art language model that can be used through ChatGPT to produce novel and impressive text responses to a wide range of prompts.

The letter attracted a lot of attention, perhaps in part because it was signed by prominent figures such as Elon Musk. While it didn't immediately achieve its explicit aims — the labs didn't commit to a pause — it fostered public conversations about the risks of AI and the potential benefits of slowing down. (An earlier article titled "Let's think about slowing down AI" — by Katja Grace of the research organisation AI Impacts — aimed to have a similar effect.)

    There’s no clear consensus on whether the FLI letter was on the right track. Some critics of the letter, for example, said that its advice would actually lead to worse outcomes overall if followed, because it would slow down AI safety research while many of the innovations that drive AI capabilities progress, such as chip development, would continue to race forward. Proponents of the letter pushed back on these claims.4 It does seem clear that the letter changed the public discourse around AI safety in a way that few other efforts have achieved, which is proof of concept for what impactful advocacy can accomplish.

    Third-party auditing and evaluation

    If regulatory measures are put in place to reduce the risks of advanced AI, some agencies and organisations — within government or outside — will need to audit companies and systems to make sure that regulations are being followed.

    One nonprofit, the Alignment Research Center, has been at the forefront of this kind of work.5 In addition to its research work, it has launched a program to evaluate the capabilities of advanced AI models. In early 2023, the organisation partnered with two leading AI labs, OpenAI and Anthropic, to evaluate the capabilities of the latest versions of their chatbot models prior to their release. They sought to determine in a controlled environment if the models had any potentially dangerous capabilities.

    The labs voluntarily cooperated with ARC for this project, but at some point in the future, these evaluations may be legally required.

Governments often rely on third-party auditors as crucial players in regulation, because the government may lack the expertise (or the capacity to pay for the expertise) that the private sector has. We don't know of many openings of this type as of this writing, but such roles may end up playing a critical part in an effective AI governance framework.

    Other types of auditing and evaluation may be required as well. ARC has said it intends to develop methods to determine which models are appropriately aligned — that is, that they will behave as their users intend them to behave — prior to release.

    Governments may also want to employ auditors to evaluate the amount of compute that AI developers have access to, their information security practices, the uses of models, the data used to train models, and more.
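To give a sense of the kind of quantitative estimate a compute auditor might make, a widely cited rule of thumb is that training a dense transformer takes roughly six floating-point operations per parameter per training token. The sketch below is only an illustration of that back-of-the-envelope calculation — the model size and token count are made-up numbers, not figures from any real audit:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    # Rough rule of thumb for dense transformer training:
    # about 6 floating-point operations per parameter per training token,
    # covering the forward and backward passes.
    return 6 * n_params * n_tokens


# Illustrative (made-up) numbers: a 70-billion-parameter model
# trained on 1.4 trillion tokens.
print(f"{training_flops(70e9, 1.4e12):.2e} FLOPs")  # ~5.88e+23
```

In practice, auditors might combine estimates like this with other evidence, such as reported hardware purchases or cloud usage, to cross-check what developers claim.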

    Acquiring the technical skills and knowledge to perform these types of evaluations, and joining organisations that will be tasked to perform them, could be the foundation of a highly impactful career. This kind of work will also likely have to be facilitated by people who can manage complex relationships across industry and government. Someone with experience in both sectors could have a lot to contribute.

    Some of these types of roles may have some overlap with work in AI technical safety research.

One potential advantage of working in the private sector on AI governance is that you may be significantly better paid than you would be in government.

    International work and coordination

    US-China

    For someone with the right fit, cooperation and coordination with China on the safe development of AI could be a particularly impactful approach within the broad AI governance career path.

    The Chinese government has been a major funder in the field of AI, and the country has giant tech companies that could potentially drive forward advances.

    Given tensions between the US and China, and the risks posed by advanced AI, there’s a lot to be gained from increasing trust, understanding, and coordination between the two countries. The world will likely be much better off if we can avoid a major conflict between great powers and if the most significant players in emerging technology can avoid exacerbating any global risks.

    We have a separate career review that goes into more depth on China-related AI safety and governance paths.

    Other governments and international organisations

    As we’ve said, we focus most on US policy and government roles. This is largely because we anticipate that the US is now and will likely continue to be the most pivotal actor when it comes to regulating AI, with a major caveat being China, as discussed in the previous section.

But many people interested in working on this issue can't or don't want to work in US policy — perhaps because they live in another country and don't intend to move.

    Much of the advice above still applies to these people, because roles in AI governance research and advocacy can be done outside of the United States.6 And while we don’t think it’s generally as impactful in expectation as US government work, opportunities in other governments and international organisations can be complementary to the work to be done in the US.

    The United Kingdom, for instance, may present another strong opportunity for AI policy work that would complement US work. Top UK officials have expressed interest in developing policy around AI, perhaps even a new international agency, and reducing extreme risks. And the UK government announced in 2023 the creation of a new AI Foundation Model Taskforce, with the expressed intention to drive forward safety research.

    It’s possible that by taking significant steps to understand and regulate AI, the UK will encourage or inspire US officials to take similar steps by showing how it can work.

And any relatively wealthy country could use portions of its budget to fund AI safety research. While a lot of the most important work likely needs to be done in the US — by leading researchers at labs with access to large amounts of compute — some lines of research may be productive even without these resources. Any significant advances in AI safety research, if communicated properly, could be used by researchers working on the most powerful models.

    Other countries might also develop liability standards for the creators of AI systems that could incentivise corporations to proceed more cautiously and judiciously before releasing models.

    The European Union has shown that its data protection standards — the General Data Protection Regulation (GDPR) — affect corporate behaviour well beyond its geographical boundaries. EU officials have also pushed forward on regulating AI, and some research has explored the hypothesis that the impact of the union’s AI regulations will extend far beyond the continent — the so-called “Brussels effect.”

And at some point, we do expect there will be AI treaties and international regulations, just as the international community has created the International Atomic Energy Agency, the Biological Weapons Convention, and the Intergovernmental Panel on Climate Change to coordinate around and mitigate other global catastrophic threats.

    Efforts to coordinate governments around the world to understand and share information about threats posed by AI may end up being extremely important in some future scenarios.

The Organisation for Economic Co-operation and Development (OECD) is one place where such work might occur. So far, it has been the most prominent international actor working on AI policy and has created the AI Policy Observatory.

Third-party countries may also be able to facilitate cooperation and reduce tensions between the United States and China, whether around AI or other potential flashpoints, should such an intervention become necessary.

    How policy gets made

    What does it actually take to make policy?

    In this section, we’ll discuss three phases of policy making: agenda setting, policy creation and development, and implementation. We’ll generally discuss these as aspects of making government policy, but they could also be applied to organisational policy. The following section will discuss the types of work that you could do to positively contribute to the broad field of AI governance.

    Agenda setting

    To enact and implement a programme of government policies that have a positive impact, you have to first ensure that the subject of potential legislation and regulation is on the agenda for policymakers.

    Agenda setting for policy involves identifying and defining problems, drawing attention to the problems and raising their salience (at least to the relevant people), and promoting potential approaches to solving them.

For example, when politicians take office, they often enter on a platform of promises made to their constituents and supporters about which policy agendas they want to pursue. Those agendas are formed through public discussion, media narratives, internal party politics, deliberative debate, interest group advocacy, and other forms of input. The agenda can be, to varying degrees, problem-specific — for example, a broad remit to "improve health care" — or more solution-specific — aiming to create, say, a single-payer health system.

    Issues don’t necessarily have to be unusually salient to get on the agenda. Policymakers or officials at various levels of government can prioritise solving certain problems or enacting specific proposals that aren’t the subject of national debate. In fact, sometimes making issues too salient, framing them in divisive ways, or allowing partisanship and political polarisation to shape the discussion, can make it harder to successfully put solutions on the agenda.

    What’s key for agenda setting as an approach to AI governance is that people with the authority have to buy into the idea of prioritising the issue, if they’re going to use their resources and political capital to focus on it.

    Policy creation and development

    While there does appear to be growing enthusiasm for a set or sets of policy proposals that could start to reduce the risk of an AI-related catastrophe, there’s still a lack of concrete policies that are ready to get off the ground.

    This is what the policy creation and development process is for. Researchers, advocates, civil servants, lawmakers and their staff, and others all can play a role in shaping the actual legislation and regulation that the government eventually enforces. In the corporate context, internal policy creation can serve similar functions, though it may be less enforceable unless backed up with contracts.

    Policy creation involves crafting solutions for the problem at hand with the policy tools available, usually requiring input from technical experts, legal experts, stakeholders, and the public. In countries with strong judicial review like the United States, special attention often has to be paid to make sure laws and regulations will hold up under the scrutiny of judges.

    Once concrete policy options are on the table, they must be put through the relevant decision-making process and negotiations. If the policy in question is a law that’s going to be passed, rather than a regulation, it needs to be crafted so that it will have enough support from lawmakers and other key decision makers to be enacted. This can happen in a variety of ways; it might be rolled into a larger piece of legislation that has wide support, or it may be rallied around and brought forward as its own package to be voted on individually.

    Policy creation can also be an iterative process, as policies are enacted, implemented, monitored, evaluated, and revised.

    For more details on the complex work of policy creation, we recommend Thomas Kalil’s article “Policy Entrepreneurship in the White House: Getting Things Done in Large Organisations.”

    Implementation

    Fundamentally, a policy is only an idea. For an idea to have an impact, someone actually has to carry it out. Any of the proposals for AI-related government policy — including standards and evaluations, licensing, and compute governance — will demand complex management and implementation.

    Policy implementation on this scale requires extensive planning, coordination in and out of government, communication, resource allocation, training and more — and every step in this process can be fraught with challenges. To rise to the occasion, any government implementing an AI policy regime will need talented individuals working at a high standard.

    The policy creation phase is critical and is probably the highest-priority work. But good ideas can be carried out badly, which is why policy implementation is also a key part of the AI governance agenda.

    Examples of people pursuing this path

    How to assess your fit and get started

    If you’re early on in your career, you should focus first on getting skills and other career capital to successfully contribute to the beneficial governance and regulation of AI.

    You can gain career capital for roles in many ways, and the best options will vary based on your route to impact. But broadly speaking, working in or studying fields such as politics, law, international relations, communications, and economics can all be beneficial for going into policy work.

    And expertise in AI itself, gained by studying and working in machine learning and technical AI safety, or potentially related fields such as computer hardware or information security, should also give you a big advantage.

    Testing your fit

    One general piece of career advice we give is to find relatively “cheap” tests to assess your fit for different paths. This could mean, for example, taking a policy internship, applying for a fellowship, doing a short bout of independent research as discussed above, or taking classes or courses on technical machine learning or computer engineering.

    It can also just involve talking to people currently doing a job you might consider having and finding out what the day-to-day experience of the work is like and what skills are needed.

Whether a particular role will suit you can be difficult to predict in advance. While we grouped "government work" into a single category above, that label covers a wide range of positions and types of occupations in many different departments and agencies. Finding the right fit within a broad category like "government work" can take a while, and it can depend on a lot of factors out of your control, such as the colleagues you happen to work closely with. That's one reason it can be useful to build broadly valuable career capital, so you have the option to move around to find the right role for you.

    And don’t underestimate the value at some point of just applying to many relevant openings in the field and sector you’re aiming for and seeing what happens. You’ll likely face a lot of rejection with this strategy, but you’ll be able to better assess your qualifications for different kinds of roles after you see how far you get in the process, if you take enough chances. This can give you a lot more information than just guessing about whether you have the right experience.

    It can be useful to rule out certain types of work if you gather evidence that you’re not a strong fit for the role. For example, if you invest a lot of time and effort trying to get into reputable universities or nonprofit institutions to do AI governance research, but you get no promising offers and receive little encouragement even after applying widely, this might be a significant signal that you’re unlikely to thrive in that particular path.

    That wouldn’t mean you have nothing to contribute, but your comparative advantage may lie elsewhere.

    Read the section of our career guide on finding a job that fits you.

    Types of career capital

    For a field like AI governance, a mix of people with technical and policy expertise — and some people with both — is needed.

    While anyone involved in this field should work to maintain an evolving understanding of both the technical and policy details, you’ll probably start out focusing on either policy or technical skills to gain career capital.

    This section covers:

    Much of this advice is geared toward roles in the US, though it may be relevant in other contexts.

    Generally useful career capital

    The chapter of the 80,000 Hours career guide on career capital lists five key components that will be useful in any path: skills and knowledge, connections, credentials, character, and runway.

    For most jobs touching on policy, social skills, networking, and — for lack of a better word — political skill will be a huge asset. This can probably be learned to some extent, but some people may find they don’t have these kinds of skills and can’t or don’t want to acquire them. That’s OK — there are many other routes to having a fulfilling and impactful career, and there may be some roles within this path that demand these skills to a much lesser extent. That’s why testing your fit is important.

    Read the full section of the career guide on career capital.

    To gain skills in policy, you can pursue education in many relevant fields, such as political science, economics, and law.

    Many master’s programmes offer specific coursework on public policy, science and society, security studies, international relations, and other topics; having a graduate degree or law degree will give you a leg up for many positions.

    In the US, a master’s, a law degree, or a PhD is particularly useful if you want to climb the federal bureaucracy. Our article on US policy master’s degrees provides detailed information about how to assess the many options.

Internships in DC are a promising route to evaluate your aptitude for policy work and to establish early career capital. Many academic institutions now offer a "Semester in DC" programme, which can let you explore placements of your choice in Congress, federal agencies, or think tanks. The Virtual Student Federal Service (VSFS) also offers part-time, remote government internships that students can take alongside their coursework during the academic year — a useful early stepping stone for aspiring policy professionals.

    Once you have a suitable background, you can take entry-level positions within parts of the government where you can build a professional network and develop your skills. In the US, you can become a congressional staffer, or take a position at a relevant federal department, such as the Department of Commerce, Department of Energy, or the Department of State. Alternatively, you can gain experience in think tanks — a particularly promising option if you have a strong aptitude for research — and government contractors, private sector companies providing services to the government.

Washington, DC has a distinctive culture, with a big focus on networking and plenty of internal bureaucratic politics to navigate. We've also been told that while merit matters to a degree in US government work, it is not the primary determinant of who is most successful. If you don't think you'd feel able or comfortable working in this kind of environment for the long term, consider whether other paths would be better.

    If you find you can enjoy government and political work, impress your colleagues, and advance in your career, though, that’s a strong signal that you have the potential to make a real impact. Just being able to thrive in government work can be an extremely valuable comparative advantage.

    US citizenship

    Your citizenship may affect which opportunities are available to you. Many of the most important AI governance roles within the US — particularly in the executive branch and Congress — are only open to, or will at least heavily favour, American citizens. All key national security roles that might be especially important will be restricted to those with US citizenship, which is required to obtain a security clearance.

This may mean that those who lack US citizenship will want to focus on roles that don't require it. Alternatively, they could plan to move to the US and pursue the long process of becoming a citizen. For more details on immigration pathways and the types of policy work available to non-citizens, see this blog post on working in US policy as a foreign national. If you're from an eligible country, consider also entering the annual diversity visa lottery — it's low effort and gives you a chance of winning a US green card.

    Technical career capital

Technical experience in machine learning, AI hardware, and related fields can be a valuable asset for an AI governance career. So it will be very helpful if you've studied a relevant subject for an undergraduate or graduate degree, or completed a particularly productive course of independent study.

    We have a guide to technical AI safety careers, which explains how to learn the basics of machine learning.

    The following resources may be particularly useful for familiarising yourself with the field of AI safety:

    Working at an AI lab in technical roles, or other companies that use advanced AI systems and hardware, may also provide significant career capital in AI policy paths. (Read our career review discussing the pros and cons of working at a top AI lab.)

    We also have a separate career review on how becoming an expert in AI hardware could be very valuable in governance work.

Many politicians and policymakers are generalists, as their roles require them to work in many different subject areas and on different types of problems. This means they'll need to rely on expert knowledge when crafting and implementing policy on AI technology that they don't fully understand. So if you can provide them with this information, especially if you're skilled at communicating it clearly, you can potentially fill influential roles.

Some people who were initially interested in pursuing a technical AI safety career, but who have lost interest in that path or found more promising policy opportunities, may also decide that they can effectively pivot into a policy-oriented career.

    It is common for people with STEM backgrounds to enter and succeed in US policy careers. People with technical credentials that they may regard as fairly modest — such as computer science bachelor’s degrees or a master’s in machine learning — often find their knowledge is highly valued in Washington, DC.

    Most DC jobs don’t have specific degree requirements, so you don’t need to have a policy degree to work in DC. Roles specifically addressing science and technology policy are particularly well-suited for people with technical backgrounds, and people hiring for these roles will value higher credentials like a master’s or, better even, a terminal degree like a PhD or MD.

    There are many fellowship programmes specifically aiming to support people with STEM backgrounds to enter policy careers; some are listed below.

    This won’t be right for everybody — many people with technical skills may not have the disposition or skills necessary for engaging in policy. People in policy-related paths often benefit from strong writing and social skills as well as a comfort navigating bureaucracies and working with people holding very different motivations and worldviews.

    Other specific forms of career capital

    There are other ways to gain useful career capital that could be applied in this career path.

    • If you have or gain great communication skills as, say, a journalist or an activist, these skills could be very useful in advocacy and lobbying around AI governance.
      • Especially since advocacy around AI issues is still in its early stages, it will likely need people with experience advocating in other important cause areas to share their knowledge and skills.
• Academics with relevant skill sets are sometimes brought into government for limited stints to serve as advisors in agencies such as the US Office of Science and Technology Policy. This isn't necessarily the foundation of a longer career in government, though it can be, and it can give an academic deeper insight into policy and politics than they might otherwise gain.
    • You can work at an AI lab in non-technical roles, gaining a deeper familiarity with the technology, the business, and the culture. (Read our career review discussing the pros and cons of working at a top AI lab.)
• You could work on political campaigns and get involved in party politics. This is one way to get involved in legislation, learn about policy, and help impactful lawmakers, and you can also potentially help shape the discourse around AI governance. Note, though, the previously mentioned risk of polarising public opinion around AI policy; entering party politics may also limit your potential for impact when the party you've joined doesn't hold power.
    • You could even try to become an elected official yourself, though it’s obviously competitive. If you take this route, make sure you find trustworthy and highly informed advisors to rely on to build expertise in AI, since politicians have many other responsibilities and won’t be able to focus as much on any particular issue.
    • You can focus on developing specific skill sets that might be valuable in AI governance, such as information security, intelligence work, diplomacy with China, etc.
      • Other skills: Organisational, entrepreneurial, management, diplomatic, and bureaucratic skills will also likely prove highly valuable in this career path. There may be new auditing agencies to set up or policy regimes to implement. Someone who has worked at high levels in other high-stakes industries, started an influential company, or coordinated complicated negotiations between various groups, would bring important skills to the table.

    Want one-on-one advice on pursuing this path?

    Because this is one of our priority paths, if you think this path might be a great option for you, we’d be especially excited to advise you on next steps, one-on-one. We can help you consider your options, make connections with others working in the same field, and possibly even help you find jobs or funding opportunities.

    APPLY TO SPEAK WITH OUR TEAM

    Where can this kind of work be done?

    Since successful AI governance will require work from governments, industry, and other parties, there will be many potential jobs and places to work for people in this path. The landscape will likely shift over time, so if you’re just starting out on this path, the places that seem most important might be different by the time you’re pivoting to using your career capital to make progress on the issue.

    Within the US government, for instance, it’s not clear which bodies will be most impactful when it comes to AI policy in five years. It will likely depend on choices that are made in the meantime.

    That said, it seems useful to give our understanding of which parts of the government are generally influential in technology governance and most involved right now to help orient. Gaining AI-related experience in government right now should still serve you well if you end up wanting to move into a more impactful AI-related role down the line when the highest-impact areas to work in are clearer.

    We’ll also give our current sense of important actors outside government where you might be able to build career capital and potentially have a big impact.

    Note that this list has by far the most detail about places to work within the US government. We would like to expand it to include more options as we learn more. You can use this form to suggest additional options for us to include. (And the fact that an option isn’t on this list shouldn’t be taken to mean we recommend against it or even that it would necessarily be less impactful than the places listed.)

    We have more detail on other options in separate (and older) career reviews, including the following:

    With that out of the way, here are some of the places where someone could do promising work or gain valuable career capital:

    In Congress, you can either work directly for lawmakers themselves or as staff on a legislative committee. Staff roles on the committees are generally more influential on legislation and more prestigious, but for that reason, they’re more competitive. If you don’t have that much experience, you could start out in an entry-level job staffing a lawmaker and then later try to transition to staffing a committee.

    Some people we’ve spoken to expect the following committees — and some of their subcommittees — in the House and Senate to be most impactful in the field of AI. You might aim to work on these committees or for lawmakers who have significant influence on these committees.

    House of Representatives

    • House Committee on Energy and Commerce
    • House Judiciary Committee
• House Committee on Science, Space, and Technology
    • House Committee on Appropriations
    • House Armed Services Committee
    • House Committee on Foreign Affairs
    • House Permanent Select Committee on Intelligence

    Senate

    • Senate Committee on Commerce, Science, and Transportation
    • Senate Judiciary Committee
    • Senate Committee on Foreign Relations
• Senate Committee on Homeland Security and Governmental Affairs
    • Senate Committee on Appropriations
    • Senate Committee on Armed Services
    • Senate Select Committee on Intelligence
• Senate Committee on Energy and Natural Resources
    • Senate Committee on Banking, Housing, and Urban Affairs

    The Congressional Research Service, a nonpartisan legislative agency, also offers opportunities to conduct research that can impact policy design across all subjects.

    In general, we don’t recommend taking entry-level jobs within the executive branch for this path because it’s very difficult to progress your career through the bureaucracy at this level. It’s better to get a law degree or relevant master’s degree, which can give you the opportunity to start with more seniority.

    The influence of different agencies over AI regulation may shift over time, and there may even be entirely new agencies set up to regulate AI at some point, which could become highly influential. Whichever agency may be most influential in the future, it will be useful to have accrued career capital working effectively in government, creating a professional network, learning about day-to-day policy work, and deepening your knowledge of all things AI.

    We have a lot of uncertainty about this topic, but here are some of the agencies that may have significant influence on at least one key dimension of AI policy as of this writing:

    • Executive Office of the President (EOP)
      • Office of Management and Budget (OMB)
      • National Security Council (NSC)
      • Office of Science and Technology Policy (OSTP)
    • Department of State
      • Office of the Special Envoy for Critical and Emerging Technology (S/TECH)
      • Bureau of Cyberspace and Digital Policy (CDP)
      • Bureau of Arms Control, Verification and Compliance (AVC)
      • Office of Emerging Security Challenges (ESC)
    • Federal Trade Commission
    • Department of Defense (DOD)
      • Chief Digital and Artificial Intelligence Office (CDAO)
      • Emerging Capabilities Policy Office
      • Defense Advanced Research Projects Agency (DARPA)
      • Defense Technology Security Administration (DTSA)
    • Intelligence Community (IC)
      • Intelligence Advanced Research Projects Activity (IARPA)
      • National Security Agency (NSA)
      • Science advisor roles within the various agencies that make up the intelligence community
    • Department of Commerce (DOC)
      • The Bureau of Industry and Security (BIS)
      • The National Institute of Standards and Technology (NIST)
      • CHIPS Program Office
    • Department of Energy (DOE)
      • Artificial Intelligence and Technology Office (AITO)
      • Advanced Scientific Computing Research (ASCR) Program Office
    • National Science Foundation (NSF)
      • Directorate for Computer and Information Science and Engineering (CISE)
      • Directorate for Technology, Innovation and Partnerships (TIP)
    • Cybersecurity and Infrastructure Security Agency (CISA)

    Readers can find listings for roles in these departments and agencies at the federal government’s job board, USAJOBS; a more curated list of openings for potentially high impact roles and career capital is on the 80,000 Hours job board.

We do not currently recommend attempting to join the US government via the military if you are aiming for a career in AI policy. There are many levels of seniority to rise through and many people competing for places, and initially you have to spend all of your time doing work unrelated to AI. However, if you already have military experience, it can be valuable career capital for other important roles in government, particularly national security positions. This route is likely to be most competitive for military personnel who attended an elite military academy, such as West Point, or for commissioned officers at rank O-3 or above.

    Policy fellowships are among the best entryways into policy work. They offer many benefits like first-hand policy experience, funding, training, mentoring, and networking. While many require an advanced degree, some are open to college graduates.

Think tanks and research organisations where you could do promising work or build valuable career capital include:

• Center for Security and Emerging Technology (CSET)
    • Center for a New American Security
    • RAND Corporation
    • The MITRE Corporation
    • Brookings Institution
    • Carnegie Endowment for International Peace
    • Center for Strategic and International Studies (CSIS)
    • Federation of American Scientists (FAS)
    • Alignment Research Center
    • Open Philanthropy1
    • Institute for AI Policy and Strategy
    • Epoch AI
    • Centre for the Governance of AI (GovAI)
    • Center for AI Safety (CAIS)
    • Legal Priorities Project
    • Apollo Research
    • Centre for Long-Term Resilience
    • AI Impacts
    • Johns Hopkins Applied Physics Lab

AI labs themselves are another place where you could build career capital and potentially have an impact — see the section on industry work above. (Read our career review discussing the pros and cons of working at a top AI lab.)

International organisations and institutions relevant to AI governance include:

• Organisation for Economic Co-operation and Development (OECD)
    • International Atomic Energy Agency (IAEA)
    • International Telecommunication Union (ITU)
    • International Organization for Standardization (ISO)
    • European Union institutions (e.g., European Commission)
    • Simon Institute for Longterm Governance

    Our job board features opportunities in AI safety and policy:

      View all opportunities

      How this career path can go wrong

      Doing harm

      As we discuss in an article on accidental harm, there are many ways to set back a new field that you’re working in when you’re trying to do good, and this could mean your impact is negative rather than positive. (You may also want to read our article on harmful careers.)

      It seems likely there’s a lot of potential to inadvertently cause harm in the emerging field of AI governance. We discussed some possibilities in the section on advocacy and lobbying. Some other possibilities include:

      • Pushing for a given policy to the detriment of a superior policy
      • Communicating about the risks of AI in a way that ratchets up geopolitical tensions
      • Enacting a policy that has the opposite impact of its intended effect
      • Setting policy precedents that could be exploited by dangerous actors down the line
      • Funding projects in AI that turn out to be dangerous
      • Sending the message, implicitly or explicitly, that the risks are being managed when they aren’t, or that they’re lower than they in fact are
      • Suppressing technology that would actually be extremely beneficial for society

      The trouble is that we have to act with incomplete information, so it may never be very clear when or if people in AI governance are falling into these traps. Being aware that they are potential ways of causing harm will help you keep alert for these possibilities, though, and you should remain open to changing course if you find evidence that your actions may be damaging.

      And we recommend keeping in mind the following pieces of general guidance from our article on accidental harm:

      1. Ideally, eliminate courses of action that might have a big negative impact.
      2. Don’t be a naive optimizer.
      3. Have a degree of humility.
      4. Develop expertise, get trained, build a network, and benefit from your field’s accumulated wisdom.
5. Follow cooperative norms.
      6. Match your capabilities to your project and influence.
      7. Avoid hard-to-reverse actions.

      Burning out

      We think this work is exceptionally pressing and valuable, so we encourage our readers who might have a strong personal fit for governance work to test it out. But going into government, in particular, can be difficult. Some people we’ve advised have gone into policy roles with the hope of having an impact, only to burn out and move on.

      At the same time, many policy practitioners find their work very meaningful, interesting, and varied.

      Some roles in government may be especially challenging for the following reasons:

      • Some roles can be very fast-paced, involving relatively high stress and long hours. This is particularly true in Congress and senior executive branch positions and much less so in think tanks or junior agency roles.
      • It can take a long time to get into positions with much autonomy or decision-making authority.
      • Progress on the issues you care about can be slow, and you often have to work on other priorities. Congressional staffers in particular typically have very broad policy portfolios.
      • Work within bureaucracies faces many limitations, which can be frustrating.
      • It can be demotivating to work with people who don’t share your values. Though note that policy can select for altruistic people — even if they have different beliefs about how to do good.
      • The work isn’t typically well paid relative to comparable positions outside of government.

      So we recommend speaking to people in the kinds of positions you might aim to have in order to get a sense of whether the career path would be right for you. And if you do choose to pursue it, look out for signs that the work may be having a negative effect on you and seek support from people who understand what you care about.

      If you end up wanting or needing to leave and transition into a new path, that’s not necessarily a loss or a reason for regret. You will likely make important connections and learn a lot of useful information and skills. This career capital can be useful as you transition into another role, perhaps pursuing a complementary approach to AI governance and coordination.

      What the increased attention on AI means

      We’ve been concerned about risks posed by AI for years. Based on the arguments that this technology could potentially cause a global catastrophe, and otherwise have a dramatic impact on future generations, we’ve advised many people to work to mitigate the risks.

The arguments for the risk aren't completely conclusive, in our view. But they are worth taking seriously. And given that few others in the world seemed to be devoting much time to figuring out how big the threat was or how to mitigate it — while progress in making AI systems more powerful was accelerating — we concluded it was worth ranking among our top priorities.

      Now that there’s increased attention on AI, some might conclude that it’s less neglected and thus less pressing to work on. However, the increased attention on AI also makes many interventions potentially more tractable than they had been previously, as policymakers and others are more open to the idea of crafting AI regulations.

      And while more attention is now being paid to AI, it’s not clear it will be focused on the most important risks. So there’s likely still a lot of room for important and pressing work positively shaping the development of AI policy.

      Read next

      If you’re interested in this career path, we recommend checking out some of the following articles next.

      Learn more

      Top recommendations

      Further recommendations

      Read next:  Learn about other high-impact careers

      Want to consider more paths? See our list of the highest-impact career paths according to our research.

      Plus, join our newsletter and we’ll mail you a free book

      Join our newsletter and we’ll send you a free copy of The Precipice — a book by philosopher Toby Ord about how to tackle the greatest threats facing humanity. T&Cs here.

      The post AI governance and coordination appeared first on 80,000 Hours.

      ]]>
      AI safety technical research https://80000hours.org/career-reviews/ai-safety-researcher/ Mon, 19 Jun 2023 10:28:33 +0000 https://80000hours.org/?post_type=career_profile&p=74400 The post AI safety technical research appeared first on 80,000 Hours.

      ]]>
      Progress in AI — while it could be hugely beneficial — comes with significant risks. Risks that we’ve argued could be existential.

      But these risks can be tackled.

      With further progress in AI safety, we have an opportunity to develop AI for good: systems that are safe, ethical, and beneficial for everyone.

      This article explains how you can help.

      In a nutshell: Artificial intelligence will have transformative effects on society over the coming decades, and could bring huge benefits — but we also think there’s a substantial risk. One promising way to reduce the chances of an AI-related catastrophe is to find technical solutions that could allow us to prevent AI systems from carrying out dangerous behaviour.

      Pros

      • Opportunity to make a significant contribution to a hugely important area of research
      • Intellectually challenging and interesting work
      • The area has a strong need for skilled researchers and engineers, and is highly neglected overall

      Cons

• Due to a shortage of managers, it's difficult to get jobs, and it might take you some time to build the required career capital and expertise
      • You need a strong quantitative background
      • It might be very difficult to find solutions
      • There’s a real risk of doing harm

      Key facts on fit

      You’ll need a quantitative background and should probably enjoy programming. If you’ve never tried programming, you may be a good fit if you can break problems down into logical parts, generate and test hypotheses, possess a willingness to try out many different solutions, and have high attention to detail.

      If you already:

      • Are a strong software engineer, you could apply for empirical research contributor roles right now (even if you don’t have a machine learning background, although that helps)
      • Could get into a top 10 machine learning PhD, that would put you on track to become a research lead
      • Have a very strong maths or theoretical computer science background, you’ll probably be a good fit for theoretical alignment research

      Recommended

      If you are well suited to this career, it may be the best way for you to have a social impact.

      Review status

      Based on a medium-depth investigation 

      Thanks to Adam Gleave, Jacob Hilton and Rohin Shah for reviewing this article. And thanks to Charlie Rogers-Smith for his help, and his article on the topic — How to pursue a career in technical AI alignment.

      Why AI safety technical research is high impact

      As we’ve argued, in the next few decades, we might see the development of hugely powerful machine learning systems with the potential to transform society. This transformation could bring huge benefits — but only if we avoid the risks.

      We think that the worst-case risks from AI systems arise in large part because AI systems could be misaligned — that is, they will aim to do things that we don’t want them to do. In particular, we think they could be misaligned in such a way that they develop (and execute) plans that pose risks to humanity’s ability to influence the world, even when we don’t want that influence to be lost.

      We think this means that these future systems pose an existential threat to civilisation.

      Even if we find a way to avoid this power-seeking behaviour, there are still substantial risks — such as misuse by governments or other actors — which could be existential threats in themselves.

      Want to learn more about risks from AI? Read the problem profile.

      We think that technical AI safety could be the highest-impact career path we’ve identified to date. That’s because it seems like a promising way of reducing risks from AI. We’ve written an entire article about what those risks are and why they’re so important.

      Read more about preventing an AI-related catastrophe

      There are many ways in which we could go about reducing the risks that these systems might pose. But one of the most promising may be researching technical solutions that prevent unwanted behaviour — including misaligned behaviour — from AI systems. (Finding a technical way to prevent misalignment in particular is known as the alignment problem.)

      In the past few years, we’ve seen more organisations start to take these risks more seriously. Many of the leading industry labs developing AI — including Google DeepMind and OpenAI — have teams dedicated to finding these solutions, alongside academic research groups including at MIT, Oxford, Cambridge, Carnegie Mellon University, and UC Berkeley.

      That said, the field is still very new. We think there are only around 300 people working on technical approaches to reducing existential risks from AI systems,1 which makes this a highly neglected field.

      Finding technical ways to reduce this risk could be quite challenging. Any practically helpful solution must retain the usefulness of the systems (remaining economically competitive with less safe systems), and continue to work as systems improve over time (that is, it needs to be ‘scalable’). As we argued in our problem profile, it seems like it might be difficult to find viable solutions, particularly for modern ML (machine learning) systems.

      (If you don’t know anything about ML, we’ve written a very very short introduction to ML, and we’ll go into more detail on how to learn about ML later in this article. Alternatively, if you do have ML experience, talk to our team — they can give you personalised career advice, make introductions to others working on these issues, and possibly even help you find jobs or funding opportunities.)

      Although it seems hard, there are lots of avenues for more research — and the field really is very young, so there are new promising research directions cropping up all the time. So we think it’s moderately tractable, though we’re highly uncertain.

      In fact, we’re uncertain about all of this and have written extensively about reasons we might be wrong about AI risk.

      But, overall, we think that — if it’s a good fit for you — going into AI safety technical research may just be the highest-impact thing you can do with your career.

      What does this path involve?

      AI safety technical research generally involves working as a scientist or engineer at major AI labs, in academia, or in independent nonprofits.

      These roles can be very hard to get. You’ll likely need to build up career capital before you end up in a high-impact role (more on this later, in the section on how to enter). That said, you may not need to spend a long time building this career capital — we’ve seen exceptionally talented people move into AI safety from other quantitative fields, sometimes in less than a year.

      Most AI safety technical research falls on a spectrum between empirical research (experimenting with current systems as a way of learning more about what will work), and theoretical research (conceptual and mathematical research looking at ways of ensuring that future AI systems are safe).

      No matter where on this spectrum you end up working, your career path might look a bit different depending on whether you want to aim at becoming a research lead — proposing projects, managing a team and setting direction — or a contributor — focusing on carrying out the research.

      Finally, there are two slightly different roles you might aim for:

      • In academia, research is often led by professors — the key distinguishing feature of being a professor is that you’ll also teach classes and mentor grad students (and you’ll definitely need a PhD).
      • Many (but not all) contributor roles in empirical research are also engineers, often software engineers. Here, we’re focusing on software roles that directly contribute to AI safety research (and which often require some ML background) — we’ve written about software engineering more generally in a separate career review.

      4 kinds of AI safety role: empirical lead, empirical contributor, theoretical lead and theoretical contributor

      We think that research lead roles are probably higher-impact in general. But overall, the impact you could have in any of these roles is likely primarily determined by your personal fit for the role — see the section on how to predict your fit in advance.

      Next, we’ll take a look at what working in each path might involve. Later, we’ll go into how you might enter each path.

      What does work in the empirical AI safety path involve?

      Empirical AI safety research tends to involve teams working directly with ML models to identify risks and to develop ways of mitigating them.

      That means the work is focused on current ML techniques and techniques that might be applied in the very near future.

      Practically, working on empirical AI safety involves lots of programming and ML engineering. You might, for example, come up with ways you could test the safety of existing systems, and then carry out these empirical tests.

      You can find roles in empirical AI safety in industry and academia, as well as some in AI safety-focused nonprofits.

      Particularly in academia, lots of relevant work isn’t explicitly labelled as being focused on existential risk — but it can still be highly valuable. For example, work in interpretability, adversarial examples, diagnostics and backdoor learning, among other areas, could be highly relevant to reducing the chance of an AI-related catastrophe.

      We’re also excited by experimental work to develop safety standards that AI companies might adhere to in the future — for example, the work being carried out by METR.

      To learn more about the sorts of research taking place at labs focused on empirical AI safety, take a look at:

      While programming is central to all empirical work, research lead roles will generally involve less programming; instead, leads need stronger research taste and theoretical understanding. In comparison, research contributors need to be very good at programming and software engineering.

      What does work in the theoretical AI safety path involve?

      Theoretical AI safety is much more heavily conceptual and mathematical. Often it involves careful reasoning about the hypothetical behaviour of future systems.

      Generally, the aim is to come up with properties that it would be useful for safe ML algorithms to have. Once you have some useful properties, you can try to develop algorithms with these properties (bearing in mind that to be practically useful these algorithms will have to end up being adopted by industry). Alternatively, you could develop ways of checking whether systems have these properties. These checks could, for example, help hold future AI products to high safety standards.

      Many people working in theoretical AI safety will spend much of their time proving theorems or developing new mathematical frameworks. More conceptual approaches also exist, although they still tend to make heavy use of formal frameworks.

      Some examples of research in theoretical AI safety include:

      There are generally fewer roles available in theoretical AI safety work, especially as research contributors. Theoretical research contributor roles exist at nonprofits (primarily the Alignment Research Center), as well as at some labs (for example, Anthropic’s work on conditioning predictive models and the Causal Incentives Working Group at Google DeepMind). Most contributor roles in theoretical AI safety probably exist in academia (for example, PhD students in teams working on projects relevant to theoretical AI safety).

      Some exciting approaches to AI safety

      There are lots of technical approaches to AI safety currently being pursued. Here are just a few of them:

      It’s worth noting that there are many approaches to AI safety, and people in the field strongly disagree on what will or won’t work.

      This means that, once you’re working in the field, it can be worth being charitable and careful not to assume that others’ work is unhelpful just because it seemed so on a quick skim. You should probably be uncertain about your own research agenda as well.

      What’s more, as we mentioned earlier, lots of relevant work across all these areas isn’t explicitly labelled ‘safety.’

      So it’s important to think carefully about how or whether any particular research helps reduce the risks that AI systems might pose.

      What are the downsides of this career path?

      AI safety technical research is not the only way to make progress on reducing the risks that future AI systems might pose. There are also many other pressing problems in the world besides the possibility of an AI-related catastrophe, and lots of careers that can help with them. If you’d be a better fit working on something else, you should probably do that.

      Beyond personal fit, there are a few other downsides to the career path:

      • It can be very competitive to enter (although once you’re in, the jobs are well paid, and there are lots of backup options).
      • You need quantitative skills — and probably programming skills.
      • The work is geographically concentrated in just a few places (mainly the California Bay Area and London, but there are also opportunities in places with top universities such as Oxford, New York, Pittsburgh, and Boston). That said, remote work is increasingly possible at many research labs.
      • It might not be very tractable to find good technical ways of reducing the risk. Assessments of its difficulty vary, and while making progress is almost certainly possible, it may be quite hard to do so. This reduces the impact that you could have working in the field. That said, if you start out in technical work, you might be able to transition to governance work, since that often benefits from technical training and experience with the industry, which most people do not have.
      • Relatedly, there’s lots of disagreement in the field about what could work; you’ll probably be able to find at least some people who think what you’re working on is useless, whatever you end up doing.
      • Most importantly, there’s some risk of doing harm. While gaining career capital, and while working on the research itself, you’ll have to make difficult decisions and judgement calls about whether you’re working on something beneficial (see our anonymous advice about working in roles that advance AI capabilities). There’s huge disagreement on which technical approaches to AI safety might work — and sometimes this disagreement takes the form of thinking that a strategy will actively increase existential risks from AI.

      Finally, we’ve written more about the best arguments against AI being pressing in our problem profile on preventing an AI-related catastrophe. If those are right, maybe you could have more impact working on a different issue.

      How much do AI safety technical researchers earn?

      Many technical researchers work at companies or small startups that pay wages competitive with the Bay Area and Silicon Valley tech industry, and even smaller organisations and nonprofits will pay competitive wages to attract top talent. The median compensation for a software engineer in the San Francisco Bay area was $222,000 per year in 2020.3 (Read more about software engineering salaries).

      This $222,000 median may be an underestimate, as AI roles, especially in top AI labs that are rapidly scaling up their work in AI, often pay better than other tech jobs, and the same applies to safety researchers — even those in nonprofits.

      However, academia has lower salaries than industry in general, and we’d guess that AI safety research roles in academia pay less than commercial labs and nonprofits.

      Examples of people pursuing this path

      How to predict your fit in advance

      You’ll generally need a quantitative background (although not necessarily a background in computer science or machine learning) to enter this career path.

      There are two main approaches you can take to predict your fit, and it’s helpful to do both:

      • Try it out: try out the first few steps in the section below on learning the basics. If you haven’t yet, try learning some Python, as well as taking courses in linear algebra, calculus, and probability. And if you’ve done that, try learning a bit about deep learning and AI safety. Finally, the best way to try this out for many people would be to actually get a job as a (non-safety) ML engineer (see more in the section on how to enter).
      • Talk to people about whether it would be a good fit for you: If you want to become a technical researcher, our team probably wants to talk to you. We can give you 1-1 advice, for free. If you know anyone working in the area (or something similar), discuss this career path with them and ask for their honest opinion. You may be able to meet people through our community. Our advisors can also help make connections.

      It can take some time to build expertise, and enjoyment can follow expertise — so be prepared to take some time to learn and practice before you decide to switch to something else entirely.

      If you’re not sure what roles you might aim for longer term, here are a few rough ways you could make a guess about what to aim for, and whether you might be a good fit for various roles on this path:

      • Testing your fit as an empirical research contributor: In a blog post about hiring for safety researchers, the Google DeepMind team said “as a rough test for the Research Engineer role, if you can reproduce a typical ML paper in a few hundred hours and your interests align with ours, we’re probably interested in interviewing you.”
        • Looking specifically at software engineering, one hiring manager at Anthropic said that if you could, with a few weeks’ work, write a complex new feature or fix a very serious bug in a major ML library, they’d want to interview you straight away. (Read more.)
      • Testing your fit for theoretical research: If you could have got into a top 10 maths or theoretical computer science PhD programme if you’d optimised your undergrad to do so, that’s a decent indication of your fit (and many researchers in fact have these PhDs). The Alignment Research Center (one of the few organisations that hires for theoretical research contributors, as of 2023) said that they were open to hiring people without any research background. They gave four tests of fit: creativity (e.g. you may have ideas for solving open problems in the field, like Eliciting Latent Knowledge); experience designing algorithms, proving theorems, or formalising concepts; broad knowledge of maths and computer science; and having thought a lot about the AI alignment problem in particular.
      • Testing your fit as a research lead (or for a PhD): The vast majority of research leads have a PhD. Also, many (but definitely not all) AI safety technical research roles will require a PhD — and if they don’t, having a PhD (or being the sort of person that could get one) would definitely help show that you’re a good fit for the work. To get into a top 20 machine learning PhD programme, you’d probably need to publish something like a first author workshop paper, as well as a third author conference paper at a major ML conference (like NeurIPS or ICML). (Read more about whether you should do a PhD).

      Read our article on personal fit to learn more about how to assess your fit for the career paths you want to pursue.

      How to enter

      You might be able to apply for roles right away — especially if you meet, or are near meeting, the tests we just looked at — but it also might take you some time, possibly several years, to skill up first.

      So, in this section, we’ll give you a guide to entering technical AI safety research. We’ll go through four key questions:

      1. How to learn the basics
      2. Whether you should do a PhD
      3. How to get a job in empirical research
      4. How to get a job in theoretical research

      Hopefully, by the end of the section, you’ll have everything you need to get going.

      Learning the basics

      To get anywhere in the world of AI safety technical research, you’ll likely need a background knowledge of coding, maths, and deep learning.

      You might also want to practice enough to become a decent ML engineer (although this is generally more useful for empirical research), and learn a bit about safety techniques in particular (although this is generally more useful for empirical research leads and theoretical researchers).

      We’ll go through each of these in turn.

      Learning to program

      You’ll probably want to learn to code in Python, because it’s the most widely used language in ML engineering.

      The first step is probably just trying it out. As a complete beginner, you can write a Python program in less than 20 minutes that reminds you to take a break every two hours. Don’t be discouraged if your code doesn’t work the first time — that’s what normally happens when people code!
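
      For example, here’s a minimal sketch of that kind of beginner project. The details (a simple loop that prints a message to the terminal, rather than a pop-up notification) are our own illustrative choices, not a prescribed exercise:

          import time

          BREAK_INTERVAL_HOURS = 2  # how long to wait between reminders

          def remind_to_take_breaks():
              """Print a reminder every BREAK_INTERVAL_HOURS hours, forever."""
              while True:
                  # Sleep for the chosen interval (converted from hours to seconds)...
                  time.sleep(BREAK_INTERVAL_HOURS * 60 * 60)
                  # ...then print the reminder with a timestamp.
                  print(f"[{time.strftime('%H:%M')}] Time to take a break!")

          if __name__ == "__main__":
              remind_to_take_breaks()

      Even a toy script like this covers the basics you’ll keep reusing: imports, functions, loops, and string formatting.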

      Once you’ve done that, you have a few options:

      You can read more about learning to program — and how to get your first job in software engineering (if that’s the route you want to take) — in our career review on software engineering.

      Learning the maths

      The maths of deep learning relies heavily on calculus and linear algebra, and statistics can be useful too — although generally learning the maths is much less important than programming and basic, practical ML.
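
      To make that concrete, here is a small illustration (our own sketch, not taken from any particular textbook) of how both show up in even the simplest ML setting: gradient descent on linear regression uses matrix products (linear algebra) and the derivative of the loss (calculus).

          import numpy as np

          # A tiny linear regression problem: learn weights w so that X @ w approximates y.
          rng = np.random.default_rng(0)
          X = rng.normal(size=(100, 3))            # 100 examples, 3 features (a matrix)
          true_w = np.array([1.5, -2.0, 0.5])
          y = X @ true_w + rng.normal(scale=0.1, size=100)

          w = np.zeros(3)                          # start with all-zero weights
          learning_rate = 0.1

          for step in range(200):
              errors = X @ w - y                   # matrix-vector product (linear algebra)
              gradient = 2 * X.T @ errors / len(y) # derivative of mean squared error (calculus)
              w -= learning_rate * gradient        # gradient descent step

          print(w)                                 # should end up close to true_w

      Deep learning is this same recipe scaled up: bigger matrices, more layers, and gradients computed automatically rather than by hand.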

      We’d generally recommend studying a quantitative degree (like maths, computer science or engineering), most of which will cover all three areas pretty well.

      If you want to actually get good at maths, you have to be solving problems. So, generally, the most useful thing that textbooks and online courses provide isn’t their explanations — it’s a set of exercises to try to solve, in order, with some help if you get stuck.

      If you want to self-study (especially if you don’t have a quantitative degree), here are some possible resources:

      You might be able to find resources that cover all these areas, like Imperial College’s Mathematics for Machine Learning.

      Learning basic machine learning

      You’ll likely need to have a decent understanding of how AI systems are currently being developed. This will involve learning about machine learning and neural networks, before diving into any specific subfields of deep learning.

      Again, there’s the option of covering this at university. If you’re currently at college, it’s worth checking if you can take an ML course even if you’re not majoring in computer science.

      There’s one important caveat here: you’ll learn a huge amount on the job, and the amount you’ll need to know in advance for any role or course will vary hugely! Not even top academics know everything about their fields. It’s worth trying to find out how much you’ll need to know for the role you want to do before you invest hundreds of hours into learning about ML.

      With that caveat in mind, here are some suggestions of places you might start if you want to self-study the basics:

      PyTorch is a very common package used for implementing neural networks, and probably worth learning! When I was first learning about ML, my first neural network was a 3-layer convolutional neural network with L2 regularisation classifying characters from the MNIST database. This is a pretty common first challenge, and a good way to learn PyTorch.
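
      If you want a sense of what that kind of first project looks like in code, here is a compressed sketch. The specific layer sizes and optimiser settings are our own illustrative choices rather than the exact network described above, but the overall shape (define a small network, then loop over batches computing a loss and updating the weights) is typical:

          import torch
          import torch.nn as nn
          import torch.nn.functional as F
          from torch.utils.data import DataLoader
          from torchvision import datasets, transforms

          class SmallCNN(nn.Module):
              """A small convolutional network for 28x28 MNIST digit images."""
              def __init__(self):
                  super().__init__()
                  self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)   # 1 input channel -> 16
                  self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)  # 16 -> 32
                  self.fc = nn.Linear(32 * 7 * 7, 10)                       # 10 output classes (digits 0-9)

              def forward(self, x):
                  x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 28x28 -> 14x14
                  x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 14x14 -> 7x7
                  return self.fc(x.flatten(start_dim=1))

          def train_one_epoch():
              train_data = datasets.MNIST("data", train=True, download=True,
                                          transform=transforms.ToTensor())
              loader = DataLoader(train_data, batch_size=64, shuffle=True)

              model = SmallCNN()
              # weight_decay applies L2 regularisation to the parameters.
              optimiser = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

              for images, labels in loader:
                  optimiser.zero_grad()
                  loss = F.cross_entropy(model(images), labels)
                  loss.backward()
                  optimiser.step()

          if __name__ == "__main__":
              train_one_epoch()

      Getting something like this training end to end, and then inspecting where it goes wrong, teaches you a surprising amount about both PyTorch and debugging ML code.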

      Learning about AI safety

      If you’re going to work as an AI safety researcher, it usually helps to know about AI safety.

      This isn’t always true — some engineering roles won’t require much knowledge of AI safety. But even then, knowing the basics will probably help land you a position, and can also help with things like making difficult judgement calls and avoiding doing harm. And if you want to be able to identify and do useful work, you’ll need to learn about the field eventually.

      Because the field is still so new, there probably aren’t (yet) university courses you can take. So you’ll need to do some self-study. Here are some places you might start:

      For more suggestions — especially when it comes to reading about the nature of the risks we might face from AI systems — take a look at the top resources to learn more from our problem profile.

      Should you do a PhD?

      Some technical research roles will require a PhD — but many won’t, and PhDs aren’t the best option for everyone.

      The main benefit of doing a PhD is probably practising setting and carrying out your own research agenda. As a result, getting a PhD is practically the default if you want to be a research lead.

      That said, you can also become a research lead without a PhD — in particular, by transitioning from a role as a research contributor. At some large labs, the boundary between being a contributor and a lead is increasingly blurry.

      Many people find PhDs very difficult. They can be isolating and frustrating, and take a very long time (4–6 years). What’s more, both your quality of life and the amount you’ll learn will depend on your supervisor — and it can be really difficult to figure out in advance whether you’re making a good choice.

      So, if you’re considering doing a PhD, here are some things to consider:

      • Your long-term vision: If you’re aiming to be a research lead, that suggests you might want to do a PhD — the vast majority of research leads have PhDs. If you mainly want to be a contributor (e.g. an ML or software engineer), that suggests you might not. If you’re unsure, you should try doing something to test your fit for each, like trying a project or internship. You might try a pre-doctoral research assistant role — if the research you do is relevant to your future career, these can be good career capital, whether or not you do a PhD.
      • The topic of your research: It’s easy to let yourself become tied down to a PhD topic you’re not confident in. If the PhD you’re considering would let you work on something that seems useful for AI safety, it’s probably — all else equal — better for your career, and the research itself might have a positive impact as well.
      • Mentorship: What are the supervisors or managers like at the opportunities open to you? You might be able to find ML engineering or research roles in industry where you could learn much more than you would in a PhD — or vice versa. When picking a supervisor, try reaching out to the current or former students of a prospective supervisor and asking them some frank questions. (Also, see this article on how to choose a PhD supervisor.)
      • Your fit for the work environment: Doing a PhD means working on your own with very little supervision or feedback for long periods of time. Some people thrive in these conditions! But some really don’t and find PhDs extremely difficult.

      Read more in our more detailed (but less up-to-date) review of machine learning PhDs.

      It’s worth remembering that most jobs don’t need a PhD. And for some jobs, especially empirical research contributor roles, even if a PhD would be helpful, there are often better ways of getting the career capital you’d need (for example, working as a software or ML engineer). We’ve interviewed two ML engineers who have had hugely successful careers without doing a PhD.

      Whether you should do a PhD doesn’t depend (much) on timelines

      We think it’s plausible that we will develop AI that could be hugely transformative for society by the end of the 2030s.

      All else equal, that possibility could argue for trying to have an impact right away, rather than spending five (or more) years doing a PhD.

      Ultimately, though, how well you, in particular, are suited to a particular PhD is probably a much more important factor than when AI will be developed.

      That is to say, we think the increase in impact caused by choosing a path that’s a good fit for you is probably larger than any decrease in impact caused by delaying your work. This is in part because the spread in impact caused by the specific roles available to you, as well as your personal fit for them, is usually very large. Some roles (especially research lead roles) will just require having a PhD, and others (especially more engineering-heavy roles) won’t — and people’s fit for these paths varies quite a bit.

      We’re also highly uncertain about estimates of when we might develop transformative AI. This uncertainty reduces the expected cost of any delay.

      Most importantly, we think PhDs shouldn’t be thought of as a pure delay to your impact. You can do useful work in a PhD, and generally, the first couple of years in any career path will involve a lot of learning the basics and getting up to speed. So if you have a good mentor, work environment, and choice of topic, your PhD work could be as good as, or possibly better than, the work you’d do if you went to work elsewhere early in your career. And if you suddenly receive evidence that we have less time than you thought, it’s relatively easy to drop out.

      There are lots of other considerations here — for a rough overview, and some discussion, see this post by 80,000 Hours advisor Alex Lawsen, as well as the comments.

      Overall, we’d suggest that instead of worrying about a delay to your impact, think instead about which longer-term path you want to pursue, and how the specific opportunities in front of you will get you there.

      How to get into a PhD

      ML PhDs can be very competitive. To get in, you’ll probably need a few publications (as we said above, something like a first author workshop paper, as well as a third author conference paper at a major ML conference like NeurIPS or ICML), and references, probably from ML academics. (Although publications also look good whatever path you end up going down!)

      To end up at that stage, you’ll need a fair bit of luck, and you’ll also need to find ways to get some research experience.

      One option is to do a master’s degree in ML, although make sure it’s a research master’s — most ML master’s degrees primarily focus on preparation for industry.

      Even better, try getting an internship in an ML research group. Opportunities include RISS at Carnegie Mellon University, UROP at Imperial College London, the Aalto Science Institute international summer research programme, the Data Science Summer Institute, the Toyota Technological Institute intern programme and MILA. You can also try doing an internship specifically in AI safety, for example at CHAI. However, there are sometimes disadvantages to doing internships specifically in AI safety directly — in general, it may be harder to publish and mentorship might be more limited.

      Another way of getting research experience is by asking whether you can work with researchers. If you’re already at a top university, it can be easiest to reach out to people working at the university you’re studying at.

      PhD students or post-docs can be more responsive than professors, but eventually, you’ll want a few professors you’ve worked with to provide references, so you’ll need to get in touch. Professors tend to get lots of cold emails, so try to get their attention! You can try:

      • Getting an introduction, for example from a professor who’s taught you
      • Mentioning things you’ve done (your grades, relevant courses you’ve taken, your GitHub, any ML research papers you’ve attempted to replicate as practice)
      • Reading some of their papers and the main papers in the field, and mentioning them in the email
      • Applying for funding that’s available to students who want to work in AI safety, and letting people know you’ve got funding to work with them

      Ideally, you’ll find someone who supervises you well and has time to work with you (that doesn’t necessarily mean the most famous professor — although it helps a lot if they’re regularly publishing at top conferences). That way, they’ll get to know you, you can impress them, and they’ll provide an amazing reference when you apply for PhDs.

      It’s very possible that, to get the publications and references you’ll need to get into a PhD, you’ll need to spend a year or two working as a research assistant, although these positions can also be quite competitive.

      This guide by Adam Gleave also goes into more detail on how to get a PhD, including where to apply and tips on the application process itself. We discuss ML PhDs in more detail in our career review on ML PhDs (though it’s outdated compared to this career review).

      Getting a job in empirical AI safety research

      Ultimately, the best way of learning to do empirical research — especially in contributor and engineering-focused roles — is to work somewhere that does both high-quality engineering and cutting-edge research.

      The top three labs are probably Google DeepMind (who offer internships to students), OpenAI (who have a 6-month residency programme) and Anthropic. (Working at a leading AI lab carries with it some risk of doing harm, so it’s important to think carefully about your options. We’ve written a separate article going through the major relevant considerations.)

      To end up working in an empirical research role, you’ll probably need to build some career capital.

      Whether you want to be a research lead or a contributor, it’s going to help to become a really good software engineer. The best ways of doing this usually involve getting a job as a software engineer at a big tech company or at a promising startup. (We’ve written an entire article about becoming a software engineer.)

      Many roles will require you to be a good ML engineer, which means going further than just the basics we looked at above. The best way to become a good ML engineer is to get a job doing ML engineering — and the best places for that are probably leading AI labs.

      For roles as a research lead, you’ll need relatively more research experience. You’ll either want to become a research contributor first, or enter through academia (for example by doing a PhD).

      All that said, it’s important to remember that you don’t need to know everything to start applying, as you’ll inevitably learn loads on the job — so do try to find out what you’ll need to learn to land the specific roles you’re considering.

      How much experience do you need to get a job? It’s worth reiterating the tests we looked at above for contributor roles:

      • In a blog post about hiring for safety researchers, the DeepMind team said “as a rough test for the Research Engineer role, if you can reproduce a typical ML paper in a few hundred hours and your interests align with ours, we’re probably interested in interviewing you.”
      • Looking specifically at software engineering, one hiring manager at Anthropic said that if you could, with a few weeks’ work, write a new feature or fix a serious bug in a major ML library, they’d want to interview you straight away. (Read more.)

      In the process of getting this experience, you might end up working in roles that advance AI capabilities. There are a variety of views on whether this might be harmful — so we’d suggest reading our article about working at leading AI labs and our article containing anonymous advice from experts about working in roles that advance capabilities. It’s also worth talking to our team about any specific opportunities you have.

      If you’re doing another job, or a degree, or think you need to learn some more before trying to change careers, there are a few good ways of getting more experience doing ML engineering that go beyond the basics we’ve already covered:

      • Getting some experience in software / ML engineering. For example, if you’re doing a degree, you might try an internship as a software engineer during the summer. DeepMind offer internships for students with at least two years of study in a technical subject.
      • Replicating papers. One great way of getting experience doing ML engineering is to replicate some papers in whatever sub-field you might want to work in. Richard Ngo, an AI governance researcher at OpenAI, has written some advice on replicating papers. But bear in mind that replicating papers can be quite hard — take a look at Amid Fish’s blog on what he learned replicating a deep RL paper. Finally, Rogers-Smith has some suggestions on papers to replicate. If you do spend some time replicating papers, remember that when you get to applying for roles, it will be really useful to be able to prove you’ve done the work. So try uploading your work to GitHub, or writing a blog on your progress. And if you’re thinking about spending a long time on this (say, over 100 hours), try to get some feedback on the papers you might replicate before you start — you could even reach out to a lab you want to work for.
      • Taking or following a more in-depth course in empirical AI safety research. Redwood Research ran the MLAB bootcamp, and you can apply for access to their curriculum here. You could also take a look at this Deep Learning Curriculum by Jacob Hilton, a researcher at the Alignment Research Center — although it’s probably very challenging without mentorship.4 The Alignment Research Engineer Accelerator is a program that uses this curriculum. Some mentors on the SERI ML Alignment Theory Scholars Program focus on empirical research.
      • Learning about a sub-field of deep learning. In particular, we’d suggest natural language processing (especially transformers — see this lecture as a starting point, and the short attention sketch just after this list) and reinforcement learning (take a look at Pong from Pixels by Andrej Karpathy, and OpenAI’s Spinning up in Deep RL). Try to get to the point where you know about the most important recent advances.
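
      As a small taste of the transformer side, here is a minimal sketch (our own, and heavily simplified) of scaled dot-product attention, the core operation inside transformer models:

          import torch
          import torch.nn.functional as F

          def scaled_dot_product_attention(queries, keys, values):
              """Single-head attention: weight each value by how well its key matches each query."""
              d_k = queries.shape[-1]
              scores = queries @ keys.transpose(-2, -1) / d_k ** 0.5  # query-key similarity
              weights = F.softmax(scores, dim=-1)                     # turn scores into a distribution
              return weights @ values                                 # weighted average of the values

          # Self-attention over a sequence of 5 tokens, each a 16-dimensional vector.
          x = torch.randn(5, 16)
          out = scaled_dot_product_attention(x, x, x)
          print(out.shape)  # torch.Size([5, 16])

      Real transformers add learned projections, multiple heads, and many stacked layers, but the attention step itself is essentially this.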

      Finally, Athena is an AI alignment mentorship programme for women with a technical background looking to get jobs in the alignment field.

      Getting a job in theoretical AI safety research

      There are fewer jobs available in theoretical AI safety research, so it’s harder to give concrete advice. Having a maths or theoretical computer science PhD isn’t always necessary, but is fairly common among researchers in industry, and is pretty much required to be an academic.

      If you do a PhD, ideally it’d be in an area at least somewhat related to theoretical AI safety research. For example, it could be in probability theory as applied to AI, or in theoretical CS (look for researchers who publish in COLT or FOCS).

      Alternatively, one path is to become an empirical research lead before moving into theoretical research.

      Compared to empirical research, you’ll need to know relatively less about engineering, and relatively more about AI safety as a field.

      Once you’ve done the basics, one possible next step you could try is reading papers from a particular researcher, or on a particular topic, and summarising what you’ve found.

      You could also try spending some time (maybe 10–100 hours) reading about a topic and then some more time (maybe another 10–100 hours) trying to come up with some new ideas on that topic. For example, you could try coming up with proposals to solve the problem of eliciting latent knowledge. Alternatively, if you wanted to focus on the more mathematical side, you could try having a go at the assignment at the end of this lecture by Michael Cohen, a grad student at the University of Oxford.

      If you want to enter academia, reading a ton of papers seems particularly important. Maybe try writing a survey paper on a certain topic in your spare time. It’s a great way to master a topic, spark new ideas, spot gaps, and come up with research ideas. When applying to grad school or jobs, your paper is a fantastic way to show you love research so much you do it for fun.

      There are some research programmes aimed at people new to the field, such as the SERI ML Alignment Theory Scholars Program, to which you could apply.

      Other ways to get more concrete experience include doing research internships, working as a research assistant, or doing a PhD, all of which we’ve written about above, in the section on whether and how you can get into a PhD programme.

      One note is that a lot of people we talk to try to learn independently. This can be a great idea for some people, but is fairly tough for many, because there’s substantially less structure and mentorship.

      AI labs in industry that have empirical technical safety teams, or are focused entirely on safety:

      • Anthropic is an AI safety company working on building interpretable and safe AI systems. They focus on empirical AI safety research. Anthropic cofounders Daniela and Dario Amodei gave an interview about the lab on the Future of Life Institute podcast. On our podcast, we spoke to Chris Olah, who leads Anthropic’s research into interpretability, and Nova DasSarma, who works on systems infrastructure at Anthropic.
      • METR works on assessing whether cutting-edge AI systems could pose catastrophic risks to civilisation, including early-stage, experimental work to develop techniques, and evaluating systems produced by Anthropic and OpenAI.
      • The Center for AI Safety is a nonprofit that does technical research and promotion of safety in the wider machine learning community.
      • FAR AI is a research nonprofit that incubates and accelerates research agendas that are too resource-intensive for academia but not yet ready for commercialisation by industry, including research in adversarial robustness, interpretability and preference learning.
      • Google DeepMind is probably the largest and most well-known research group developing artificial general intelligence, and is famous for its work creating AlphaGo, AlphaZero, and AlphaFold. It is not principally focused on safety, but has two teams focused on AI safety, with the Scalable Alignment Team focusing on aligning existing state-of-the-art systems, and the Alignment Team focused on research bets for aligning future systems.
      • OpenAI, founded in 2015, is a lab that is trying to build artificial general intelligence that is safe and benefits all of humanity. OpenAI is well known for its language models like GPT-4. Like DeepMind, it is not principally focused on safety, but has a safety team and a governance team. Jan Leike (co-lead of the superalignment team) has some blog posts on how he thinks about AI alignment, and has spoken on our podcast about the sorts of people he’d like to hire for his team.
      • Ought is a machine learning lab building Elicit, an AI research assistant. Their aim is to align open-ended reasoning by learning human reasoning steps, and to direct AI progress towards helping with evaluating evidence and arguments.
      • Redwood Research is an AI safety research organisation, whose first big project attempted to make sure language models (like GPT-3) produce output following certain rules with very high probability, in order to address failure modes too rare to show up in standard training.

      Theoretical / conceptual AI safety labs:

      • The Alignment Research Center (ARC) is attempting to produce alignment strategies that could be adopted in industry today while also being able to scale to future systems. They focus on conceptual work, developing strategies that could work for alignment and which may be promising directions for empirical work, rather than doing empirical AI work themselves. Their first project was releasing a report on Eliciting Latent Knowledge, the problem of getting advanced AI systems to honestly tell you what they believe (or ‘believe’) about the world. On our podcast, we interviewed ARC founder Paul Christiano about his research (before he founded ARC).
      • The Center on Long-Term Risk works to address worst-case risks from advanced AI. They focus on conflict between AI systems.
      • The Machine Intelligence Research Institute was one of the first groups to become concerned about the risks from machine intelligence in the early 2000s, and its team has published a number of papers on safety issues and how to resolve them.
      • Some teams in commercial labs also do some more theoretical and conceptual work on alignment, such as Anthropic’s work on conditioning predictive models and the Causal Incentives Working Group at Google DeepMind.

      AI safety in academia (a very non-comprehensive list; while the number of academics explicitly and publicly focused on AI safety is small, it’s possible to do relevant work at a much wider set of places):

      Want one-on-one advice on pursuing this path?

      We think that the risks posed by the development of AI may be the most pressing problem the world currently faces. If you think you might be a good fit for any of the above career paths that contribute to solving this problem, we’d be especially excited to advise you on next steps, one-on-one.

      We can help you consider your options, make connections with others working on reducing risks from AI, and possibly even help you find jobs or funding opportunities — all for free.

      APPLY TO SPEAK WITH OUR TEAM

      Find a job in this path

      If you think you might be a good fit for this path and you’re ready to start looking at job opportunities that are currently accepting applications, see our curated list of opportunities for this path:

        View all opportunities

        Learn more about AI safety technical research

        Top recommendations

        Further recommendations

        Here are some suggestions about where you could learn more:

        Read next:  Learn about other high-impact careers

        Want to consider more paths? See our list of the highest-impact career paths according to our research.

        The post AI safety technical research appeared first on 80,000 Hours.

        Are we doing enough to stop the worst pandemics? https://80000hours.org/2023/04/are-we-doing-enough-to-stop-the-worst-pandemics/ Fri, 21 Apr 2023 15:41:37 +0000 https://80000hours.org/?p=81524 The post Are we doing enough to stop the worst pandemics? appeared first on 80,000 Hours.

        COVID-19 has been devastating for the world. While people debate how the response could’ve been better, it should be easy to agree that we’d all be better off if we can stop any future pandemic before it occurs. But we’re still not taking pandemic prevention very seriously.

        A recent report in The Washington Post highlighted one major danger: some research on potential pandemic pathogens may actually increase the risk, rather than reduce it.

        Back in 2017, we talked about what we thought were several warning signs that something like COVID might be coming down the line. It’d be a big mistake to ignore these kinds of warning signs again.

        This blog post was first released to our newsletter subscribers.

        It seems unfortunate that so much of the discussion of the risks in this space is backward-looking. The news has been filled with commentary and debates about the chances that COVID accidentally emerged from a biolab or that it crossed over directly from animals to humans.

        We’d appreciate a definitive answer to this question as much as anyone, but there’s another question that matters much more but gets asked much less:

        What are we doing to reduce the risk that the next dangerous virus — which could come from an animal, a biolab, or even a bioterrorist attack — causes a pandemic even worse than COVID-19?

        80,000 Hours ranks preventing catastrophic pandemics as among the most pressing problems in the world. If you would be a good fit for a career working to mitigate this danger, it could be by far your best opportunity to have a positive impact on the world.

        We’ve recently updated our review of career paths reducing biorisk with the help of Dr Gregory Lewis, providing more detail about both policy and technical paths, as well as ways in which they can overlap. For example, the review notes that:

        • Policy changes could reduce some risks by, for instance, regulating ‘dual use’ research.

        • New technology could help us catch emerging outbreaks sooner.

        • International diplomacy could devote more resources toward supporting the Biological Weapons Convention.

        There’s particular reason to be worried about engineered pathogens, whether they’re created to do harm or for research purposes. Pandemics caused by such viruses could be many times worse than natural ones, because they would be designed with danger to humans in mind, rather than just evolving by natural selection.

        The Washington Post story linked above suggests some reason for hope. Scientists and experts are raising alarms about risky practices in their field. And according to anonymous officials cited in the piece, the Biden administration may announce new restrictions on research using dangerous pathogens this year.

        If executed well, such reforms might significantly reduce the risk that labs accidentally release an extremely dangerous pathogen — which has happened many times before.

        We also need to be preparing for a time, perhaps not too far in the future, when technological advances and cheaper materials make it increasingly easy for bad actors to create dangerous pathogens on their own.

        If you’re looking for a career working on a problem that is massively important, relatively neglected, and potentially very tractable, reducing biorisk might be a terrific option.

        Learn more:

        The post Are we doing enough to stop the worst pandemics? appeared first on 80,000 Hours.

        Marcus Davis on founding and leading Rethink Priorities https://80000hours.org/after-hours-podcast/episodes/marcus-davis-rethink-priorities/ Mon, 12 Dec 2022 23:00:45 +0000 https://80000hours.org/?post_type=podcast_after_hours&p=80089 The post Marcus Davis on founding and leading Rethink Priorities appeared first on 80,000 Hours.

        Bear Braumoeller on the case that war isn’t in decline https://80000hours.org/podcast/episodes/bear-braumoeller-decline-of-war/ Tue, 08 Nov 2022 22:35:17 +0000 https://80000hours.org/?post_type=podcast&p=79838 The post Bear Braumoeller on the case that war isn’t in decline appeared first on 80,000 Hours.

        Alan Hájek on puzzles and paradoxes in probability and expected value https://80000hours.org/podcast/episodes/alan-hajek-probability-expected-value/ Fri, 28 Oct 2022 21:53:53 +0000 https://80000hours.org/?post_type=podcast&p=79744 The post Alan Hájek on puzzles and paradoxes in probability and expected value appeared first on 80,000 Hours.

        Nova DasSarma on why information security may be critical to the safe development of AI systems https://80000hours.org/podcast/episodes/nova-dassarma-information-security-and-ai-systems/ Tue, 14 Jun 2022 21:46:23 +0000 https://80000hours.org/?post_type=podcast&p=78027 The post Nova DasSarma on why information security may be critical to the safe development of AI systems appeared first on 80,000 Hours.
