Research skills

Norman Borlaug was an agricultural scientist. Through years of research, he developed new, high-yielding, disease-resistant varieties of wheat.

It might not sound like much, but as a result of Borlaug’s research, wheat production in India and Pakistan almost doubled between 1965 and 1970, and formerly famine-stricken countries across the world were suddenly able to produce enough food for their entire populations. These developments have been credited with saving up to a billion people from famine,1 and in 1970, Borlaug was awarded the Nobel Peace Prize.

Many of the highest-impact people in history, whether well-known or completely obscure, have been researchers.

In a nutshell: Talented researchers are a key bottleneck facing many of the world’s most pressing problems. That doesn’t mean you need to become an academic. While that’s one option (and academia is often a good place to start), lots of the most valuable research happens elsewhere. It’s often cheap to try out developing research skills while at university, and if it’s a good fit for you, research could be your highest impact option.

Key facts on fit

You might be a great fit if you have the potential to become obsessed with high-impact questions, have high levels of grit and self-motivation, are open to new ideas, are intelligent, and have a high degree of intellectual curiosity. You’ll also need to be a good fit for the particular area you’re researching (e.g. you might need quantitative ability).

Why are research skills valuable?

Not everyone can be a Norman Borlaug, and not every discovery gets adopted. Nevertheless, we think research can often be one of the most valuable skill sets to build — if you’re a good fit.

We’ll argue that:

  • Research seems to have been extremely high-impact historically.
  • There are good theoretical reasons to think that research will continue to be high-impact.
  • Research skills seem extremely useful for making progress on the problems we think are most pressing.
  • If you’re a good fit, you can have much more impact than the average researcher.
  • Depending on which subject you focus on, you may also have good backup options.

Together, this suggests that research skills could be particularly useful for having an impact.

Later, we’ll look at:

  • What building research skills typically involves
  • How to evaluate your fit
  • How to get started building research skills
  • How to choose a research field
  • How to apply research skills to have the most impact

Research seems to have been extremely high-impact historically

If we think about what has most improved the modern world, much can be traced back to research: advances in medicine such as the development of vaccines against infectious diseases, developments in physics and chemistry that led to steam power and the industrial revolution, and the invention of the modern computer, an idea which was first proposed by Alan Turing in his seminal 1936 paper On Computable Numbers.2

Many of these ideas were discovered by a relatively small number of researchers — but they changed all of society. This suggests that these researchers may have had particularly large individual impacts.

Dr Nalin helped to invent oral rehydration therapy
Dr Nalin helped to save millions of lives with a simple innovation: giving patients with diarrhoea water mixed with salt and sugar.

That said, research today is probably lower-impact than in the past. Research is much less neglected than it used to be: there are nearly 25 times as many researchers today as there were in 1930.3 It also turns out that more and more effort is required to discover new ideas, so each additional researcher probably has less impact than those that came before.4

However, even today, a relatively small fraction of people are engaged in research. As an approximation, only 0.1% of the population are academics,5 and only about 2.5% of GDP is spent on research and development. If a small number of people account for a large fraction of progress, then on average each person’s efforts are significant.

Moreover, we still think there’s a good case to be made for research being impactful on average today, which we cover in the next two sections.

There are good theoretical reasons to think that research will be high-impact

There’s little commercial incentive to focus on the most socially valuable research, and most researchers don’t get rich, even if their discoveries are extremely valuable. Alan Turing made no money from inventing the computer, even though computing is now a multibillion-dollar industry. This is because the benefits of research often come a long time in the future and can’t usually be protected by patents. It also means that if you care more about social impact than profit, you have an edge.

Research is also a route to leverage. When new ideas are discovered, they can be spread incredibly cheaply, so it’s a way that a single person can change a field. And innovations are cumulative — once an idea has been discovered, it’s added to our stock of knowledge and, in the ideal case, becomes available to everyone. Even ideas that become outdated often speed up the important future discoveries that supersede them.

Research skills seem extremely useful to the problems we think are most pressing

When you look at our list of the world’s most pressing problems — like preventing future pandemics or reducing risks from AI systems — expert researchers seem like a key bottleneck.

For example, to reduce the risk posed by engineered pandemics, we need people who are talented at research to identify the biggest biosecurity risks and to develop better vaccines and treatments.

To ensure that developments in AI are implemented safely and for the benefit of humanity, we need technical experts thinking hard about how to design machine learning systems safely and policy researchers to think about how governments and other institutions should respond. (See this list of relevant research questions.)

And to decide which global priorities we should spend our limited resources on, we need economists, mathematicians, and philosophers to do global priorities research. For example, see the research agenda of the Global Priorities Institute at Oxford.

We’re not sure why so many of the most promising ways to make progress on the problems we think are most pressing involve research, but it may well be due to the reasons in the section above — research offers huge opportunities for leverage, so if you take a hits-based approach to finding the best solutions to social problems, it’ll often be most attractive.

In addition, our focus on neglected problems often means we focus on smaller and less developed areas, and it’s often unclear what the best solutions are in these areas. This means that research is required to figure this out.

For more examples, and to get a sense of what you might be able to work on in different fields, see this list of potentially high-impact research questions, organised by discipline.

If you’re a good fit, you can have much more impact than the average

The sections above give reasons why research can be expected to be impactful in general. But as we’ll show below, the productivity of individual researchers probably varies a great deal (and more than in most other careers). This means that if you have reason to think your degree of fit is better than average, your expected impact could be much higher than the average.

Depending on which subject you focus on, you may have good backup options

Pursuing research helps you develop deep expertise on a topic, as well as problem-solving and writing skills. These can be useful in many other career paths. For example:

  • Many research areas can lead to opportunities in policymaking, since relevant technical expertise is valued in some of these positions. You might also have opportunities to advise policymakers and the public as an expert.
  • The expertise and credibility you can develop by focusing on research (especially in academia) can put you in a good position to switch your focus to communicating important ideas, especially those related to your speciality, either to the general public, policymakers, or your students.
  • If you specialise in an applied quantitative subject, it can open up certain high-paying jobs, such as quantitative trading or data science, which offer good opportunities for earning to give.

Some research areas will have much better backup options than others — lots of jobs value applied quantitative skills, so if your research is quantitative you may be able to transition into work in effective nonprofits or government. A history academic, by contrast, has many fewer clear backup options outside of academia.

What does building research skills typically involve?

By ‘research skills’ we broadly mean the ability to make progress solving difficult intellectual problems.

We find it especially useful to roughly divide research skills into three forms:

  • Academic research
  • Practical but big picture research
  • Applied research

Academic research

Building academic research skills is the most predefined route. The focus is on answering relatively fundamental questions which are considered valuable by a specific academic discipline. This can be impactful either through generally advancing a field of research that’s valuable to society or finding opportunities to work on socially important questions within that field.

Turing was an academic. He didn’t just invent the computer — during World War II he developed code-breaking machines that allowed the Allies to be far more effective against Nazi U-boats. Some historians estimate this enabled D-Day to happen a year earlier than it would have otherwise.6 Since World War II resulted in around 10 million deaths per year, ending it a year earlier means Turing may have saved about 10 million lives.

Alan Turing aged 16
Turing was instrumental in developing the computer. Sadly, he was prosecuted for being gay, perhaps contributing to his suicide in 1954.

We’re particularly excited about academic research in subfields of machine learning relevant to reducing risks from AI, subfields of biology relevant to preventing catastrophic pandemics, and economics — we discuss which fields you should enter below.

Academic careers are also excellent for developing credibility, leading to many of the backup options we looked at above, especially options in communicating important ideas or policymaking.

Academia is unusual in how flexibly you can use your time. This can be a big advantage — you really get time to think deeply and carefully about things — but it can also be a hindrance, depending on your work style.

See more about what academia involves in our career review on academia.

Practical but big picture research

Academia rewards a focus on questions that can be decisively answered with the methods of the field. However, the most important questions can rarely be answered rigorously — the best we can do is look at many weak forms of evidence and come to a reasonable overall judgement. This means that while some of this research happens in academia, it can be hard to do it there.

Instead, this kind of research is often done in nonprofit research institutes, e.g. the Centre for the Governance of AI or Our World in Data, or independently.

Your focus should be on answering the questions that seem most important (given your view of which global problems most matter) through whatever means are most effective.

Some examples of questions in this category that we’re especially interested in include:

  • How likely is a pandemic worse than COVID-19 in the next 10 years?
  • How difficult is the AI alignment problem going to be to solve?
  • Which global problems are most pressing?
  • Is the world getting better or worse over time?
  • What can we learn from the history of philanthropy about which forms of philanthropy might be most effective?

You can see a longer list of ideas in this article.

Someone we know who’s had a big impact with research skills is Ajeya Cotra. Ajeya initially studied electrical engineering and computer science at UC Berkeley. In 2016, she joined Open Philanthropy as a grantmaker.7 Since then she’s worked on a framework for estimating when transformative AI might be developed, how worldview diversification could be applied to allocating philanthropic budgets, and how we might accidentally teach AI models to deceive us.

Ajeya Cotra
Ajeya was moved by many of the conclusions of effective altruism, which eventually led to her researching the transformative effects of AI.

Applied research

Then there’s applied research. This is often done within companies or nonprofits, like think tanks (although again, there’s also plenty of applied research happening in academia). Here the focus is on solving a more immediate practical problem — one that a company might be able to profit from solving — and there’s lots of overlap with engineering skills. For example:

  • Developing new vaccines
  • Creating new types of solar cells or nuclear reactors
  • Developing meat substitutes

Neel was doing an undergraduate degree in maths when he decided that he wanted to work in AI safety. Our team was able to introduce Neel to researchers in the field and helped him secure internships in academic and industry research groups. Neel didn’t feel like he was a great fit for academia — he hates writing papers — so he applied to roles in commercial AI research labs. He’s now a research engineer at DeepMind. He works on mechanistic interpretability research which he thinks could be used in the future to help identify potentially dangerous AI systems before they can cause harm.

Neel Nanda
Neel’s machine learning research is heavily mathematical — but has clear applications to reducing the risks from advanced AI.

We also see “policy research” — which aims to develop better ideas for public policy — as a form of applied research.

Stages of progression through building and using research skills

These different forms of research blur into each other, and it’s often possible to switch between them during a career. In particular, it’s common to begin in academic research and then switch to more applied research later.

However, while the skill sets contain a common core, someone who can excel in academic research might not be well-suited to practical big picture or applied research.

The typical stages in an academic career involve the following steps:

  1. Pick a field. This should be heavily based on personal fit (where you expect to be most successful and enjoy your work the most), though it’s also useful to think about which fields offer the best opportunities to help tackle the problems you think are most pressing and which give you especially useful expertise — you can use that at least as a tie-breaker. (Read more about choosing a field.)
  2. Earn a PhD.
  3. Learn your craft and establish your career — find somewhere you can get great mentorship and publish a lot of impressive papers. This usually means finding a postdoc with a good group and then temporary academic positions.
  4. Secure tenure.
  5. Focus on the research you think is most socially valuable (or otherwise move your focus towards communicating ideas or policy).

Academia is usually seen as the most prestigious path…within academia. But non-academic positions can be just as impactful — and often more so since you can avoid some of the dysfunctions and distractions of academia, such as racing to get publications.

At any point after your PhD (and sometimes with only a master’s), it’s usually possible to switch to applied research in industry, policy, nonprofits, and so on, though typically you’ll still focus on getting mentorship and learning for at least a couple of years. And you may also need to take some steps to establish your career enough to turn your attention to topics that seem more impactful.

Note that from within academia, the incentives to continue with academia are strong, so people often continue longer than they should!

If you’re focused on practical big picture research, then there’s less of an established pathway, and a PhD isn’t required.

Besides academia, you could attempt to build these skills in any job that involves making difficult, messy intellectual judgement calls, such as investigative journalism, certain forms of consulting, buy-side research in finance, think tanks, or any form of forecasting.

Personal fit is perhaps more important for research than other skills

The most talented researchers seem to have far more impact than typical researchers, across a wide variety of metrics and according to the opinions of other researchers.

For instance, when we surveyed biomedical researchers, they said that very good researchers were rare, and they’d be willing to turn down large amounts of money if they could get a good researcher for their lab.8 Professor John Todd, who works on medical genetics at Cambridge, told us:

The best people are the biggest struggle. The funding isn’t a problem. It’s getting really special people[…] One good person can cover the ground of five, and I’m not exaggerating.

This makes sense if you think the distribution of research output is very wide — that the very best researchers have a much greater output than the average researcher.

How much do researchers differ in productivity?

It’s hard to know exactly how spread out the distribution is, but there are several strands of evidence that suggest the variability is very high.

Firstly, most academic papers get very few citations, while a few get hundreds or even thousands. An analysis of citation counts in science journals found that ~47% of papers had never been cited, more than 80% had been cited 10 times or fewer, but the top 0.1% had been cited more than 1,000 times. A similar pattern seems to hold across individual researchers, meaning that only a few dominate — at least in terms of the recognition their papers receive.

Citation count is a highly imperfect measure of research quality, so these figures shouldn’t be taken at face value. For instance, which papers get cited the most may depend at least partly on random factors, academic fashions, and “winner takes all” effects — papers that get noticed early end up being cited by everyone to back up a certain claim, even if they don’t actually represent the research that most advanced the field.

However, there are other reasons to think the distribution of output is highly skewed.

William Shockley, who won the Nobel Prize for the invention of the transistor, gathered statistics on all the research employees in national labs, university departments, and other research units, and found that productivity (as measured by total number of publications, rate of publication, and number of patents) was highly skewed, following a log-normal distribution.

Shockley suggests that researcher output is the product of several (normally distributed) random variables — such as the ability to think of a good question to ask, figure out how to tackle the question, recognize when a worthwhile result has been found, write adequately, respond well to feedback, and so on. This would explain the skewed distribution: if research output depends on eight different factors and their contribution is multiplicative, then a person who is 50% above average in each of the eight areas will in expectation be 26 times more productive than average.9
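
To make the arithmetic concrete, here is a minimal simulation sketch of this kind of multiplicative model. The number of factors, their spread, and the population size below are illustrative assumptions, not figures from Shockley’s data:

```python
# Illustrative sketch of a multiplicative model of researcher output.
# The factor count, spread, and population size are assumptions, not data.
import random
import statistics

random.seed(0)

NUM_FACTORS = 8        # e.g. picking questions, tackling them, writing up, ...
SPREAD = 0.5           # assumed standard deviation of each factor around a mean of 1.0
NUM_RESEARCHERS = 100_000

def simulated_output() -> float:
    """Output is the product of several independent factors, each averaging 1.0."""
    output = 1.0
    for _ in range(NUM_FACTORS):
        output *= max(0.0, random.gauss(1.0, SPREAD))
    return output

outputs = sorted(simulated_output() for _ in range(NUM_RESEARCHERS))

mean = statistics.fmean(outputs)
median = outputs[NUM_RESEARCHERS // 2]
top_1_percent_share = sum(outputs[int(0.99 * NUM_RESEARCHERS):]) / sum(outputs)

print(f"mean output:   {mean:.2f}")
print(f"median output: {median:.2f}")   # well below the mean => heavy right tail
print(f"share of output from the top 1%: {top_1_percent_share:.0%}")

# Someone 50% above average on every factor:
print(f"1.5 ** {NUM_FACTORS} = {1.5 ** NUM_FACTORS:.1f}x the average researcher")
```

Under these assumptions, the median researcher produces well below the mean, a small fraction of researchers account for a large share of total output, and someone modestly above average on every factor ends up roughly 26 times more productive than average.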

When we looked at up-to-date data on how productivity differs across many different areas, we found very similar results. The bottom line is that research is perhaps the area where we have the best evidence for output being heavy-tailed.

Interestingly, while there’s a huge spread in productivity, the most productive academic researchers are rarely paid 10 times more than the median, since they’re on fixed university pay-scales. This means that the most productive researchers yield a large “excess” value to their field. For instance, if a productive researcher adds 10 times more value to the field than average, but is paid the same as average, they will be producing at least nine times as much net benefit to society. This suggests that top researchers are underpaid relative to their contribution, discouraging them from pursuing research and making research skills undersupplied compared to what would be ideal.

Can you predict these differences in advance?

Practically, the important question isn’t how big the spread is, but whether you could — early on in your career — identify whether or not you’ll be among the very best researchers.

There’s good news here! At least in scientific research, these differences also seem to be at least somewhat predictable ahead of time, which means the people entering research with the best fit could have many times more expected impact than average.

In a study, two IMF economists looked at maths professors’ scores in the International Mathematical Olympiad — a prestigious maths competition for high school students. They concluded that each additional point scored on the International Mathematics Olympiad “is associated with a 2.6 percent increase in mathematics publications and a 4.5 percent increase in mathematics citations.”

We looked at a range of data on how predictable productivity differences are in various areas and found that they’re much more predictable in research than in most other areas.

What does this mean for building research skills?

The large spread in productivity makes building strong research skills a lot more promising if you’re a better fit than average. And if you’re a great fit, research can easily become your best option.

And while these differences in output are not fully predictable at the start of a career, the spread is so large that it’s likely still possible to predict differences in productivity with some reliability.

This also means you should mainly be evaluating your long-term expected impact in terms of your chances of having a really big success.

That said, don’t rule yourself out too early. Firstly, many people systematically underestimate their skills. (Though others overestimate them!) Also, the impact of research can be so large that it’s often worth trying it out, even if you don’t expect you’ll succeed. This is especially true because the early steps of a research career often give you good career capital for many other paths.

How to evaluate your fit

How to predict your fit in advance

It’s hard to predict success in advance, so we encourage an empirical approach: see if you can try it out and look at your track record.

You probably have some track record in research: many of our readers have some experience in academia from doing a degree, whether or not they intended to go into academic research. Standard academic success can also point towards being a good fit (though it is nowhere near sufficient!):

  • Did you get top grades at undergraduate level (a 1st in the UK or a GPA over 3.5 in the US)?
  • If you’ve done a graduate degree, what was your class rank (if you can find that out)? If you’ve done a PhD, did you manage to author an article in a top journal (although note that this is easier in some disciplines than others)?

Ultimately, though, your academic track record isn’t going to tell you anywhere near as much as actually trying out research. So it’s worth looking for ways to cheaply try out research (which can be easy if you’re at college). For example, try doing a summer research project and see how it goes.

Some of the key traits that suggest you might be a good fit for building research skills seem to be:

  • Intelligence (Read more about whether intelligence is important for research.)
  • The potential to become obsessed with a topic (Becoming an expert in anything can take decades of focused practice, so you need to be able to stick with it.)
  • Relatedly, high levels of grit, self-motivation, and — especially for independent big picture research, but also for research in academia — the ability to learn and work productively without a traditional manager or many externally imposed deadlines
  • Openness to new ideas and intellectual curiosity
  • Good research taste, i.e. noticing when a research question matters a lot for solving a pressing problem

There are a number of other cheap ways you might try to test your fit.

Something you can do at any stage is practice research and research-based writing. One way to get started is to try learning by writing.

You could also try:

  • Finding out the prerequisites for, and typical backgrounds of, people who go into a research area, and comparing your skills and experience to them
  • Reading key research in your area, trying to contribute to discussions with other researchers (e.g. via a blog or twitter), and getting feedback on your ideas
  • Talking to successful researchers in a field and asking what they look for in new researchers

How to tell if you’re on track

Here are some broad milestones you could aim for while becoming a researcher:

  • You’re successfully devoting time to building your research skills and communicating your findings to others. (This can often be the hardest milestone to hit for many — it can be hard to simply sustain motivation and productivity given how self-directed research often needs to be.)
  • In your own judgement, you feel you have made and explained multiple novel, valid, nontrivially important (though not necessarily earth-shattering) points about important topics in your area.
  • You’ve had enough feedback (comments, formal reviews, personal communication) to feel that at least several other people (whose judgement you respect and who have put serious time into thinking about your area) agree, and (as a result) feel they’ve learned something from your work. For example, lots of this feedback could come from an academic supervisor. Make sure you’re asking people in a way that gives them affordance to say you’re not doing well.
  • You’re making meaningful connections with others interested in your area — connections that seem likely to lead to further funding and/or job opportunities. This could be from the organisations most devoted to your topics of interest. But there could also be a “dissident” dynamic, in which these organisations seem uninterested and/or defensive, but others are noticing this and offering help.

If you’re finding it hard to make progress in a research environment, it’s very possible that this is the result of that particular environment, rather than the research itself. So it can be worth testing out multiple different research jobs before deciding this skill set isn’t for you.

Within academic research

Academia has clearly defined stages, so you can see how you’re performing at each of these.

Very roughly, you can try asking “How quickly and impressively is my career advancing, by the standards of my institution and field?” (Be careful to consider the field as a whole, rather than just your immediate peers, who might be very different from average.) Academics with more experience than you may be able to help give you a clear idea of how things are going.

We go through this in detail in our review of academic research careers.

Within independent research

As a very rough guideline, people who are an excellent fit for independent research can often reach the broad milestones above with a year of full-time effort purely focusing on building a research skill set, or 2–3 years of 20%-time independent effort (i.e. one day per week).

Within research in industry or policy

The stages here can look more like an organisation-building career, and you can also assess your fit by looking at your rate of progression through the organisation.

How to get started building research skills

As we mentioned above, if you’ve done an undergraduate degree, one obvious pathway into research is to go to graduate school (read our advice on choosing a graduate programme) and then attempt to enter academia before deciding whether to continue or pursue positions outside of academia later in your career.

If you take the academic path, then the next steps are relatively clear. You’ll want to try to get excellent grades in undergraduate and in your master’s, ideally gain some kind of research experience in your summers, and then enter the best PhD programme you can. From there, focus on learning your craft by working under the best researcher you can find as a mentor and working in a top hub for your field. Try to publish as many papers as possible since that’s required to land an academic position.

It’s also not necessary to go to graduate school to become a great researcher (though this depends a lot on the field), especially if you’re very talented.
For instance, we interviewed Chris Olah, who is working on AI research without even an undergraduate degree.

You can enter many non-academic research jobs without a background in academia. So one starting point for building up research skills would be getting a job at an organisation specifically focused on the type of question you’re interested in. For examples, take a look at our list of recommended organisations, many of which conduct non-academic research in areas relevant to pressing problems.

More generally, you can learn research skills in any job that heavily features making difficult intellectual judgement calls and bets, preferably on topics that are related to the questions you’re interested in researching. These might include jobs in finance, political analysis, or even nonprofits.

Another common route — depending on your field — is to develop software and tech skills and then apply them at research organisations. For instance, here’s a guide to how to transition from software engineering into AI safety research.

If you’re interested in doing practical big-picture research (especially outside academia), it’s also possible to establish your career through self-study and independent work — during your free time or on scholarships designed for this (such as EA Long-Term Future Fund grants and Open Philanthropy support for individuals working on relevant topics).

Some example approaches you might take to self-study:

  • Closely and critically review some pieces of writing and argumentation on relevant topics. Explain the parts you agree with as clearly as you can and/or explain one or more of your key disagreements.
  • Pick a relevant question and write up your current view and reasoning on it. Alternatively, write up your current view and reasoning on some sub-question that comes up as you’re thinking about it.
  • Then get feedback, ideally from professional researchers or those who use similar kinds of research in their jobs.

It could also be beneficial to start with some easier versions of this sort of exercise, such as:

  • Explaining or critiquing interesting arguments made on any topic you find motivating to write about
  • Writing fact posts
  • Reviewing the academic literature on any topic of interest and trying to reach and explain a bottom-line conclusion

In general, it’s not necessary to obsess over being “original” or having some new insight at the beginning. You can learn a lot just by trying to write up your current understanding.

Choosing a research field

When you’re getting started building research skills, there are three factors to consider in choosing a field:

  1. Personal fit — what are your chances of being a top researcher in the area? Even if you work on an important question, you won’t make much difference if you’re not particularly good at it or motivated to work on the problem.
  2. Impact — how likely is it that research in your field will contribute to solving pressing problems?
  3. Back-up options — how will the skills you build open up other options if you decide to change fields (or leave research altogether)?

One way to go about making a decision is to roughly narrow down fields by relevance and back-up options and then pick among your shortlist based on personal fit.

We’ve found that, especially when they’re getting started building research skills, people sometimes think too narrowly about what they can be good at and enjoy, and end up pigeonholing themselves in a specific area (for example, being restricted by the field of their undergraduate degree). This can be harmful because it means people who could contribute to highly important research don’t even consider it. This increases the importance of writing a broad list of possible areas to research.

Given our list of the world’s most pressing problems, we think some of the most promising fields to do research within are as follows:

  • Fields relevant to artificial intelligence, especially machine learning, but also computer science more broadly. This is mainly to work on AI safety directly, though there are also many opportunities to apply machine learning to other problems (as well as many back-up options).
  • Biology, particularly synthetic biology, virology, public health, and epidemiology. This is mainly for biosecurity.
  • Economics. This is for global priorities research, development economics, or policy research relevant to any cause area, especially global catastrophic risks.
  • Engineering — read about developing and using engineering skills to have an impact.
  • International relations/political science, including security studies and public policy — these enable you to do research into policy approaches to mitigating catastrophic risks and are also a good route into careers in government and policy more broadly.
  • Mathematics, including applied maths or statistics (or even physics). This may be a good choice if you’re very uncertain, as it teaches you skills that can be applied to a whole range of different problems — and lets you move into most of the other fields we list. It’s relatively easy to move from a mathematical PhD into machine learning, economics, biology, or political science, and there are opportunities to apply quantitative methods to a wide range of other fields. They also offer good back-up options outside of research.
  • There are many important topics in philosophy and history, but these fields are unusually hard to advance within, and don’t have as good back-up options. (We do know lots of people with philosophy PhDs who have gone on to do other great, non-philosophy work!)

However, many different kinds of research skills can play a role in tackling pressing global problems.

Choosing a sub-field can sometimes be almost as important as choosing a field. For example, in some sciences the particular lab you join will determine your research agenda — and this can shape your entire career.

And as we’ve covered, personal fit is especially important in research. This can mean it’s easily worth going into a field that seems less relevant on average if you are an excellent fit. (This is due both to the value of the research you might produce and the excellent career capital that comes from becoming top of an academic field.)

For instance, while we most often recommend the fields above, we’d be excited to see some of our readers go into history, psychology, neuroscience, and a whole number of other fields. And if you have a different view of global priorities from us, there might be many other highly relevant fields.

Once you have these skills, how can you best apply them to have an impact?

Richard Hamming used to annoy his colleagues by asking them “What’s the most important question in your field?”, and then after they’d explained, following up with “And why aren’t you working on it?”

You don’t always need to work on the very most important question in your field, but Hamming has a point. Researchers often drift into a narrow speciality and can get detached from the questions that really matter.

Now let’s suppose you’ve chosen a field, learned your craft, and are established enough that you have some freedom about where to focus. Which research questions should you focus on?

Which research topics are the highest-impact?

Charles Darwin travelled the oceans to carefully document different species of birds on a small collection of islands — documentation which later became fuel for the theory of evolution. He wasn’t setting out to develop a grand theory, which illustrates how hard it is to predict which research will be most impactful.

What’s more, we can’t know what we’re going to discover until we’ve discovered it, so research has an inherent degree of unpredictability. There’s certainly an argument for curiosity-driven research without a clear agenda.

That said, we think it’s also possible to increase your chances of working on something relevant, and the best approach is to try to find topics that both personally motivate you and seem more likely than average to matter. Here are some approaches to doing that.

Using the problem framework

One approach is to ask yourself which global problems you think are most pressing, and then try to identify research questions that are:

  • Important to making progress on those problems (i.e. if this question were answered, it would lead to more progress on these problems)
  • Neglected by other researchers (e.g. because they’re at the intersection of two fields, unpopular for bad reasons, or new)
  • Tractable (i.e. you can see a path to making progress)

The best research questions will score at least moderately well on all parts of this framework. Building a perpetual motion machine is extremely important — if we could do it, then we’d solve our energy problems — but we have good reason to think it’s impossible, so it’s not worth working on. Similarly, a problem can be important but already have the attention of many extremely talented researchers, meaning your extra efforts won’t go very far.
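
As a purely illustrative sketch (not something from the article), here is one informal way you might score candidate questions against this framework. The example questions and the 0–5 scores are made up; the point is just that multiplying the factors means a near-zero score on any one of them (like tractability for a perpetual motion machine) rules a question out:

```python
# Made-up questions and scores, purely to illustrate the importance /
# neglectedness / tractability framework described above (0-5 scale, higher is better).
candidate_questions = {
    "Early-warning detection of novel pathogens": {"importance": 4, "neglectedness": 4, "tractability": 3},
    "Marginal refinement of a well-studied model": {"importance": 2, "neglectedness": 1, "tractability": 5},
    "Build a perpetual motion machine": {"importance": 5, "neglectedness": 5, "tractability": 0},
}

for question, scores in candidate_questions.items():
    # Multiplying (rather than adding) means any factor near zero sinks the total.
    overall = scores["importance"] * scores["neglectedness"] * scores["tractability"]
    print(f"{overall:3d}  {question}")
```
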

Finding these questions, however, is difficult. Often, the only way to identify a particularly promising research question is to be an expert in that field! That’s because (when researchers are doing their jobs) they will already be taking the most obvious opportunities.

However, the incentives within research rarely line up perfectly with the questions that matter most (especially if you have unusual values, like more concern for future generations or animals). This means that some questions often get unfairly neglected. If you care a lot about positive impact and have some slack, you can have a greater-than-average impact by looking for them.

Below are some more ways of finding those questions (which you can use in addition to directly applying the framework above).

Rules of thumb for finding unfairly neglected questions

  • There’s little money in answering the question. This can be because the problem mostly affects poorer people, people who are in the future, or non-humans, or because it involves public goods. This means there’s little incentive for businesses to do research on this question.
  • The political incentives to answer the question are missing. This can happen when the problem hurts poorer or otherwise marginalised people, people who tend not to organise politically, people in countries outside the one where the research is most likely to get done, people who are in the future, or non-humans. This means there’s no incentive for governments or other public actors to research this question.
  • It’s new, doesn’t already have an established discipline, or is at the intersection of two disciplines. The first researchers in an area tend to take any low hanging fruit, and it gets harder and harder from there to make big discoveries. For example, the rate of progress within machine learning is far higher than the rate of progress within theoretical physics. At the same time, the structure of academia means most researchers stay stuck within the field they start in, and it can be hard to get funding to branch out into other areas. This means that new fields or questions at the intersection of two disciplines often get unfairly neglected and therefore provide opportunities for outsized impact.
  • There is some aspect of human irrationality that means people don’t correctly prioritise the issue. For instance, some issues are easy to visualise, which makes them more motivating to work on. People are scope blind, which means they’re likely to neglect the issues with the very biggest scale. They’re also bad at reasoning about issues with low probability, which can make them either over-invest or under-invest in them.
  • Working on the question is low status. In academia, research that’s intellectually interesting and fits the research standards of the discipline is high status. Also, mathematical and theoretical work tends to be seen as higher status (and therefore helps to progress your career). But these factors don’t correlate that well with the social value of the question.
  • You’re bringing new skills or a new perspective to an established area. Progress often comes in science from bringing the techniques and insights of one field into another. For instance, Kahneman started a revolution in economics by applying findings from psychology. Cross-over is an obvious approach but is rarely used because researchers tend to be immersed in their own particular subject.

If you think you’ve found a research question that’s short on talent, it’s worth checking whether the question is answerable. People might be avoiding the question because it’s just extremely difficult to find an answer. Or perhaps progress isn’t possible at all. Ask yourself, “If there were progress on this question, how would we know?”

Finally, as we’ve discussed, personal fit is particularly important in research. So position yourself to work on questions where you maximise your chances of producing top work.

Find jobs that use research skills

If you already have these skills or are developing them, and you’re ready to start looking at job opportunities that are currently accepting applications, see our curated list of opportunities for this skill set:

    View all opportunities


    Are we doing enough to stop the worst pandemics?

    COVID-19 has been devastating for the world. While people debate how the response could’ve been better, it should be easy to agree that we’d all be better off if we could stop any future pandemic before it occurs. But we’re still not taking pandemic prevention very seriously.

    A recent report in The Washington Post highlighted one major danger: some research on potential pandemic pathogens may actually increase the risk, rather than reduce it.

    Back in 2017, we talked about what we thought were several warning signs that something like COVID might be coming down the line. It’d be a big mistake to ignore these kinds of warning signs again.


    It seems unfortunate that so much of the discussion of the risks in this space is backward-looking. The news has been filled with commentary and debates about the chances that COVID accidentally emerged from a biolab or that it crossed over directly from animals to humans.

    We’d appreciate a definitive answer to this question as much as anyone, but there’s another question that matters much more but gets asked much less:

    What are we doing to reduce the risk that the next dangerous virus — which could come from an animal, a biolab, or even a bioterrorist attack — causes a pandemic even worse than COVID-19?

    80,000 Hours ranks preventing catastrophic pandemics as among the most pressing problems in the world. If you would be a good fit for a career working to mitigate this danger, it could be by far your best opportunity to have a positive impact on the world.

    We’ve recently updated our review of career paths reducing biorisk with the help of Dr Gregory Lewis, providing more detail about both policy and technical paths, as well as ways in which they can overlap. For example, the review notes that:

    • Policy changes could reduce some risks by, for instance, regulating ‘dual use’ research.

    • New technology could help us catch emerging outbreaks sooner.

    • International diplomacy could devote more resources toward supporting the Biological Weapons Convention.

    There’s particular reason to be worried about engineered pathogens, whether they’re created to do harm or for research purposes. Pandemics caused by such viruses could be many times worse than natural ones, because they would be designed with danger to humans in mind, rather than just evolving by natural selection.

    The Washington Post story linked above suggests some reason for hope. Scientists and experts are raising alarms about risky practices in their field. And according to anonymous officials cited in the piece, the Biden administration may announce new restrictions on research using dangerous pathogens this year.

    If executed well, such reforms might significantly reduce the risk that labs accidentally release an extremely dangerous pathogen — which has happened many times before.

    We also need to be preparing for a time, perhaps not too far in the future, when technological advances and cheaper materials make it increasingly easy for bad actors to create dangerous pathogens on their own.

    If you’re looking for a career working on a problem that is massively important, relatively neglected, and potentially very tractable, reducing biorisk might be a terrific option.

    If you want to help the world tackle COVID-19, what should you do?

    To tackle the COVID-19 crisis, there are five main things we need to do:

    1. Research to understand the disease and to develop new treatments and a vaccine.
    2. Determine the right policies, both for public health and the economic response.
    3. Increase healthcare capacity, especially for testing, ventilators, personal protective equipment, and critical care.
    4. Slow the spread through testing & isolating cases, as well as mass advocacy to promote social distancing and other key behaviours, buying us more time to do the above.1
    5. Keep society functioning as the pandemic progresses.

    Everyone can help stem the spread of COVID-19 by practising proper hygiene and staying at home whenever possible. But if you want to do more, what can you do that’s most effective?

    To maximise your impact, aim to identify a high-leverage opportunity to contribute to one of these bottlenecks that’s a good fit for your skills.

    In this article, we’ll discuss some opportunities to work within each of these five categories, and some rules of thumb to work out which might be highest-impact for you, drawing from the rest of our research on high-impact careers. We also provide a long list of specific projects we’ve seen proposed.

    We cover where to donate in a separate article on donation opportunities to fight COVID-19.

    We’ll also briefly consider whether to spend time working on COVID-19, or stick with your current path. The rest of the world’s problems have not gone away, so if you’re already working on something high-impact, you should most likely stick with it. The main purpose of this article is to help people who have already decided they want to work on COVID-19 to find something effective.

    Because this is an area where a lot of diverse, specialised knowledge is relevant, it’s also easy to accidentally make things worse, so consider downside risks and expert advice before doing something big or controversial.

    So how might you contribute to solving each of the five bottlenecks?

    The specific opportunities we mention throughout this piece should not be taken as recommendations — due to the speed of the crisis, we have not carefully reviewed them. Rather, we included them simply on the grounds that we have seen the relevant project endorsed by an institution or individual with expertise or even just that we (as non-experts) have glanced at it and thought it seemed at least somewhat promising. The specific opportunities are best taken as a starting point for more investigation.

    Key ways to contribute

    1. Research into the disease, treatments and vaccines

    We’ll most likely overcome this challenge by developing a vaccine or effective antivirals. Few people have the skills to develop these treatments, meaning that if you do have the skills, it’s very likely to be your highest-impact way to contribute.

    What about people without these skills? Besides supporting researchers indirectly, one other way non-specialists might be able to contribute in this area is by volunteering to be infected as part of vaccine trials. We haven’t heavily vetted this proposal, but for young and healthy people it might be a very valuable way of helping speed up the development of a vaccine.

    It’s also important that we understand the nature of COVID-19 as well as we can (see our understanding of the current science). For example, we’re still not clear on:

    • How COVID-19 is most often spread — e.g., through direct contact, touching surfaces with viral particles or through contact with respiratory droplets or aerosols2
    • What fraction of infected people are asymptomatic3, and the relationship between symptoms and infectiousness4
    • Whether there can be long-term health effects if you get the disease and recover

    Researching these questions could help us understand what interventions make the most sense and whether certain costly interventions — like shutting down cities — are worth it.

    We’re not going to say much more about this area, because our impression is that most people with relevant skills have already considered working on COVID-19, and are in a much better position to work out what’s effective than we are.

    One way to contribute to the research effort which may be easier for non-specialists (e.g. grad students in statistics or biology) to contribute to is vetting others’ research — checking the most influential papers and preprints closely and writing up any important weaknesses or errors. So many papers are coming out so quickly that peer review processes aren’t always able to keep up. People are making life or death decisions based on preliminary papers, so spotting important errors could be very valuable.

    If you do find important errors, contact the authors to check with them before publicising your findings more widely (bearing in mind these researchers are likely to have an overwhelming amount of work right now). This idea is more promising if you know people in the field or have a reputation that means authors and other relevant actors are more likely to engage with you.

    If you’re a computer scientist with training in AI, you could help develop text and data mining tools for medical researchers as part of the COVID-19 Open Research Dataset Challenge.

    2. Determining the right policy response

    Like biomedical research, getting the policy response right is crucial in overcoming this crisis, but it’s relatively hard for people not already in relevant positions to contribute.

    That said, because of the unusually fast-moving nature of the situation, there may be more opportunity than usual for outsiders to help. In particular, they can focus on distilling expert advice and synthesizing research and data in a format that’s easy for policy-makers to quickly absorb.

    We’ve seen great analysis and data coming from people outside of the policy world, including from amateurs, journalists, researchers in other fields, and others. In part this is because policy-makers are flooded with work, and so are not able to follow every useful angle.

    More ideas for work in this category:

    • Update Wikipedia pages. If you are able to follow news and papers closely and accurately, you can help keep the Wikipedia page on the COVID-19 pandemic updated for other people to reference, as well as add to articles (titled “2020 coronavirus pandemic in [Country X]”) on the situation and responses in specific countries.
    • Translate materials. Valuable reports and other materials are being released quickly and aren’t always translated into different languages. For example, it was difficult for us to find primary source information in English regarding widely reported evidence from the blanket testing of residents in the Italian town of Vò, and unfortunately none of us know Italian. If you’re able to accurately translate technical language, you may be able to help a great deal by offering to translate reports and papers, as well as by translating Wikipedia pages. Because so much research is done in English, it might be especially helpful to translate materials from English into other languages so that local policymakers can have access to them.

    3. Increasing healthcare capacity

    Even given current suppression efforts, our healthcare systems in the US and the UK may become overwhelmed in the coming weeks or months. According to models from a March 16 paper from Imperial College London, we may need eight times as many critical care beds at the peak of the outbreak as will be available, given current surge capacity in the US and UK.5 Something similar seems to be true of most other western countries, and hospitals in several are already overwhelmed with cases.

    The infection fatality rate seems likely to be sensitive to how overwhelmed the healthcare system is, since ventilation and ICU care can, where available, reduce the risk of death for severe cases. The magnitude of this effect is very hard to estimate, as we aren’t even sure of the infection fatality rate right now. However, assuming that 1% – 5% of COVID-19 patients could require ventilation or another form of critical care6, it seems plausible that the fatality rate is likely to be several times higher if hospital capacity is overwhelmed.
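
    To see why, here is a rough back-of-envelope sketch. All the numbers below are illustrative assumptions (only the 1% to 5% critical care range comes from the text above), but they show how the share of critical patients who can actually get care could plausibly change the overall fatality rate by a factor of a few:

```python
# Rough back-of-envelope sketch. All numbers are illustrative assumptions,
# not estimates from the article (only the ~1-5% critical care share is).
critical_fraction = 0.03       # assume ~3% of patients need critical care
death_rate_with_care = 0.2     # assumed fatality rate for critical patients who get ICU care
death_rate_without_care = 0.9  # assumed fatality rate for critical patients who don't
other_death_rate = 0.002       # assumed fatality rate among non-critical patients

def infection_fatality_rate(share_of_critical_patients_treated: float) -> float:
    """Overall fatality rate as a function of how many critical patients get care."""
    critical_deaths = critical_fraction * (
        share_of_critical_patients_treated * death_rate_with_care
        + (1 - share_of_critical_patients_treated) * death_rate_without_care
    )
    return critical_deaths + (1 - critical_fraction) * other_death_rate

print(f"IFR with enough ICU capacity:      {infection_fatality_rate(1.0):.1%}")
print(f"IFR if only 20% of them get a bed: {infection_fatality_rate(0.2):.1%}")
```

    With these made-up numbers, the overall fatality rate roughly triples when most critical patients can’t get a bed, which is the qualitative point above.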

    An overwhelmed system also means that other diseases will go untreated, which will likely lead to worse health outcomes for many people and a significant number of additional deaths.

    This means we need to scale up relevant healthcare capacity several-fold as quickly as possible.

    Much discussion has focused on getting more ventilator capacity – this involves both a technical element (producing the devices) and having enough trained staff to operate them. There are also shortages of other important supplies, such as personal protective equipment like masks and gloves. We even saw a biomedical researcher on Twitter saying his work on COVID-19 could become compromised due to a lack of protective equipment.

    Perhaps even more vital is increasing capacity to test for whether people have the infection, since this would let us isolate the right people while keeping the rest of society running as normally as possible. The countries that have been more successful in slowing the spread of the virus seem to have had more extensive testing.7

    There may be a broader range of opportunities to contribute in this area compared to research or policy. For instance, if you work in manufacturing, you might be able to convert capacity to produce supplies.

    On the medical side, anyone working in healthcare is contributing, at the very least by freeing up other healthcare workers to work on COVID-19. Others may be able to increase capacity indirectly, by providing something like free and safe childcare for hospital workers or, in the UK, volunteering with the NHS.

    Here are some projects that seem promising in this category:

    If you’re an engineer

    • If you know of a company that could make ventilators or ventilator components, you could connect them up with the UK government’s call here.
    • If you have fabrication equipment, you could volunteer to work on some projects — especially scaling up production of personal protective equipment like masks and gloves — with Helpful Engineering.
    • The Coronavirus tech handbook lists some projects for scaling up production of personal protective equipment as well as other hardware needs. (Although this is another area where it is possible to do harm — it’s important that the equipment be high quality, which may be difficult to achieve quickly.)

    If you have medical training

    If you’re a programmer or computer scientist

    • One idea we’ve seen proposed for people in Big Tech is developing a cloud-based ventilator surveillance platform to track hospital ICU capacity and ventilator supply.
    • In general, there are likely many potential ways for programmers and other technically skilled people to help, often by supporting other projects. See the Coronavirus Tech Handbook’s Tech communities page for more ideas.

    4. Slowing the spread through public health advocacy

    China, South Korea, Singapore, and other countries have shown it’s possible to dramatically slow the spread of the disease.8

    Even if we’re unable to suppress the number of cases as much as these countries, the more we do here, the more time we have to do everything else, reducing the overall amount of damage.

    Many of the most important efforts to slow the spread need to be led by governments, such as rolling out mass testing and quarantining infected individuals and others they might have been in contact with.

    However, anyone can contribute to slowing the spread by promoting social distancing and other key behaviours, such as 20-second handwashing, coughing or sneezing into your elbow, staying 6 feet away from others, and so on (see the NHS's "dos and don'ts").

    We asked our advisors which measure they’d especially highlight, and they seem to agree that social distancing – encouraging people to stay home and otherwise reduce contact with others – is the crucial message to spread right now.

    To do this, we need mass advocacy campaigns that are memorable and convincing. We can all lead by example, but this measure should especially be considered by anyone who has some kind of platform or loyal following, e.g. if you are a blogger, artist, expert, YouTuber, or some kind of celebrity. (For example see Arnold Schwarzenegger’s video, featuring donkeys Whiskey and Lulu.) Or, if you know someone like that, perhaps you can convince them to help spearhead such a campaign (and offer to help them out with it, especially if you have skills in design or marketing).

    Be careful, however, that you focus on promoting the most needed measures — advocating ineffective or less effective behaviors could crowd out messaging about the most important ones and do more harm than good (as we discuss more below).

    You can also contribute by inspiring people and otherwise making it easier for them to stay home for long periods of time, by providing entertainment, work-from-home advice, exercise advice, and so on.

    Note that some degree of distancing is likely to need to be maintained for months, so these campaigns will need to be sustained. Finding ways to help people stay motivated to keep up these measures as time goes on could prove very valuable.

    5. Keeping society running and supporting people on the front lines

    We need to do all of the above while keeping essential services going amidst the risk of infection.

    Delivery drivers, online retailers, pharmacists, and supermarket workers are essential in helping us get through this pandemic, since they allow people to get the food and supplies they need while staying as isolated as possible.

    Taking a job anywhere in the supply chain, and being conscious of ways to reduce infection, is a way to earn income while also contributing.

    Likewise, we need to keep other ‘essential services’ operating.

    Anyone can also contribute by assisting those on the front lines in whatever ways they can.

    Perhaps you know someone who works in vaccine research – could you bring them a box of supplies so they can better stay isolated? Often you can do more by helping indirectly than by working on the issue yourself.

    Where might you fit in?

    We’ve put the five areas above very roughly in order of priority, but all of them are pressing and needed, so the biggest factors in determining what’s best for you are probably:

    1. The quality of the specific opportunities open to you, and how high-leverage they seem
    2. Which option is the best fit for your skills

    On the first factor, you want to evaluate how much of one of the five key bottlenecks you might be able to solve (in expectation) if the project succeeds. For instance, if you're working on slowing the spread, how many people might you be able to encourage to practice social distancing? If you're working on developing a vaccine, roughly how much might you be able to accelerate the process? You can then compare this to the amount of time and money that needs to be invested in the opportunity.

    This is difficult to evaluate, but try to get a rough sense of the potential scale of the upside if you succeed, and the likelihood of success.
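
    As a minimal sketch of this kind of back-of-the-envelope comparison (all numbers below are made up purely for illustration), you could score each option as the probability of success times the scale of the upside, divided by the time invested:

    ```python
    # Hypothetical back-of-the-envelope comparison of two opportunities.
    # All numbers are invented for illustration only.

    def expected_leverage(p_success, upside, cost):
        """Expected impact per unit of investment: P(success) * upside / cost."""
        return p_success * upside / cost

    # Option A: an advocacy project that, if it works, gets 10,000 extra people to
    # practise social distancing, with a 20% chance of success, for 3 months of work.
    option_a = expected_leverage(p_success=0.2, upside=10_000, cost=3)

    # Option B: a technical project that, if it works, meaningfully helps 100,000
    # people, with a 2% chance of success, for 6 months of work.
    option_b = expected_leverage(p_success=0.02, upside=100_000, cost=6)

    print(f"Option A: ~{option_a:.0f} people helped per month (in expectation)")
    print(f"Option B: ~{option_b:.0f} people helped per month (in expectation)")
    ```

    Numbers like these are only useful for spotting order-of-magnitude differences between options, not for fine-grained rankings.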

    You could also consider other heuristics, like whether experts think it seems like a good idea, and how neglected it seems. If thousands of people are already taking an opportunity, it’s going to be harder to have a big impact.

    On the second factor, a rough rule of thumb is to consider what skills, connections, credentials, and other resources (career capital) you have that are rarest compared to the rest of the population – if you’re using relatively rare career capital, it’s more likely you’ll have a comparative advantage in that opportunity and not be quickly replaced by others. You should also consider what you’ll find most motivating.

    So, a rough process for deciding could be:

    1. Generate a list of specific options by: (i) considering how you might help with each bottleneck listed above; (ii) browsing lists of specific opportunities; and (iii) considering how your most unusual career capital might be applied.
    2. Compare these options in terms of how pressing the bottleneck is, how high-leverage the opportunity is for helping with that bottleneck, and your degree of personal fit with the opportunity.

    You can use our article on making career decisions if you’d like more tips for making comparisons.

    Avoid making things worse

    It’s difficult even for experts to understand all or even most aspects of the pandemic, because there are so many fast-moving parts and so many different relevant fields, which often involve technical knowledge.

    This means that this is an area where it’s easy to do more harm than good. For instance, we’ve seen people possibly making things worse by:

    • Advocating for ineffective behaviours, potentially crowding out the adoption of more effective behaviours or increasing the total risk due to ‘risk compensation’ – the same phenomenon whereby people wearing safety belts sometimes drive more recklessly, offsetting some of the benefit
    • Pursuing potentially dangerous ideas, like using untested drugs as treatment for COVID-19 without medical supervision

    • Using up time from policy-makers and experts who are extremely bottlenecked, and could have worked on something more useful

    We’d encourage people looking to contribute to be very mindful of where their expertise lies, defer to relevant experts in other areas except in unusual cases, and in general be cautious about promoting original ideas when they might not understand all the relevant considerations.

    We’d also encourage people to try to look for ways to contribute that don’t require significant input from those already responding to the crisis e.g., providing data in a digestible format or by encouraging people to engage in social distancing. (Though we would still encourage people to check their ideas with experts before starting new projects on a big scale.)

    See our full article on ways people trying to do good sometimes accidentally make things worse, and how to avoid them.

    Should you work on COVID-19 or something else high-impact?

    Unfortunately, all the other pressing problems in the world have not gone away, and if you’re already in a job that you think is high-impact, it can be difficult to decide whether to continue with that or to switch to working on COVID-19.

    We think there are strong reasons for our community of readers to spread out over pressing problems. Our incredibly rough take is that, although this issue is not neglected, it could be worth having about 1 in 25 people who are interested in effective altruism work on COVID-19 for some period of time. (The case is stronger if you normally focus on 'near-termist' issues, such as global health.)

    This means that you should perhaps switch if you're among the roughly 1 in 25 members of the effective altruism community relatively best suited to working on it. Here are some things that could indicate that's you:

    1. You have highly relevant skills or other career capital, such as useful connections, knowledge of vaccines or public health policy, or experience in government institutions
    2. You’re not currently in a career path you think is high-impact and a good fit
    3. You are highly motivated to work on COVID-19 (e.g., are you the person in your group telling your friends about the latest research? We know one or two people like this.)
    4. You’ve identified an especially promising opportunity
    5. You’re able to switch temporarily in order to take the best opportunities in the area, and will then be able to go back to other projects without derailing your long term career plans
    6. You’re in a relatively safe position with your health, finances, and career

    For example, we’ve decided to spend some time as a team working on COVID-19, because we have some unusual resources (like connections and a platform), we’re able to switch temporarily and then go back to other projects without losing too much ground, and we’re highly motivated by work on the subject.

    If you don’t think you should switch, that might be a difficult decision. But this pandemic shows how rapidly a disaster can materialise (and how underprepared we are for them), and we still need people to prepare society for the future disasters we might face, including future pandemics that could even worse.

    It will also be best for some people to focus on how work in other problem areas can best weather the storm – COVID-19 may pose challenges for many organizations, such as issues with remote working or perhaps future funding uncertainties.

    And if you do think you should switch? There are large gains to acting earlier rather than later in this area due to the disease’s rapid (and accelerating) spread. So if you’re going to work on COVID-19, the sooner, the better.

    Long list of opportunities to work on COVID-19

    Below is a list of opportunities to help the global response to COVID-19. The list is focused on opportunities in research, policy, technology and startups. We focus on opportunities in the US and UK, because most of our audience is based there.

    Note we have not carefully reviewed the organizations and opportunities on these lists. As with the projects mentioned throughout the article, we constructed these lists based on either seeing these projects endorsed by institutions or individuals with expertise or by simply glancing at them ourselves and thinking that they seemed promising.

    Groups that are hiring or seeking volunteers

    Update 2020-05: We’re now listing COVID-19 opportunities on our job board.

    We previously published a list of groups that are hiring or seeking volunteers. It may still be useful, but we are no longer updating it.

    Funding opportunities

    This list was last updated on 2020-04-10. We are no longer updating this list, though you may still find it useful.

    Other good lists

    Here are some lists that other groups have made:

    See our COVID-19 page to learn more

    There you can find all our content on COVID-19 as well as links to other potentially useful resources.

    The post If you want to help the world tackle COVID-19, what should you do? appeared first on 80,000 Hours.

    ]]> Reducing global catastrophic biological risks https://80000hours.org/problem-profiles/preventing-catastrophic-pandemics/full-report/ Mon, 16 Mar 2020 20:34:26 +0000 https://80000hours.org/?post_type=problem_profile&p=68658 The post Reducing global catastrophic biological risks appeared first on 80,000 Hours.

    ]]>

    This article is our full report into reducing global catastrophic biological risks. For a shorter introduction, see our problem profile on preventing catastrophic pandemics.

    What is our analysis based on?

    I, Gregory Lewis, wrote this profile. I work at the Future of Humanity Institute on GCBRs. It owes a lot to helpful discussions with (and comments from) Christopher Bakerlee, Haydn Belfield, Elizabeth Cameron, Gigi Gronvall, David Manheim, Thomas McCarthy, Michael McClaren, Brenton Mayer, Michael Montague, Cassidy Nelson, Carl Shulman, Andrew Snyder-Beattie, Bridget Williams, Jaime Yassif, and Claire Zabel. Their kind help does not imply they agree with everything I write. All mistakes remain my own.

    This profile is in three parts. First, I explain what GCBRs are and why they could be a major global priority. Second, I offer my impressions (such as they are) on the broad contours of the risk landscape, and how these risks are best addressed. Third, I gesture towards the best places to direct one’s career to reduce this danger.

    Motivation

    What are global catastrophic biological risks?

    Global catastrophic risks (GCRs) are roughly defined as risks that threaten great worldwide damage to human welfare, and place the long-term trajectory of humankind in jeopardy.1 Existential risks are the most extreme members of this class. Global catastrophic biological risks (GCBRs) are a catch-all for any such risk that is broadly biological in nature (e.g. a major pandemic).

    I write from a broadly longtermist perspective: roughly, that there is profound moral importance in how humanity's future goes, and so trying to make this future go better is a key objective in our decision-making (I particularly recommend Joseph Carlsmith's talk).2 When applying this perspective to biological risks, the issue of whether a given event threatens the long-term trajectory of humankind becomes key. This question is much harder to adjudicate than whether a given event threatens severe worldwide damage to human welfare. My guesswork is that the 'threshold' at which a biological event starts to threaten human civilisation is high: a rough indicator is a death toll of 10% of the human population, at the upper limit of all disasters ever observed in human history.

    As such, I believe some biological catastrophes, even those which are both severe and global in scope, would not be GCBRs. One example is antimicrobial resistance (AMR): AMR causes great human suffering worldwide, threatens to become an even bigger problem, and yet I do not believe it is a plausible GCBR. An attempt to model the worst case scenario of AMR suggests it would kill 100 million people over 35 years, and reduce global GDP by 2%-3.5%.3 Although disastrous for human wellbeing worldwide, I do not believe this could threaten humanity’s future – if nothing else, most of humanity’s past occurred during the ‘pre-antibiotic age’, to which worst-case scenario AMR threatens a return.

    To be clear, a pandemic that killed less than 10% of the human population could easily still be among the worst events in our species’ history. For example, the ongoing COVID-19 pandemic is already a humanitarian crisis and threatens to get much worse, though it is very unlikely to threaten extinction according to this threshold. It is well worth investing great resources to mitigate such disasters and prevent more from arising.

    The reason to focus here on events that kill a larger fraction of the population is, firstly, that they are not so unlikely and, secondly, that the damage they could do would be vastly greater still — and potentially even more long-lasting.

    These impressions have pervasive influence on judging the importance of GCBRs in general, and choosing what to prioritise in particular. They are also highly controversial: One may believe that the ‘threshold’ for when an event poses a credible threat to human civilisation is even higher than I suggest (and the risk of any biological event reaching this threshold is very remote). Alternatively, one may believe that this threshold should be set much lower (or at least set with different indicators) so a wider or different set of risks should be the subject of longtermist concern.4 On all of this, more later.

    The plausibility of GCBRs

    The case that biological global catastrophic risks are a credible and urgent threat to humankind arises from a few different sources of evidence. All are equivocal.

    1. Experts express alarm about biological risks in general, and some weak evidence of expert concern about GCBRs in particular. (Yet other experts are sceptical.)
    2. Historical evidence of 'near-GCBR' events, suggesting a 'proof of principle' that there could be risks of something even worse. (Yet none have approached extinction level, nor had discernible long-run negative impacts on global civilisation approaching GCBR levels.)
    3. Worrying features of advancing biotechnology.
    4. Numerical estimates and extrapolation. (Yet the extrapolation is extremely uncertain and indictable.)

    Expert opinion

    Various expert communities have highlighted the danger of very-large scale biological catastrophe, and have assessed that existing means of preventing and mitigating this danger are inadequate.5

    Yet, as above, not all large-scale events would constitute a GC(B)R. The balance of expert opinion on the likelihood of these sorts of events is hard to assess, although my impression is that there is substantial scepticism.6 The only example of expert elicitation addressing this question that I am aware of is a 2008 global catastrophic risks survey, which offers these median estimates of a given event occurring before 2100:

    Table 1: Selected risk estimates from 2008 survey

    • Number killed in the single biggest engineered pandemic: at least 1 million dead – 30%; at least 1 billion dead – 10%; human extinction – 2%
    • Number killed in the single biggest natural pandemic: at least 1 million dead – 60%; at least 1 billion dead – 5%; human extinction – 0.05%

    This data should be weighed lightly. As Millett and Snyder-Beattie (2017) note:

    The disadvantage is that the estimates were likely highly subjective and unreliable, especially as the survey did not account for response bias, and the respondents were not calibrated beforehand.

    The raw data also shows considerable variation in estimates,7 although imprecision in risk estimates is generally a cause for greater concern.

    ‘Near-GCBR’ events in the historical record

    ‘Naturally arising’ biological extinction events seem unlikely given the rarity of ‘pathogen driven’ extinction events in natural history, and the 200,000 year lifespan of anatomically modern humans. The historical record also rules against a very high risk of ‘naturally arising’ GCBRs (on which more later). Nonetheless history has four events that somewhat resemble a global biological catastrophe, and so act as a partial ‘proof of principle’ for the danger:8

    1. Plague of Justinian (541-542 CE): Thought to have arisen in Asia before spreading into the Byzantine Empire around the Mediterranean. The initial outbreak is thought to have killed ~6 million (~3% of world population),9 and contributed to reversing the territorial gains of the Byzantine empire around the Mediterranean rim as well as (possibly) the success of its opponent in the subsequent Arab-Byzantine wars.
    2. The Black Death (1335-1355 CE): Estimated to have killed 20-75 million people (~10% of world population), and believed to have had profound impacts on the subsequent course of European history.
    3. The Columbian Exchange (1500-1600 CE): A succession of pandemics (likely including smallpox and paratyphoid) brought by the European colonists devastated Native American populations: it is thought to have contributed in large part to the ~80% depopulation of native populations in Mexico over the 16th century, and other groups in the Americas are suggested to have suffered even starker depopulation – up to 98% proportional mortality.10
    4. The 1918 Influenza Pandemic (1918 CE): A pandemic which reached almost the whole globe, and killed 50-100 million people (2.5% – 5% of world population) – probably more than either World War.

    COVID-19, which the World Health Organization declared a global pandemic on March 11th 2020, has already caused grave harm to humankind, and regrettably is likely to cause much more. Fortunately, it seems unlikely to cause as much harm as the historical cases noted here.

    All of the impacts of the cases above are deeply uncertain, as:

    • Vital statistics range from at best very patchy (1918) to absent. Historical populations (let alone their mortality rate, and let alone mortality attributable to a given outbreak) are very imprecisely estimated.
    • Proxy indicators (e.g. historical accounts, archaeology) have very poor resolution, leaving a lot to educated guesswork and extrapolation (e.g. “The evidence suggests, in European city X, ~Y% of the population died due to the plague – how should one adjust this to the population of Asia?”)
    • Attribution of historical consequences of an outbreak are highly contestable: other coincident events can offer competing (or overdetermining) explanations.

    Although these factors add 'simple' uncertainty, I would guess academic incentives and selection effects introduce a bias towards overestimates for historical cases. For this reason I've used Muehlhauser's estimates for death tolls (generally much more conservative than typical estimates, such as '75-200 million died in the Black Death'), and reiterate that the possible historical consequences are 'credible' rather than confidently asserted.

    For example, it’s not clear the plague of Justinian should be on the list at all. Mordechai et al. (2019) survey the circumstantial archeological data around the time of the Justinian Plague, and find little evidence of a discontinuity over this period suggestive of a major disaster: papyri and inscriptions suggest stable rates of administrative activity, and pollen measures suggest stable land-use (they also offer reasonable alternative explanations for measures which did show a sharp decline – new laws declined during the ‘plague period’, but this could be explained by government efforts at legal consolidation having coincidentally finished beforehand).

    Even if one takes the supposed impacts of each at face value, each has features that may disqualify it as a ‘true’ global catastrophe. The first three, although afflicting a large part of humanity, left another large part unscathed (the Eurasian and American populations were effectively separated). 1918 Flu had a very high total death toll and global reach, but not the highest proportional mortality, and relatively limited historical impact. The Columbian Exchange, although having high proportional mortality and crippling impact on the affected civilisations, had comparatively little effect on global population owing to the smaller population in the Americas and the concurrent population growth of the immigrant European population.

    Yet even though these historical cases were not ‘true’ GCBRs, they were perhaps near-GCBRs. They suggest that certain features of a global catastrophe (e.g. civilisational collapse, high proportional mortality) can be driven by biological events. And the current COVID-19 outbreak illustrates the potential for diseases to spread rapidly across the world today, despite efforts to control it. There seems to be no law of nature that prevents a future scenario more extreme than these, or that combines the worst characteristics of those noted above (even if such an event is unlikely to naturally arise).

    Whether the risk of ‘natural’ biological catastrophes is increasing or decreasing is unclear

    The cases above are 'naturally occurring' pandemic diseases, and most of them afflicted much less technically advanced civilisations in the past. Whether subsequent technological progress has increased or decreased this danger is unclear.

    Good data is hard to find: the burden of endemic infectious disease is on a downward trend, but this gives little reassurance about changes in the far right tail of pandemic outbreaks. One (modelling) datapoint comes from an AIR Worldwide study estimating the impact if the 1918 influenza outbreak happened today. It suggests that although the absolute number of deaths would be similar (tens of millions), the proportional mortality of the global population would be much lower, due to a 90% reduction in case fatality risk.

    From first principles, considerations point in both directions. On the side of natural GCBR risk getting lower:

    • A healthier (and more widely geographically spread) population.
    • Better hygiene and sanitation.
    • The potential for effective vaccination and therapeutics.
    • Understanding of the mechanisms of disease transmission and pathogenesis.

    On the other hand:

    • Trade and air travel allow much faster and wider transmission.11 For example, air travel seems to have played a large role in the spread of COVID-19.
    • Climate change may (among other effects) increase the likelihood of new emerging zoonotic diseases.
    • Greater human population density.
    • Much larger domestic animal reservoirs.

    There are many other relevant considerations. On balance, my (highly uncertain) view is that the danger of natural GCBRs has declined.

    Artificial GCBRs are very dangerous, and increasingly likely

    ‘Artificial’ GCBRs are a category of increasing concern, owed to advancing biotechnological capacity alongside the increasing risk of its misuse.12 The current landscape (and plausible forecasts of its future development) have concerning features which, together, make the accidental or deliberate misuse of biotechnology a credible global catastrophic risk.

    Replaying the worst outbreaks in history

    Polio, the 1918 pandemic influenza strain, and most recently horsepox (a close relative of smallpox) have all been synthesised 'from scratch'. The genetic sequences of all of these disease-causing organisms (and others besides) are publicly available, and the progress and democratisation of biotechnology may make the capacity to perform similar work more accessible to the reckless or malicious.13 Biotechnology therefore poses the risk of rapidly (and repeatedly) recreating the pathogens which led to the worst biological catastrophes observed in history.

    Engineered pathogens could be even more dangerous

    Beyond repetition, biotechnology allows the possibility of engineering pathogens more dangerous than those that have occurred in natural history. Evolution is infamously myopic, and its optimisation target is reproductive fitness, rather than maximal damage to another species (cf. optimal virulence). Nature may not prove a peerless bioterrorist; dangers that emerge by evolutionary accident could be surpassed by deliberate design.

    Hints of this can be seen in the scientific literature. The gain-of-function influenza experiments suggested that artificial selection could lead to pathogens with properties that enhance their danger.14 There have also been instances of animal analogues of potential pandemic pathogens being genetically modified to reduce existing vaccine efficacy.

    These cases used techniques well behind the current cutting edge of biotechnology, and were produced somewhat ‘by accident’ by scientists without malicious intent. The potential for bad actors to intentionally produce new or modified pathogens using modern biotechnology is harrowing.

    Ranging further, and reaching higher, than natural history

    Natural history constrains how life can evolve. One consequence is the breadth of observed biology is a tiny fraction of the space of possible biology.15 Bioengineering may begin to explore this broader space.

    One example is enzymes: proteins that catalyse biological reactions. The repertoire of biochemical reactions catalysed by natural enzymes is relatively narrow, and few are optimised for very high performance, due to limited selection pressure or ‘short-sighted evolution’.16 Enzyme engineering is a relatively new field, yet it has already produced enzymes that catalyse novel reactions (1, 2, 3), and modifications of existing enzymes with improved catalytic performance and thermostability (1, 2).

    Similar stories can be told for other aspects of biology, and together they suggest the potential for biological capabilities unprecedented in natural history. It would be optimistic to presume that in this space of large and poorly illuminated ‘unknown unknowns’ there will only be familiar dangers.

    Numerical estimates

    Millett and Snyder-Beattie (2017) offer a number of different models to approximate the chance of a biological extinction risk:

    Table 2: Estimates of biological extinction risk 17

    • Potentially pandemic pathogens – risk of extinction per century: 0.00016% to 0.008%. Method (in sketch): 0.01 to 0.2% yearly risk of a global pandemic emerging from accidental release in the US; multiplied by 4 to approximate worldwide risk; multiplied by 2 to include the possibility of deliberate release; 1 in 10,000 risk of extinction given a pandemic release.
    • Power law (bioterrorism) – risk of extinction per century: 0.014%. Method (in sketch): scale parameter of ~0.5; risk of 5 billion deaths = (5 billion)^-0.5; 10% chance of 5 billion deaths leading to extinction.
    • Power law (biowarfare) – risk of extinction per century: 0.005%. Method (in sketch): scale parameter of ~0.41; risk of 5 billion deaths = (5 billion)^-0.41; a war every 2 years; 10% chance of a massive death toll being driven by bio; 10% chance of extinction.

    These rough approximations may be underestimates, given the conservative assumptions in the models, the fact that the three scenarios do not exhaust the risk landscape, and the fact that the extrapolation from historical data is not adjusted for trends that (I think, in aggregate) increase the risk. That said, the principal source of uncertainty is the extremely large leap of extrapolation: power-law assumptions guarantee a heavy right tail, yet in this range other factors may drive a different distribution (either in terms of type or scale parameter). The models are (roughly) transcribed into a guesstimate here.18
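
    To make the arithmetic behind Table 2 concrete, here is a minimal sketch of how the per-century figures can be assembled from the stated inputs. This is my own hedged reconstruction, not the authors' published model: the rate of roughly one bioterror event per year, and a yearly accidental-release risk of about 0.002% to 0.1% (slightly different from the range as transcribed in the table), are assumptions chosen because they reproduce the table's headline numbers.

    ```python
    # Hedged reconstruction of the Table 2 arithmetic (illustration only).
    # The event rates and yearly risk range below are assumptions chosen so the
    # outputs roughly match the per-century figures quoted in the table.

    def ppp_extinction_risk(yearly_accident_risk_us):
        """Accidental/deliberate release of potentially pandemic pathogens."""
        yearly = yearly_accident_risk_us * 4 * 2   # x4 worldwide, x2 deliberate release
        pandemics_per_century = yearly * 100       # expected pandemic releases per century
        return pandemics_per_century * 1e-4        # 1 in 10,000 such pandemics causes extinction

    def power_law_extinction_risk(alpha, events_per_century, p_bio=1.0, p_extinction=0.1):
        """P(>= 5 billion deaths in one event) = (5 billion)^-alpha under a power-law tail."""
        p_5bn_deaths = (5e9) ** (-alpha)
        return events_per_century * p_bio * p_5bn_deaths * p_extinction

    # Accidental/deliberate release, with yearly US accident risk of ~0.002% to ~0.1%:
    print(ppp_extinction_risk(0.00002))  # ~1.6e-06 per century, i.e. ~0.00016%
    print(ppp_extinction_risk(0.001))    # ~8.0e-05 per century, i.e. ~0.008%

    # Bioterrorism: scale parameter ~0.5, assuming roughly one attack per year:
    print(power_law_extinction_risk(alpha=0.5, events_per_century=100))             # ~1.4e-04, ~0.014%

    # Biowarfare: scale parameter ~0.41, a war every 2 years, 10% chance bio-driven:
    print(power_law_extinction_risk(alpha=0.41, events_per_century=50, p_bio=0.1))  # ~5.3e-05, ~0.005%
    ```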

    GCBRs may be both neglected and tractable

    Even if GCBRs are a 'big problem', this does not entail that more people should work on them. Some big problems are hard to make better, often because they are already being addressed by many others, or because there are no good interventions available.19

    This doesn’t seem to apply to GCBRs. There are good reasons to predict this is a problem that will continue to be neglected; surveying the area provides suggestive evidence of under-supply and misallocation; and examples of apparently tractable shortcomings are readily found.

    A prior of low expectations

    Human cognition, sculpted by the demands of the ancestral environment, may fit poorly with modern challenges. Yudkowsky surveys heuristics and biases that tend to mislead our faculties: GC(B)Rs, with their unpredictability, rarity, and high consequence, appear to be a treacherous topic for our minds to navigate.

    Decisions made by larger groups can sometimes mitigate these individual faults. But the wider social and political environment presents its own challenges. There can be value divergence: a state may regard its destruction and outright human extinction as similarly bad, even if they starkly differ from the point of view of the universe. Misaligned incentives can foster very short time horizons, parochial concern, and policy driven by which constituents can shout the loudest instead of who is the most deserving.20 Concern for GCBRs – driven in large part by cosmopolitan interest in the global population, concern for the long-run future, and where most of its beneficiaries are yet to exist – has obvious barriers to overcome.

    The upshot is that GC(B)Rs lie within the shadows cast by defects in our individual reasoning, and that their reduction is a global and intergenerational public good, which standard theory suggests markets and political systems will under-provide.

    The imperfectly allocated portfolio

    Very large efforts are made on mitigating biological risks in the general sense. The US government alone planned to spend around $3 billion on biosecurity in 2019.21 Even if only a small fraction of this is 'GCBR-relevant' (see later), it looks much larger than (say) the tens of millions of dollars spent yearly on AI safety, another 80,000 Hours priority area.

    Most things are relatively less neglected than AI safety, yet they can still be neglected in absolute terms. A clue that this is the case for biological risk generally is evidence of high marginal cost-effectiveness. One example is pandemic preparedness. The World Bank suggests an investment of $1.9 to $3.4 billion in 'One Health' initiatives would reduce the likelihood of pandemic outbreaks by 20%. At this level, the economic rate of return is a (highly favourable) 14-49%. Although I think this 'bottom line' is optimistic,22 it is probably not so optimistic for its apparent outperformance to be wholly overestimation.

    There is a story to be told of insufficient allocation towards GCBRs in particular, as well as biosecurity in general. Millett and Snyder-Beattie (2017) offer a 'black-box' approach (e.g. "X billion dollars would reduce biological existential risk by Y% in expectation") to mitigating extremely high consequence biological disasters. 'Pricing in' the fact that extinction not only kills everyone currently alive, but also entails the loss of all the people who could live in the future, they report the 'cost per QALY' of a $250 billion programme that reduces biological existential risk by 1% from their previous estimates (i.e. an absolute risk reduction of 0.02 to ~2 per million over a century) to be between $0.13 and $1,600,23 superior to marginal health spending in rich countries. Contrast the billion-dollar efforts to develop and stockpile anthrax vaccines.
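
    To make the shape of that calculation explicit, here is a minimal sketch (my own hedged reconstruction, not the paper's model): cost per QALY is simply the programme cost divided by the expected future life-years saved, i.e. the absolute risk reduction times the life-years at stake. The life-year figures below (~10^16 to ~10^18) are illustrative assumptions, chosen because they roughly reproduce the quoted $0.13 to $1,600 range from the stated risk reductions.

    ```python
    # Hedged reconstruction of the 'cost per QALY' arithmetic (illustration only).
    # The future life-year figures are my own assumptions, chosen to roughly
    # reproduce the quoted $0.13-$1,600 range.

    def cost_per_qaly(programme_cost, absolute_risk_reduction, future_life_years):
        """Cost divided by the expected future life-years saved."""
        expected_life_years_saved = absolute_risk_reduction * future_life_years
        return programme_cost / expected_life_years_saved

    cost = 250e9  # $250 billion programme

    # Conservative end: 0.02-per-million risk reduction, ~1e16 future life-years at stake
    print(cost_per_qaly(cost, absolute_risk_reduction=2e-8, future_life_years=1e16))  # ~$1,250

    # Optimistic end: ~2-per-million risk reduction, ~1e18 future life-years at stake
    print(cost_per_qaly(cost, absolute_risk_reduction=2e-6, future_life_years=1e18))  # ~$0.13
    ```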

    Illustrative examples

    I suggest the following two examples are tractable shortcomings in the area of GCBR reduction (even if they are not necessarily the best opportunities), and so suggest opportunities to make a difference are reasonably common.

    State actors and the Biological Weapons Convention

    Biological weapons have some attractive features for state actors to include in their portfolio of violence: they provide a novel means of attack, are challenging to attribute, and may provide a strategic deterrent more accessible than (albeit inferior to) nuclear weapons.24 The trend of biotechnological progress may add to or enhance these attractive features, and thus deliberate misuse by a state actor developing and deploying a biological weapon is a plausible GCBR (alongside other risks which may not be ‘globally catastrophic’ as defined before, but are nonetheless extremely bad).

    The principal defence against proliferation of biological weapons among states is the Biological Weapons Convention. Of 197 state parties eligible to ratify the BWC, 183 have done so. Yet some states which have signed or ratified the BWC have covertly pursued biological weapons programmes. The leading example was the Biopreparat programme of the USSR,25 which at its height spent billions and employed tens of thousands of people across a network of secret facilities, and conducted after the USSR signed onto the BWC:26 their activities are alleged to have included industrial-scale production of weaponised agents like plague, smallpox and anthrax, alongside successes in engineering pathogens for increased lethality, multi-resistance to therapeutics, evasion of laboratory detection, vaccine escape, and novel mechanisms of disease not observed in nature.27 Other past and ongoing violations in a number of countries are widely suspected.28

    The BWC faces ongoing difficulties. One is verification: the Convention lacks verification mechanisms for countries to demonstrate their compliance,29 and the technical and political feasibility of such verification is fraught – similarly, it lacks an enforcement mechanism. Another is that states may use disarmament treaties (the BWC included) as leverage for other political ends: decisions must be made by unanimity, and thus the 8th review conference in 2017 ended without agreement due to the intransigence of one state.30 Finally (and perhaps most tractably), the BWC struggles for resources: it has around three full-time staff, a budget smaller than that of a typical McDonald's, and many states do not fulfil their financial obligations: the 2017 meeting of states parties was only possible thanks to overpayment by some states, and the 2018 meeting had to be cut short by a day due to insufficient funds.31

    Dual-use research of concern

    The gain-of-function influenza experiments are an example of dual-use research of concern (DURC): research whose results have the potential for misuse. De novo horsepox synthesis is a more recent case. Good governance of DURC remains more aspiration than actuality.

    A lot of decision making about whether to conduct a risky experiment falls on an individual investigator, and typical scientific norms around free inquiry and challenging consensus may be a poor fit for circumstances where the downside risks ramify far beyond the practitioners themselves. Even in the best case, where the scientific community is solely composed of those who only perform work which they sincerely believe is on balance good for the world, this independence of decision making gives rise to a unilateralist curse: the decision on ‘should this be done’ defaults to the most optimistic outlier, as only one needs to mistakenly believe it should be done for it to be done, even if it should not.

    In reality, scientists are subject to other incentives besides the public good (e.g. publications, patents). This drives the scientific community to make all accessible discoveries as quickly as possible, even if the sequence of discoveries that results is not the best from the perspective of the public good: it may be better that safety-enhancing discoveries occur before (easier to make) dangerous discoveries (cf. differential technological development).

    Individually, some scientists may be irresponsible or reckless. Ron Fouchier, when first presenting his work on gain of function avian influenza, did not describe it in terms emblematic of responsible caution: saying that he first “mutated the hell out of the H5N1 virus” to try and make it achieve mammalian transmission. Although it successfully attached to mammalian cells (“which seemed to be very bad news”) it could not transmit from mammal to mammal. Then “someone finally convinced [Fouchier] to do something really, really stupid” – using serial passage in ferrets of this mutated virus, which did successfully produce an H5N1 strain that could transmit from mammal-to-mammal (“this is very bad news indeed”).32

    Governance and oversight can mitigate risks posed by individual foibles or mistakes, but the track record of these mechanisms at identifying concerns in advance is imperfect. The gain-of-function influenza work was initially funded by the NIH (the same body which would subsequently declare a moratorium on gain-of-function experiments), and passed institutional checks and oversight – concerns only began after the results of the work became known. When reporting de novo horsepox synthesis to the WHO advisory committee on Variola virus research, the scientists noted:

    Professor Evans’ laboratory brought this activity to the attention of appropriate regulatory authorities, soliciting their approval to initiate and undertake the synthesis. It was the view of the researchers that these authorities, however, may not have fully appreciated the significance of, or potential need for, regulation or approval of any steps or services involved in the use of commercial companies performing commercial DNA synthesis, laboratory facilities, and the federal mail service to synthesise and replicate a virulent horse pathogen.

    One underlying challenge is there is no bright line one can draw around all concerning research. ‘List based’ approaches, such as select agent lists or the seven experiments of concern are increasingly inapposite to current and emerging practice (for example, neither of these would ‘flag’ horsepox synthesis, as horsepox is not a select agent, and de novo synthesis, in itself, is not one of the experiments of concern). Extending the lists after new cases are demonstrated does not seem to be a winning strategy, yet the alternative to lists is not clear: the consequences of scientific discovery are not always straightforward to forecast.

    Even if a more reliable governance ‘safety net’ could be constructed, there would remain challenges in geographic scope. Practitioners inclined (for whatever reason) towards more concerning work can migrate to where the governance is less stringent; even if one journal declines to publish on public safety grounds, one can resubmit to another who might.33

    Yet these challenges are not insurmountable: research governance can adapt to modern challenges; greater awareness of (and caution around) biosecurity issues can be inculcated into the scientific community; one can attempt to construct better means of risk assessment than blacklists (cf. Lewis et al. (2019)); broader intra- and inter-national cooperation can mitigate some of the dangers of the unilateralist’s curse. There is ongoing work in all of these areas. All could be augmented.

    Impressions on the problem area

    Even if the above persuades that GCBRs should be an important part of the ‘longtermist portfolio’,34 it does not answer either how to prioritise this problem area relative to other parts of the ‘far future portfolio’ (e.g. AI safety, Nuclear security), nor which areas under the broad heading of ‘GCBRs’ are the best to work on. I survey some of the most important questions, and (where I have them) offer my impressions as a rough guide.

    Key uncertainties

    What is the threshold for an event to threaten global catastrophe?

    Biological events vary greatly in their scale. At either extreme, there is wide agreement of whether to ‘rule out’ or ‘rule in’ an event as a credible GCBR: a food poisoning outbreak is not a GCBR; an extinction event is. Disagreement is widespread between these limits. I offered before a rough indicator of ‘10% of the population’, which suggests a threshold for concern for GCBRs at the upper limit of events observed in human history.

    As this plays a large role in driving my guesses over risk share, a lower threshold would tend to push risk ‘back’ from the directions I indicate (and generally towards ‘conventional’ or ‘commonsense’ prioritisation), and vice-versa.

    How likely is humanity to get back on track after global catastrophe?

    I have also presumed that a biological event which causes human civilisation to collapse (or ‘derail’) threatens great harm to humanity’s future, and thus such risks would have profound importance to longtermists alongside those of outright human extinction. This is commonsensical, but not inarguable.

    Much depends on how likely humanity is to recover from extremely large disasters which nonetheless are not extinction events. An event which kills 99% of humankind would leave a population of around 78 million, still much higher than estimates of prehistoric total human populations (which survived the 200,000 year duration of prehistory, suggesting reasonable resilience to subsequent extinction). Unlike prehistory, the survivors of a ‘99%’ catastrophe likely have much greater knowledge and access to technology than earlier times, better situating them for a speedy recovery, at least relative to the hundreds of millions of years remaining of the earth’s habitable period.35 The likelihood of repeated disasters ‘resetting’ human development again and again through this interval looks slim.

    If so, a humanity whose past has been scarred by such a vast disaster nonetheless still has good prospects to enjoy a flourishing future. If this is true, from a longtermist perspective, more effort should be spent upon disasters which would not offer reasonable prospect of recovery, of which risks of outright extinction are the leading (but may not be the only) candidate.36

    Yet it may not be so:

    • History may be contingent and fragile, and the history we have observed can at best give limited reassurance that recovery would likely occur if we “blast (or ‘bio’) ourselves back to the stone age”.
    • We may also worry about interaction terms between catastrophic risks: perhaps one global catastrophe is likely to precipitate others which ‘add up’ to an existential risk.
    • We may take the trajectory of our current civilisation to be unusually propitious out of those which were possible (consider reasonably-nearby possible worlds with totalitarian state hyper-power, or roiling great power warfare). Even if a GCBR ‘only’ causes a civilisational collapse which is quickly recovered from, it may still substantially increase risk indirectly if the successor civilisations tend to be worse at navigating subsequent existential risks well.37
    • Risk factors may be shared between GCBRs and other global catastrophes (e.g. further proliferation of weapons of mass destruction do not augur well for humanity navigating other challenges of emerging technology). Thus the risk of large biological disasters may be a proxy indicator for these important risk factors.38

    The more one is persuaded by a ‘recovery is robust and reliable’ view, the more one should focus effort on existential rather than globally catastrophic dangers (and vice versa). Such a view would influence not only how to allocate efforts ‘within’ the GCBR problem area, but also in allocation between problem areas. The aggregate risk of GCBRs appears to be mainly composed of non-existential dangers, and so this problem area would be relatively disfavoured compared to those, all else equal, where the danger is principally one of existential risk (AI perhaps chief amongst these).

    Some guesses on risk

    How do GCBRs compare to AI risk?

    A relatively common view within effective altruism is that biological risks and AI risks comprise the two most important topics to work on from a longtermist perspective.39 AI likely poses a greater overall risk, yet GCBRs may have good opportunities for people interested in increasing their impact given the very large pre-existing portfolio, more ‘shovel-ready’ interventions, and very few people working in the relevant fields who have this as their highest priority (see later).

    My impression is that GCBRs should be a more junior member of an ideal 'far future portfolio' compared to AI.40 But not massively more junior: some features of GCBRs look worrying, and many others remain unclear. When considered alongside relatively greater neglect (at least among those principally concerned with the long-term future), whatever gap lies between GCBRs and AI is unlikely to be so large as to swamp considerations around comparative advantage. I recommend those with knowledge, skills, or attributes particularly well suited to working on GCBRs explore this area first before contemplating changing direction into AI. I also suggest those for whom personal fit does not provide a decisive consideration consider this area as a reasonable candidate alongside AI.

    Probably anthropogenic > natural GCBRs.

    In sketch, the case for thinking anthropogenic risks are greater than natural ones is this:

    Our observational data, such as it is, argues for a low rate of natural GCBRs:41

    • Pathogen-driven extinction events appear to be relatively rare.
    • As Dr Toby Ord argues in the section on natural risks in his book 'The Precipice', the fact that humans have survived for 200,000 years is evidence against there being a high baseline extinction risk from any cause (biology included), and so against a high probability of extinction occurring in (say) the next 100 years (see the short calculation after this list).42
    • A similar story applies to GCBRs, given we’ve (arguably) not observed a ‘true’ GCBR, and only a few (or none) near-GCBRs.
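
    To see why a long survival record caps the plausible baseline risk, here is a small illustrative calculation (the 1% figure is purely hypothetical): if natural extinction risk were as high as 1% per century, surviving the roughly 2,000 centuries of our species' history so far would have been astronomically unlikely.

    ```python
    # Illustrative only: how a 200,000-year survival record constrains baseline risk.
    risk_per_century = 0.01      # hypothetical 1% natural extinction risk per century
    centuries_survived = 2_000   # ~200,000 years of anatomically modern humans

    p_survival = (1 - risk_per_century) ** centuries_survived
    print(p_survival)  # ~1.9e-09, i.e. surviving this long would be astronomically unlikely
    ```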

    One should update this baseline risk by all the novel changes in recent history (e.g. antibiotics, air travel, public health, climate change, animal agriculture – see above). Estimates of the aggregate impact of these changes are highly non-resilient, even with respect to sign (I think it is risk-reducing, but reasonable people disagree). Yet given this uncertainty, it seems reasonable that one should probably not adjust the central risk estimate upwards by the orders of magnitude necessary to put natural GCBR risk at 1% or greater this century.43

    I think anthropogenic GCBR is around the 1% mark or greater this century, motivated partly by the troubling developments in biotechnology noted above, and partly by the absence of reassuring evidence of a long track record of safety from this type of risk. Thus this looks more dangerous.

    Perhaps deliberate over accidental misuse

    Within anthropogenic risks one can (imperfectly) subdivide them into deliberate versus accidental misuse (compare bioterrorism to a ‘science experiment gone wrong’ scenario).44

    Which is more worrying is hard to say – there is little data to go on, and there are considerations in both directions. For deliberate misuse, the idea is that scenarios of vast disaster are (thankfully) rare among the space of 'bad biological events' (however cashed out), and so are more likely to be found by deliberate search rather than chance; for accidents, the idea is that (thankfully) most actors are well intentioned, and so there will be a much higher rate of good actors making mistakes than bad actors doing things on purpose. I favour the former consideration more,45 and so lean towards deliberate misuse scenarios being more dangerous than accidental ones.

    Which bad actors pose the greatest risk?

    Various actors may be inclined to deliberate misuse: from states, to terrorist groups, to individual misanthropes (and others besides). One key feature is what we might call actor sophistication (itself a rough summary of their available resources, understanding, and so on). There are fewer actors at a higher level of sophistication, but the danger arising from each is higher: there are many more possible individual misusers than possible state misusers, but a given state programme tends to be much more dangerous than a given individual scheme (and states themselves could vary widely in their sophistication).

    My impression is that one should expect the aggregate risk initially to arise principally from highly sophisticated bad actors, with this balance shifting over time towards less sophisticated ones. My reasoning, in sketch, is this:

    For a given danger of misuse, the barrier to entry for an actor to exploit it starts very high, but inexorably falls (top left panel, below). Roughly, the risk window opens at the vanguard, when some highly sophisticated bad actor can first access the danger, and saturates when the danger has proliferated to the point where (virtually) any bad actor can access it.46

    Suppose the risk window for every danger ultimately closes, and there is some finite population of such dangers distributed over time (top right).47 Roughly, this suggests cumulative danger first rises, and then declines (cf., also). This is similar to the shape of the danger posed by a maximally sophisticated bad actor over time, with lower sophistication corresponding to both a reduced magnitude of danger and a skew towards later in time (bottom left – 'sophisticated' and 'unsophisticated' are illustrations; there aren't two neat classes, but rather a spectrum). With actors becoming more numerous at lower levels of sophistication, this suggests the risk share of total danger shifts from more sophisticated actors to less sophisticated ones over time (bottom right – again making an arbitrary cut in the hypothesised 'sophistication spectrum').
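
    To make this qualitative picture slightly more concrete, here is a toy numerical sketch of the risk-share claim (the bottom-right panel). Every parameter is invented purely for illustration, and the eventual closing of risk windows is omitted; the only point is that aggregate risk starts out concentrated among the most sophisticated actors and shifts towards less sophisticated ones as barriers to entry fall.

    ```python
    # Toy sketch of the risk-share argument (illustration only; all parameters invented).

    def barrier(appears, t, initial=0.9, decay=0.02):
        """Entry barrier for a danger that appears at time `appears` and then falls."""
        return initial - decay * (t - appears) if t >= appears else float("inf")

    # Dangers appear steadily over the period (one every 5 time steps, say).
    danger_arrival_times = range(0, 100, 5)

    # Two illustrative actor classes: a few sophisticated actors, each very dangerous,
    # and many unsophisticated actors, each far less dangerous.
    classes = {
        "sophisticated":   {"capability": 0.9, "count": 2,    "harm_per_actor": 10.0},
        "unsophisticated": {"capability": 0.3, "count": 1000, "harm_per_actor": 0.05},
    }

    for t in range(0, 101, 25):
        totals = {}
        for name, c in classes.items():
            # A class contributes risk from every danger whose barrier it can already clear.
            usable = sum(1 for a in danger_arrival_times if barrier(a, t) <= c["capability"])
            totals[name] = usable * c["count"] * c["harm_per_actor"]
        total = sum(totals.values())
        shares = {name: round(v / total, 2) for name, v in totals.items()} if total else totals
        print(t, shares)  # risk share starts ~100% 'sophisticated', then shifts over time
    ```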

    Crowdedness, convergence, and the current portfolio

    If we can distinguish GCBRs from ‘biological risks’ in general, ‘longtermist’ views would recommend greater emphasis be placed on reducing GCBRs in particular. Nonetheless, three factors align current approaches in biosecurity to GCBR mitigation efforts:48

    1. Current views imply some longtermist interest: Even though ‘conventional’ views would not place such a heavy weight on protecting the long-term future, they would tend not to wholly discount it. Insofar as they don’t, they value work that reduces these risks.
    2. GCBRs threaten near-term interests too: Events that threaten to derail civilisation also threaten vast amounts of death and misery. Even if one discounts the former, the latter remains a powerful motivation.49
    3. Interventions tend to be ‘dual purpose’ between ‘GCBRs’ and ‘non-GCBRs’: disease surveillance can detect both large and small outbreaks, counter-proliferation efforts can stop both higher and lower consequence acts of deliberate use, and so on.

    This broad convergence, although welcome, is not complete. The current portfolio of work (set mainly by the lights of 'conventional' views) would not be expected to be perfectly allocated by the lights of a longtermist view (see above). One could imagine these efforts as a collection of vectors, their lengths corresponding to the investment they currently receive, and their directions varying in alignment with the cause of GCBR mitigation; the envelope of these would have a major axis positively (but not perfectly) correlated with the 'GCBR mitigation' axis.

    Approaches to intervene given this pre-existing portfolio can be split into three broad buckets. The first bucket simply channels energy into this portfolio without targeting – ‘buying the index’ of conventional biosecurity, thus generating more beneficial spillover into GCBR mitigation. The second bucket aims to complement the portfolio of biosecurity effort to better target GCBRs: assisting work particularly important to GCBRs, adapting existing efforts to have greater ‘GCBR relevance’, and perhaps advocacy within the biosecurity community to place greater emphasis on GCBRs when making decisions of allocation. The third bucket is pursuing GCBR-reducing work which is fairly independent of (and has little overlap with) efforts of the existing biosecurity community.

    I would prioritise the latter two buckets over the first: I think it is possible to identify which areas are most important to GCBRs, and so for directed effort to 'beat the index'.50 My impression is that the second bucket should be prioritised over the third: although GCBRs have some knotty macro-strategic questions to disentangle,51 the area is less pre-paradigmatic than AI risk, and augmenting and leveraging existing effort likely harbours a lot of value (as well as a wide field where 'GCBR-focused' and 'conventional' efforts can mutually benefit).52 Of course, much depends on how easy interventions in each bucket are: if it turns out to be much cheaper to 'buy the biosecurity index' because of very large existing interest in conventional biosecurity subfields, the first approach may become competitive.

    There is substantial overlap between ‘bio’ and other problem areas, such as global health (e.g. the Global Health Security Agenda), factory farming (e.g. ‘One Health’ initiatives), or AI (e.g. due to analogous governance challenges in both). Although this overlap suggests useful prospects for collaboration, I would hesitate to recommend ‘bio’ as a good ‘hedge’ against uncertainty between problem areas. The more indirect path to impact makes bio unlikely to be the best option by the lights of other problem areas (e.g. although I hope bio can provide some service to AI governance, one would be surprised if work on bio made greater contributions to AI governance than working on AI governance itself), and so further deliberation over one’s uncertainty (and then committing to one’s leading option) will tend to have a greater impact than a mixed strategy.53

    How to help

    General remarks

    What is the comparative advantage of Effective Altruists in this area?

    The main advantage of Effective Altruists entering this area appears to be value-alignment – in other words, appreciating the great importance of the long-run future, and being able to prioritise by these lights. This pushes towards people seeking roles where they influence prioritisation and the strategic direction of relevant communities, rather than doing particular object level work: an ‘EA vaccinologist’ (for example) is reasonably replaceable by another competent vaccinologist; an ‘EA deciding science budget allocation’ much less so.54

    Desirable personal characteristics

    There are some characteristics which make one particularly well suited to work on GCBRs.

    1. Discretion: Biosecurity in general (and GCBRs in particular) is a delicate area – one where mistakes are easy to make yet hard to rectify. The ideal norm is basically the opposite of ‘move fast and break things’, and caution and discretion are essential. To illustrate:
      • First, GCBRs are an area of substantial information hazard, as a substantial fraction of risk arises from scenarios of deliberate misuse. As such certain information ‘getting into the wrong hands’ could prove dangerous. This not only includes particular ‘dangerous recipes’, but also general heuristics or principles which could be used by a bad actor to improve their efforts: the historical trend of surprising incompetence from those attempting to use disease as a weapon is one I am eager to see continue.55 It is important to recognise when information could be hazardous; to judge impartially the risks and benefits of wider disclosure (notwithstanding personal interest in e.g. ‘publishing interesting papers’ or ‘being known to have cool ideas’); and to practice caution in decision-making (and not take these decisions unilaterally).56
      • Second, the area tends to be politically sensitive, given it intersects many areas of well-established interests (e.g. state diplomacy, regulation of science and industry, security policy). One typically needs to build coalitions of support among these, as one can seldom ‘implement yourself’, and mature conflicts can leave pitfalls that are hard to spot without a lot of tacit knowledge.
      • Third, this also applies to EA’s entry into biosecurity itself. History suggests integration between a pre-existing expert community and an ‘outsider’ group with a new interest does not always go well. This has gone much better in the case of biorisk so far, but hard won progress can be much more easily reversed by inapt words or ill-chosen deeds.
    2. Focus: If one believes the existing ‘humanity-wide portfolio’ of effort underweights GCBRs, it follows much of the work that best mitigates GCBRs may be somewhat different to typical work in biosafety, public health, and other related areas. A key ability is to be able to prioritise by this criterion, and not get side-tracked into laudable work which is less important.

    3. Domain knowledge and relevant credentials: Prior advanced understanding and/or formal qualifications (e.g. a PhD in a bioscience field) are a considerable asset, for three reasons. First, for many relevant careers to GCBRs, advanced credentials (e.g. PhD, MD, JD) are essential or highly desirable. Second, close understanding of relevant fields seems important for doing good work: much of the work in GCBRs will be concrete, and even more abstract approaches will likely rely more on close understanding rather than flashes of ‘pure’ conceptual insight. Third, (non-pure) time discounting favours those who already have relevant knowledge and credentials, rather than those several years away from getting them.

    4. US Citizenship: The great bulk of biosecurity activity is focused in the United States. For serious involvement with some major stakeholders (e.g. the US biodefense community), US citizenship is effectively a prerequisite. For many others, it remains a considerable advantage.

    Work directly now, or build capital for later?

    There are two broad families of approach to work on GCBRs, which we might call explicit versus implicit. Explicit approaches involve working on GCBRs expressly. Implicit (indirect) approaches involve pursuing a more ‘conventional’ career path in an area relevant to GCBRs, both to do work that has GCBR relevance (even if that is not the primary objective of the work) and to bring the resulting influence and career capital to bear on the problem later.57 Which option is more attractive is sensitive to the difficult topics discussed above (and other things besides).

    Reasonable people differ on the ideal balance of individuals pursuing explicit versus implicit approaches (compare above on degree of convergence between GCBR mitigation and conventional biosecurity work): perhaps the strongest argument for the former is that such work can better uncover considerations that inform subsequent effort (e.g. if ‘technical fixes’ to GCBRs look unattractive compared to ‘political fixes’, this changes which areas the community should focus on); perhaps the strongest argument for the latter is that there are very large amounts of financial and human capital distributed nearby to GCBRs, so efforts to better target this portfolio are very highly leveraged.

    That said, this discussion is somewhat overdetermined at present by two considerations. The first is that both approaches currently appear undersupplied with human capital, regardless of one’s view of what the ideal balance between them should be. The second is that there are few immediate opportunities for people to work directly on GCBRs (although that will hopefully change in a few years, see below), and so even for those inclined towards direct work, indirect approaches may be the best ‘holding pattern’ at present.

    Tentative career advice

    This section offers some tentative suggestions of what to do to contribute to this problem. It is hard to overstate how uncertain and non-resilient these recommendations are: I recommend that folks considering a career in this area heavily supplement them with their own research (and conversations with others) before making big decisions.

    1. What to study at university

    There’s a variety of GCBR-relevant subjects that can be studied, and the backgrounds of people working in this space are highly diverse.58 One challenge is that GCBRs interface fuzzily with a number of areas, many of which are themselves interdisciplinary.

    These subjects can be roughly divided into two broad categories: technical fields and policy fields. Most careers require some knowledge of both. Of the two, it is probably better to pick up technical knowledge first: it is generally harder to acquire later in a career than most non-technical subjects, and career trajectories of people with a technical background moving towards policy are much more common than the reverse.

    Technical fields

    Synthetic biology (roughly stipulated as ‘bioengineering that works’) is a key engine of biological capabilities, and so also one that drives risks and opportunities relevant to GCBRs. Synthetic biology is broad and nebulous, but it can be approached from the more mechanistic (e.g. molecular biology), computational (e.g. bioinformatics), or integrative (e.g. systems biology) aspects of biological science.

    To ‘become a synthetic biologist’, a typical route is a biological sciences or chemistry undergraduate degree, with an emphasis on one or more of these sub-fields alongside laboratory research experience, followed by graduate training in a relevant lab. iGEM is another valuable opportunity if one’s university participates. Other subfields of biology also provide relevant background and experience, roughly in proportion to their proximity to synthetic biology (e.g. biophysics is generally better than histology).

    Another approach, particularly relevant for more macrostrategy-type research, is to study areas of biology with a more abstract bent (e.g. mathematical and theoretical biology, evolutionary biology, ecology). These tend to be populated by a mix of mathematically inclined biologists and biologically inclined mathematicians (or computer scientists and others from ‘math-heavy’ fields).

    A further possibility is scientific training in an area whose subject matter will likely be relevant to specific plausible GCBRs (examples might be virology or microbiology for certain infectious agents, or immunology, pharmacology, and vaccinology for countermeasures). Although a broad portfolio of individuals with domain-specific scientific expertise would be highly desirable in a mature GCBR ecosystem, the field’s current small size disfavours heavy subspecialisation, especially given that our understanding of the risk landscape (and thus which specialties are most relevant) may change.

    If there are clear new technologies that we will need to develop to mitigate GCBRs, it’s possible that you could also have a significant impact as a generalist engineer or tech entrepreneur. This could mean that general training in quantitative subjects, in particular engineering, would be helpful.

    Policy fields

    Unlike technical fields, policy fields are accessible to those with backgrounds in the humanities or social sciences as well as ‘harder’ science subjects.59 For those who have already completed an undergraduate degree in the humanities or social sciences and want to move into this area, policy fields therefore tend to be a better route than technical fields (though people with technical backgrounds are often highly sought after for government jobs).

    The most relevant policy subjects go under the labels of ‘health security’, ‘biosecurity’, or ‘biodefense’.60 The principal emphasis of these areas is the protection of people and the environment from biological threats, giving them the greatest ‘GCBR relevance’. Focused programmes exist, such as George Mason University’s Biodefense programmes (MS, PhD). Academic centres in this area, even if they do not teach, may take research interns or PhD students in related disciplines (e.g. Johns Hopkins Centre for Health Security, Georgetown Centre for Global Science and Security). The ELBI fellowship and SynbioLEAP are also good opportunities (albeit generally for mid or later-career people) to get further involved in this arena.

    This area is often approached in the context of other (inter-)disciplines. Security studies/IR ‘covers’ aspects of biodefense, with (chemical and) biological weapon non-proliferation the centre of their overlap. Science and technology studies (STS), with its interest in socially responsible science governance, has some common ground with biosecurity (dual use research of concern is perhaps the centre of this shared territory). Public health is arguably a superset of health security (sometimes called health protection), and epidemiology a closely related practice. Public policy is another relevant superset, although fairly far-removed in virtue of its generality.

    2. ‘Explicit’ work on GCBRs

    There are only a few centres which work on GCBRs explicitly. To my knowledge, these are the main ones:

    • The Centre for Health Security (CHS)
    • The Nuclear Threat Initiative (NTI)
    • The Future of Humanity Institute (FHI)
    • Centre for the Study of Existential Risk (CSER)

    As above, this does not mean these are the only places which contribute to reducing GCBRs: a lot of effort by other stakeholders, even if not labelled as (or primarily directed towards) GCBR reduction, nonetheless contributes to it.

    It is both hoped and expected that the field of ‘explicit’ GCBR-reduction work will grow dramatically over the next few years, and at maturity be capable of absorbing dozens of suitable people. At the present time, however, prospects for direct work are somewhat limited: forthcoming positions are likely to be rare and highly competitive – the same applies to internships and similar ‘limited term’ roles. More contingent roles (e.g. contracting work, self-contained projects) may be possible at some of these, but this work has features many will find unattractive (e.g. uncertainty, remote work, no clear career progression, little career capital).

    3. Implicit approaches

    Implicit approaches involve working at a major stakeholder in a nearby space, in the hopes of enhancing their efforts towards GCBR reduction and cultivating relevant career capital. I sketch these below, in highly approximate order of importance:

    United States Government

    The US government represents one of the largest accessible stakeholders in fields proximate to GCBRs. Positions are competitive, and many roles in a government career are unlikely to be solely focused on matters directly relevant to GCBRs. Individuals taking this path should not consider themselves siloed to a particular agency: career capital is transferable between agencies (and experience in multiple agencies is often desirable). Work with certain government contractors is one route to a position at some of these government agencies.

    Relevant agencies include:

    • Department of Defense (DoD)
      • Defense Advanced Research Projects Agency (DARPA)
      • Defense Threat Reduction Agency (DTRA)
      • Office of the Secretary of Defense
      • Offices that focus on oversight and implementation of the Cooperative Threat Reduction Program (Counter WMD Policy & Nuclear, Chem Bio Defense)
      • Office of Net Assessment (including Health Affairs)
    • State Department
      • Bureau of International Security and Nonproliferation
      • Biosecurity Engagement Program (BEP)
    • Department of Health and Human Services (HHS)
      • Centers for Disease Control and Prevention (CDC)
      • Office of the Assistant Secretary for Preparedness and Response (ASPR)
      • Biomedical Advanced Research and Development Authority (BARDA)
      • Office of Global Affairs
    • Department of Homeland Security
    • Federal Bureau of Investigation, Weapons of Mass Destruction Directorate
    • U.S. Agency for International Development, Bureau of Global Health, Global Health Security and Development Unit
    • The US intelligence community (broadly)
      • Intelligence Advanced Research Projects Agency (IARPA)

    Post-graduate qualifications (PhD, MD, MA, or engineering degrees) are often required. Good steps to move into these careers are the Presidential Management Fellowship, the AAAS Science and Technology Policy Fellowship, the Mirzayan Fellowship, and the Epidemic Intelligence Service fellowship (for public health/epidemiology).

    Scientific community (esp. synthetic biology community)

    It would be desirable for those with positions of prominence in academic scientific research or leading biotech start-ups, particularly in synthetic biology, to take the risk of GCBRs seriously. Professional experience in this area also lends legitimacy when interacting with these communities. SynbioLEAP is one useful programme to be aware of. A further dividend is that this area is a natural fit for those wanting to work directly on technical contributions and countermeasures.61

    International organisations

    The three leading candidates are the UN Office for Disarmament Affairs (UNODA), the World Health Organisation (WHO), and the World Organization for Animal Health (OIE).

    WHO positions tend to be filled by those who have spent their careers at other relevant organisations. For more junior roles across the UN system, there is the Junior Professional Officer (JPO) programme for early-career professionals. Both sorts of positions are extraordinarily competitive. A further challenge is the limited number of positions orientated towards GCBRs specifically: as mentioned before, the Biological Weapons Convention’s Implementation Support Unit (ISU) comprises three people.

    Academia/Civil society

    There are a relatively small number of academic centres working on related areas, as well as a similarly small diaspora of academics working independently on similar topics (some listed above). Additional work in these areas would be desirable.62

    Relevant civil society groups are thin on the ground, but Chatham House (International Security Department) and the National Academies of Sciences, Engineering, and Medicine (NASEM) are two examples.

    (See also this guide on careers in think tanks).

    Other nation states

    Roles in other nation states parallel to those mentioned for the United States (in security, intelligence, and science governance) likely have high value, albeit probably less than equivalent roles in either the US or international organisations. My understanding of these is limited (some pointers on China are provided here).

    Public Health/Medicine

    Public health and medicine are natural avenues from the perspective of disease control and prevention, as well as the treatment of infectious disease, and medical and public health backgrounds are common in senior decision makers in relevant fields. That said, training in these areas is time-inefficient, and seniority in these fields may not be the most valuable career capital to have compared to those above (cf. medical careers).

    4. Speculative possibilities

    Some further (even) more speculative routes to impact are worth noting:

    Grant-making

    The great bulk of grants to reduce GCBRs (explicitly) are made by Open Philanthropy (under the heading of ‘Biosecurity and Pandemic Preparedness’).63 Compared to other potential funders loosely in the ‘EA community’ interested in GCBRs, Open Phil has two considerable advantages: i) a much larger pool of available funding; ii) staff dedicated to finding opportunities in this area. Working at Open Phil, even if one’s work would not relate to GCBRs, may still be a strong option from a solely GCBR perspective (i.e. ignoring benefits to other causes).64

    Despite these strengths, Open Phil may not be able to fill all available niches of an ideal funding ecosystem, as Carl Shulman notes in his essay on donor lotteries.65 Yet even if other funding sources emerge which can fill these niches, ‘GCBR grantmaking capacity’ remains in short supply. People skilled at this could have a considerable impact (either at Open Phil or elsewhere), but I do not know how such skill can be recognised or developed. See 80,000 Hours’ career review on foundation grantmaking for a general overview.

    Operations and management roles

    One constraint on expanding the number of people working directly on GCBRs is limited operations and management capacity. As in the wider EA community, people with these skills remain in short supply (although this seems to be improving).66

    A related area of need is roughly termed ‘research management’, combining management and operations talent with in-depth knowledge of a particular problem area. These types of roles will become increasingly important as the area grows.

    Public facing advocacy

    It is possible that public advocacy may be a helpful lever for mitigating GCBRs, although it is also possible that such advocacy would be counterproductive. In the former case, backgrounds in politics, advocacy (perhaps analogous to nuclear disarmament campaigns), or journalism may prove valuable.67 Per the previous discussion of discretion and information hazards, broad coordination with other EAs working on existential risk topics is critical.

    Engineering and Entrepreneurship

    Many ways of reducing biorisk will involve the development of new technologies. This means there may also be opportunities to work as an engineer or tech entrepreneur. Following one of these paths could mean building expertise through working as an engineer at a “hard-tech” company or working at a startup, rather than building academic biological expertise at university. Though if you do take this route, make sure the project you’re helping isn’t advancing biotech capabilities that could make the problem worse.

    Other things

    Relevant knowledge

    GCBRs are likely a cross-disciplinary problem, and although it is futile to attempt to become an ‘expert at everything’, basic knowledge of relevant fields outside one’s area of expertise is key. In practical terms, this means those approaching GCBRs from a policy angle should acquaint themselves with the relevant basic science (particularly molecular and cell biology), and those with a technical background with the policy and governance landscape.

    Exercise care with original research

    Although reading around the subject is worthwhile, independent original research on GCBRs should be done with care. The GCBR risk landscape has a high prevalence of potentially hazardous information, and in some cases the best approach will be prophylaxis: to avoid certain research directions which are likely to uncover these hazards. Some areas look more robustly positive, typically due to their defense-bias: better attribution techniques, technical and policy work to accelerate countermeasure development and deployment, and more effective biosurveillance would be examples of this. In contrast, ‘red teaming’ and exploring which genetic modifications would make a given pathogen most dangerous are two leading examples of research plausibly better not done at all, and certainly better not done publicly.

    Decisions here are complicated, and likely to be better made by the consensus in the GCBR community, rather than amateurs working outside of it. Unleashing lots of brainpower on poorly-directed exploration of the risk landscape may do more harm than good. A list of self-contained projects and research topics suitable for ‘external’ researchers is in development: those interested are encouraged to get in touch.

    Want to work on reducing global catastrophic biorisks? We want to help.

    We’ve helped dozens of people formulate their plans, and put them in touch with academic mentors. If you want to work on this area, apply for our free one-on-one advising service.

    Find opportunities on our job board

    Our job board features opportunities in biosecurity and pandemic preparedness.

      The post Reducing global catastrophic biological risks appeared first on 80,000 Hours.

      The Undercover Economist speaks to 80,000 Hours https://80000hours.org/2014/04/the-undercover-economist-speaks-to-80-000-hours/ Tue, 22 Apr 2014 13:38:00 +0000 http://80000hours.org/2014/04/the-undercover-economist-speaks-to-80-000-hours/

      Tim Harford recently spoke to us at Oxford. He’s a journalist for the Financial Times and the best-selling author of the Undercover Economist, which we’d recommend as a popular introduction to Economics. He also wrote Adapt, which argues that trial and error is the best strategy for solving important global problems. The arguments he makes fit with some of the arguments we have made for trial and error being a good way to plan your career.

      Tim gave a talk on innovation, similar to this. The talk introduced a distinction between two types of innovation, and asked which one is more important:

      1. Marginal improvements – incremental improvements to existing systems.

      2. Revolutionary improvements – transformations of existing systems to create new ones.

      For instance, developing the Fosbury Flop was a revolutionary improvement for the high jump. Testing out different diet regimes for athletes or different ways of running up to the bar would yield marginal improvements.

      If you’re aiming to maximise your impact, which type of innovation should you aim to do?

      Tim introduced some reasons that revolutionary improvements can be neglected:

      • The value is more difficult to capture – Revolutionary improvements tend to be the types of innovations that quickly get widely copied, and so it’s difficult for the inventor to capture the benefits of the innovation for themselves. This reduces the economic incentives for individuals or organisations to invest in this kind of innovation.

      • Revolutionary improvements are often long-shots, which makes them risky and low-status to work on – Attempts at revolutionary improvements often have a low chance of success. This means they will get neglected by individuals or organisations who are risk-averse. Further, since status only comes with success, working on long-shots is personally unappealing.

      On the other hand, revolutionary improvements could receive over-investment because:

      • People are overconfident – There’s psychological evidence that people often wildly overestimate their abilities and chances of success. This could encourage people to over-invest in long-shots.

      • Some people are risk seeking – Sometimes people prefer small chances of very good outcomes, for instance when buying lottery tickets. Again, this could encourage over-investment in long-shots.

      • Governments and philanthropists take action to fight market failures – Governments and philanthropists are aware that individuals and organisations are under-incentivised to invest in revolutionary innovation, so they provide additional funding to science and the nonprofit sector. If governments and philanthropists have over-invested, further work may not yield outsized expected returns.

      Where does it all balance out? Should members of 80,000 Hours seek out long-shots with the aim of revolutionary improvements, or stick to marginal gains? We think it’s difficult to give a general rule of thumb. Rather, we need to examine individual domains to decide which strategy is most promising. Long-shots look best in domains which are dominated by risk-averse institutions and where there’s insufficient government action.

      Ideally, we can find empirical data to test the two strategies. Tim mentioned a study1 comparing medical research funding from the Howard Hughes Medical Institute (HHMI) and the National Institutes of Health (NIH). You can see a media summary of the study here.

      The HHMI investigator program grants are more aimed at fostering long-shots: the funding is longer term and more flexible, and backs individuals rather than rigid proposals. The NIH provides funding via R01 grants. These are relatively focused on reducing the chances of failure: the funding is for proposals rather than individuals and has shorter review cycles which are less forgiving of failure. The study found that recipients of HHMI funding submitted papers that were 35% more likely to fail to get published, but were twice as likely to produce a paper in the top 1% by citations.

      This is some evidence to suggest that, within medical research, working on revolutionary long-shots may offer higher returns. In the last few months, the NIH has moved to provide more HHMI-style grants. Unfortunately, empirical research comparing the two strategies is rarely available.
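      To see how figures like these can favour the long-shot funder despite the extra failures, here is a deliberately crude back-of-the-envelope calculation. The two quoted ratios come from the study summary above; the base rates and the relative value of a top-1% paper are invented assumptions, not figures from the study.

```python
# Deliberately crude per-paper arithmetic: the 2x and 1.35x ratios come from the
# study as quoted above; every other number here (base rates, relative values)
# is an invented assumption, purely to show how the trade-off can net out.
baseline_top = 0.01    # assumed baseline chance a paper lands in the top 1% by citations
baseline_fail = 0.10   # assumed baseline chance a paper never gets published

hhmi_top = baseline_top * 2.0     # "twice as likely" (quoted ratio)
hhmi_fail = baseline_fail * 1.35  # "35% more likely to fail" (quoted ratio)

value_ordinary, value_top = 1.0, 50.0  # assumed relative value of a top-1% paper

def expected_value(p_top, p_fail):
    p_ordinary = 1.0 - p_top - p_fail
    return p_top * value_top + p_ordinary * value_ordinary

print(f"NIH-style expected value per paper:  {expected_value(baseline_top, baseline_fail):.2f}")
print(f"HHMI-style expected value per paper: {expected_value(hhmi_top, hhmi_fail):.2f}")
```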

      Tim concluded by saying that ultimately both types of innovation are important and should be supported. Looking at simple models like the multi-armed bandit suggests that the optimal strategy for society is lots of marginal innovation with some long-shots thrown in to prevent ending up in a local optimum.
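      The bandit framing can be made concrete with a small simulation. Below is a minimal epsilon-greedy sketch in which all payoffs are invented and the ‘long-shot’ arm is assumed to have the higher expected value; under those assumptions, a strategy of mostly exploiting with a little exploration avoids getting stuck on the merely ‘marginal’ arms.

```python
# Minimal epsilon-greedy bandit sketch (all payoffs invented for illustration):
# 'marginal' arms pay off reliably; the 'long-shot' arm rarely pays off, but is
# assumed here to have the higher expected payoff (0.05 * 50 = 2.5 vs ~1.0).
import random

random.seed(0)

ARMS = ["marginal-A", "marginal-B", "long-shot"]

def pull(arm):
    if arm == "long-shot":
        return 50.0 if random.random() < 0.05 else 0.0  # rare, large payoff
    return random.gauss(1.0, 0.2)                       # steady, small payoff

def run(epsilon, rounds=5000):
    totals = {a: 0.0 for a in ARMS}
    counts = {a: 0 for a in ARMS}
    reward = 0.0
    for _ in range(rounds):
        if random.random() < epsilon or not all(counts.values()):
            arm = random.choice(ARMS)                              # explore
        else:
            arm = max(ARMS, key=lambda a: totals[a] / counts[a])   # exploit best so far
        r = pull(arm)
        totals[arm] += r
        counts[arm] += 1
        reward += r
    return reward / rounds

for eps in [0.0, 0.05, 0.2]:
    print(f"epsilon = {eps:.2f}: average payoff per round = {run(eps):.2f}")
```

      With these made-up payoffs, pure exploitation (epsilon = 0) tends to settle on a marginal arm, while adding a modest amount of exploration discovers the long-shot’s higher average payoff.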

      Aside: are we short of entrepreneurs?

      These arguments also apply to the issue of whether we need more social entrepreneurs. If the social sector is risk-averse, doesn’t reward failure and is short of government action, then we might expect working on entrepreneurial projects (which are typically long-shots) to have higher expected returns.

      Likewise, can for-profit entrepreneurs be expected to earn more than salaried employees? We’d expect people to be somewhat risk-averse, which might create an opportunity for someone more risk-neutral to earn extra returns. On the other hand, if enough people are irrationally overconfident and risk-loving, and governments aim to subsidise startups, then the earnings of entrepreneurs might not be higher on average.

      Overall, we’re unclear how it balances out, which means we need to turn to an empirical investigation of the financial returns to entrepreneurship. With social entrepreneurship, our guess is that it’s neglected, but again, we’re highly uncertain and want to do a more in-depth investigation.

      Thank you to Avi Roy for help with this article.



      1. P. Azoulay, J. Zivin, and G. Manso, RAND Journal of Economics, 42(3), pp. 527-554, 2011, Incentives and Creativity: Evidence from the Academic Life Sciences, http://pazoulay.scripts.mit.edu/docs/hhmi.pdf 

      The post The Undercover Economist speaks to 80,000 Hours appeared first on 80,000 Hours.

      Case study: earning to give compared to medical research https://80000hours.org/2014/02/case-study-earning-to-give-compared-to-medical-research/ Wed, 19 Feb 2014 17:06:00 +0000 http://80000hours.org/2014/02/case-study-earning-to-give-compared-to-medical-research/

      Introduction

      Ramit came to us with a simple question: should I try to train as a medic with the aim of doing biomedical research, or should I seek a high earning job in finance and pursue Earning to Give?

      He’s currently doing both – working as a quantitative financial analyst giving away more than a third of his salary (he was an early stage funder of Give Directly) and taking pre-med courses part time, as well as other projects!

      Ramit’s initial thought was that the biomedical research path would be better. Read on to find out how he came to change his mind, and came up with a new set of next steps.

      The research

      In our first meeting, we discussed Ramit’s background, the decisions facing him and his thoughts on their pros and cons. Ramit’s guess was that he could make a considerably larger contribution as a medical researcher, in part because that might allow him to work on potentially very high return projects like vaccine development. We also discussed some big picture questions, like how concern for the far future might be relevant to picking causes.

      After this discussion, we decided to do a full comparison of earning to give and biomedical research. Here’s the report we produced. We also started a shallow investigation of biomedical research careers. You can see two of the four interviews we’ve performed for that here and here.

      We ended up coming down in favor of earning to give, especially if Ramit supports the best causes, though we thought it was a difficult call, which is highly dependent on Ramit’s prospects in the two careers. One key reason in favor of earning to give was that we think the best causes he can support through donations over the foreseeable future are likely to be considerably more effective than the biomedical research cause. Another key reason was that in the earning to give path, his impact can happen much earlier, and we think doing good earlier is generally better than doing good later.

      Due to the difficulty of the decision, however, our main recommendation was to find out more about whether he might be a good fit in research. We recommended speaking to more researchers and trying out a research role. All the biomedical researchers we spoke to said that good researchers were highly valuable, so if Ramit has a good chance of becoming a good researcher, then the decision becomes much less clear. Another key variable is Ramit’s earning potential in finance. We suspected he could earn considerably more in a trading or investing position, rather than a research position, so encouraged him to speak to recruiters and make job applications in order to find out.

      In our second meeting, we discussed Ramit’s reactions to the report. He was surprised that we thought it was unclear whether research beat earning to give, given that he previously thought research was clearly better. In general, he agreed with our conclusions and ideas for next steps. We also shared our guesses on his earnings prospects in finance.

      In the end, Ramit switched from having medical research as his best guess path, to continuing in finance earning to give. He also became more in favor of donating to meta-charities, like GiveWell, which we discussed during our conversations.

      Lessons learned

      • We improved our process for comparing earning to give to direct work. Applying this, we confirmed our belief that it’s difficult to compare research to earning to give, and in general much comes down to the relative ability of the person in the two paths.
      • We found out a lot about how to lead a career and have impact in medical research, which we’ll write up separately.

      Summary of the case study

      Their plans before starting:

      Cause and Mission:

      • Socially Valuable Research – Develop vaccines and make vaccine development more efficient as an academic researcher – 60%
      • Fighting Global Poverty through earning to give in finance – 20%
      • Something else – 20%

      Next step:

      • Finish up pre-medical classes (postbacc) and apply for MD / PhD programs starting in June 2014 – 60%
      • Seek higher earning jobs in finance – 20%
      • Something else – 20%

      Questions they asked us:

      Should I continue in finance and seek to maximise my earnings, or should I start to study medicine?

      How many hours did we spend?: 28 hours

      Information they gained:

      • The potential for earnings in trading or investing roles is much higher than his current role. It’s worth applying for other positions in order to better assess whether he can earn more.
      • In medical research, there’s potential to become stranded mid-career.
      • It’s tough to say whether medical research or Earning to Give in finance is more high impact, but in this case 80,000 Hours slightly favors Earning to Give overall due to the higher flexibility and ability to support more promising causes.
      • Working in a lab during the summer is one good way to better assess fit with biomedical research.
      • We think there are strong donation opportunities within meta-charities, for instance, donating to GiveWell.
      • The combination of medicine and programming skill is seen as highly valuable in biomedical research.
      • Some researchers believe neglected tropical diseases could be a good area to work on within biomedical research, though others favor focusing on basic science.

      Their plans after finishing the case study:

      Cause:

      • Global poverty > global health – 60%
      • Biomedical Research – 20%
      • Meta-charity (e.g. GiveWell) – 20%

      Mission:

      • Earning to give in finance – 50%
      • Medical researcher – 25%
      • Train medicine part-time, then choose which way to focus in several years – 25%

      Next step:
      Probabilities add to more than 100% because the options are not mutually exclusive

      • Finish pre-medical postbacc program in 6 months – 95%
      • Apply for trading positions – 100% (currently ongoing)
      • Apply for bioinformatics research position – 60%

      Plan changes:

      • Substantial update in favor of earning to give (from 20% to 50%)
      • Substantial update in favor of supporting meta-charities (from 0% to 20%)
      • Next steps changed to include applying for more finance roles immediately and applying for research roles to learn more about them.

      In their own words:

      Over the past decade, I’ve invested a lot of time thinking about how best I could improve the lot of humanity. I had jumped between earning to give and direct intervention and back.

      When I asked 80,000 hours for career advice, I was again stuck between an earning to give path and a medical research path. Their analysis added a level of rigor and clarity that I had never put into this obviously very important decision.

      They leveraged their resources and contacts to determine what it takes to make it in the medical research field and the likelihood of making a big impact. They looked into various large earning professions that fit my skills and talked to me about earnings trajectories. They told me what they knew and what they didn’t. And they gave me a path to gather the information that I would need to finally determine what my optimal route is.

      Of course, complex decisions with numerous moving parts are always difficult and uncertain, but 80,000 hours added rigor to the career decision process that makes the path forward that much clearer.

      The post Case study: earning to give compared to medical research appeared first on 80,000 Hours.

      Interview with leading HIV vaccine researcher – Prof. Sir Andrew McMichael https://80000hours.org/2014/01/interview-with-leading-hiv-vaccine-researcher-prof-sir-andrew-mcmichael/ Tue, 28 Jan 2014 20:27:00 +0000 http://80000hours.org/2014/01/interview-with-leading-hiv-vaccine-researcher-prof-sir-andrew-mcmichael/

      Introduction

      Andrew McMichael

      Continuing our investigation into medical research careers, we interviewed Prof. Andrew McMichael. Andrew is Director of the Weatherall Institute of Molecular Medicine in Oxford, and focuses especially on two areas of special interest to us: HIV and flu vaccines.

      Our aim in this interview was to gain the perspective of someone who had entered medical research as a medic, whereas our previous interviewee, John Todd, entered as a biologist, and to gain the perspective of a senior researcher focused on infectious diseases rather than genetics. We also wanted to test some hypotheses from our two previous interviews (here and here).

      Andrew was introduced to us by John Todd. The interview was conducted in person and recorded. The following is an abbreviated selection of key quotes, reorganised for clarity.

      Key points made

      • Andrew would recommend starting in medicine for the increased security, better earnings, broader perspective and greater set of opportunities at the end. The main cost is that it takes about 5 years longer.
      • In the medicine career track, you qualify as a doctor in 5-6 years, then you work as a junior doctor for 3-5 years, while starting a PhD. During this time, you start to move towards a promising speciality, where you build your career.
      • In the biology career track, get a good undergraduate degree, then do a PhD. It’s very important to join a top lab and publish early in your career. Then you can start to move towards an interesting area.
      • After you finish your PhD is a good time to reassess. It’s a competitive career, and if you’re not headed towards the top, be prepared to do something else. Public health is a common backup option, which can make a significant contribution. If you’ve studied medicine, you can do that. People sometimes get stranded mid-career, and that can be tough.
      • An outstanding post-doc applicant has a great reference from their PhD supervisor, is good at statistics/maths/programming, and has published in a top journal.
      • If you qualify in medicine in the UK, you can earn as much as ordinary doctors while doing your research, though you’ll miss out on private practice. In the US, you’ll earn less.
      • Some exciting areas right now include stem cell research, neuroscience, psychiatry and the HIV vaccine.
      • To increase your impact, work on good quality basic science, but keep an eye out for applications.
      • Programming, mathematics and statistics are all valuable skills. Other skills shortages develop from the introduction of new technologies.
      • Good researchers can normally get funded, and Andrew would probably prefer a good researcher to a half million pound grant, though he wasn’t sure.
      • He doesn’t think that bad methodology or publication bias is a significant problem in basic science, though it might be in clinical trials.

      Some updates for us

      • All of our interviewees have agreed that good researchers can generally get funded, which also fits to what we read in GiveWell’s interviews, so we’re inclined to accept this.
      • John Todd strongly preferred additional good researchers to money, while Andrew was less sure, so we slightly updated in favor of money being important.
      • We’re still unsure about the medicine vs. biology track, though are leaning towards medicine for most people due to the stronger back-up options and higher earnings.
      • Both Andrew and John agreed that quantitative skills were valuable, so we’re inclined to accept this.

      The interview

      Background

      Andrew comes from a family of medical researchers. Always knowing he wanted to do research, he studied medicine, then did part 3 biochemistry at Cambridge, 3 years of clinical practice, and a PhD. After his PhD he went to Stanford, where he worked as a researcher in Hugh McDevitt’s lab, before moving to Oxford in 1977. Today, he primarily works on flu and HIV vaccines.

      Did you ever consider doing anything else?

      “I always wanted to do research that was related to clinical medicine, and in immunology you can either go into infectious disease or autoimmune disease if you have a clinical interest. While I was at Stanford I did some work on rheumatology. About that time I got interested in the HLA system, then how it might translate into immune responses… work on the T-cell recognition of viruses came out. I thought, ‘that’s a really good thing to look at – look at virus infection.’ So I started working on flu, which I’m still working on. I’m going to China about flu today. And then along the line HIV came along, so then I worked on HIV. I always felt these were clinically important topics.”

      Getting started in this career

      “If you’re going the PhD route, you want to go to the best labs in your general area – neuroscience, immunology, cancer, cell biology. You just need to go where the best science is – the most Nature papers, the most Cell papers.

      “If you’re going the clinical medicine route, you may see an opportunity like dermatology, where there’s not many people going but you see great opportunities. Dermatology is fantastic for clinical research, because you can access the tissue. There’s a whole host of diseases that are not well understood, relating to major systemic diseases, like skin rashes in HIV, or in autoimmune diseases.

      “In medicine, you’d qualify in medicine, then do about 3 years of training, and maybe a bit more, and start heading towards a speciality (like cardiology, infectious disease, dermatology). You just start seeing what kind of research is going on in that field, who the best people are and where you might go. You’re doing junior doctor jobs, and trying to decide which direction your career is going. Then you might pick an area. Then you start developing a career.

      “It’s very important to have mentors, people who advise you. They’re your teachers initially, then your friends and advisors.”

      Is it better to start by doing an MD?

      “You need to put aside close to 10 years: 5-6 years for medical degree and junior training which is 3 years. You’ll also need to carry on doing some medicine while doing your PhD. Overall it takes 10-12 years before you’re fully qualified in medicine and able to run a research program. It’s tough, especially if you want to start a family.”

      What would you recommend to someone who’s starting their career?

      “If it was my son or daughter, I’d say do medicine. There’s more security, you’ve always got a second career as a fall back. There are a lot of different opportunities in medicine that you can explore. When you’re 21, you may not see the best things to work on.

      “One advantage of medicine is that you have a bit of extra time to spot an opportunity. That might help you find something that’s unexplored, but turns out to be important. One example is that there’s some fantastic things going on in Ophthalmology: gene therapy and stem cell work in the eye. Coming from a PhD background you might not see that being a place to go.

      “More generally, if you want to see patients, you have to do medicine. It’s a longer process, but there’s more opportunities at the end. There’s better salaries. If you want to get to research fast, go the PhD route. It’s tougher, and there’s more competition and it’s a sharper pyramid – though they tend to be the ones who win the Nobel Prizes.”

      To what extent does going into research as a medic hamper your earnings?

      “In the UK it’s not too bad. Most of the research will be in medical schools, and as a clinical researcher you’ll be on a very similar salary to other medics. Where you’ll lose out is in private practice. Some medics do 1-2 days a week and almost double their earnings! It’s much more of a problem in the US.”

      Increasing your impact within the field

      What are some of the promising areas right now?

      “Stem cell research is amazing and exciting. I think neuroscience is. The brain is incredibly complex, but you’re beginning to be able to get insight into how it works. Psychiatry is probably ripe for development and explanation of psychiatric disorders, using combinations of genetics, cell biology and neuroscience. In my own area, the HIV vaccine problem is a big, big question that is now open to quite sophisticated science to underpin it and develop it. I think there are many more, but those are examples. They’re exciting to me, but talk to different people and you’ll get different views.”

      What are the main things preventing researchers from producing more value? For instance, Iain Chalmers has argued that 90% of medical research is wasted due to bad study design and other biases.

      “I think these kinds of issues are more of a problem in clinical trials. You’re not going to get funding to do a laboratory based project if it has been done before or it isn’t a very sound idea. The process is very rigorous.”

      “I agree that negative results tend not to be published, which does bias the field occasionally. And sometimes something gets published, but there’s never any follow up, so you might doubt it’s a real result. But in basic science, the word normally gets out one way or another. It’ll probably be mentioned in reviews eventually.”

      We asked Andrew about how feasible it is to target your career at neglected areas. As explained above, in the PhD route, he stressed the importance of starting in the best labs. In the clinical route, you could look to pick a promising speciality in the 4 years after qualifying.

      What strategies could you use for having more impact? For instance, our previous interviewee recommended trying to secure long-term funding to work on a difficult problem and avoid bandwagons.

      “That’s for a bit later. I think it’s difficult. Many people were not thinking about clinical application when they made discoveries with huge impact on clinical medicine.”

      If aiming directly at impact isn’t what seems to work, what do you focus on instead?

      “Focus on what’s good basic biomedical science. Stem cell therapy is probably going to have a huge impact on medicine, but it stems from a very basic level. The people who started that work from the very beginning were not thinking about treating patients. I think a good strategy is work on basic science, but keep an eye out for applications that might arise. More people are doing that now.”

      Evaluating ability

      What kind of CV would make your face light up?

      “I’d look for a good first degree – first or a 2.1, though there are various means to get through with a 2.2 (note: a first or 2.1 is equivalent to a GPA of 3.2-4). You’d look at where they went. It’s quite a good indicator of academic excellence. Then I’d look at what they did in their PhD. Did they publish any papers? Bear in mind they can get put on a project that’s not top notch, and it’s not always their fault.”

      “Then you’d look at what their supervisor said about them, or their head of department. A lot of people look quite similar up to this point. They generally have firsts or 2.1s from Oxford, Cambridge, Imperial, Edinburgh or so on. They’ve done a PhD and got 2-3 publications. They might be second or third author on a Nature paper, but you’re not quite sure what they did. You get a lot of people like that. The supervisor would usually be someone I know, because they’re in my field. If the supervisor says ‘this is the best student I’ve ever seen’, you take that seriously…unless you’ve had six previous letters from that supervisor saying that same thing!”

      “You’d also be looking at what skills they had. Sometimes you might get someone who’s very good at maths or statistics – that’s really good. Maths is always useful. Or they might have had a project with a lot of imaging in it, because it’s something we want to get into.”

      Our previous interviewee said that programming was highly in demand, especially if combined with medicine. Do you agree?

      “Yes. General IT skills are valuable. Medicine and programming is a valuable combination. I can see that that would be very good for him (our previous interviewee) and quite a few people here. I’d be pretty keen, though I wouldn’t say it’s essential for someone coming to my lab.”

      Do you think there’s any skills shortages?

      “It’s hard to say. New skill requirements come with new technologies. For instance, mass spectroscopy and proteomics. Probably 10 years ago nobody doing immunology knew what mass spectroscopy was. Now it’s widely used. To get it going, you need experts coming in from that field. They were all sitting down in the chemistry department. It’s the same with imaging now. With some of these super resolution imaging methods, we’re examining single molecules on cell surfaces, or within cells.”

      Funding vs. talent constraints

      For the good person whose CV you just described, would you prefer their CV landing on your desk or an extra grant?

      “It’s not a simple choice. If they’re that good, they’ll probably get their own funding at some point. You can take them on without huge risk. I would always take the person.”

      How about if you could have half a million pound grant?

      “It’s hard to turn down half a million pounds. I wouldn’t know many groups who would. You could buy another machine or do another project that would be too expensive otherwise. It depends on how much money I’ve got there already. It’s fantastic to get good people though, no question.”

      Can good researchers always get funding?

      “Yes, reasonably easily. Everyone can get bad patches. It’s unusual to always be on top of everything. For instance, you can get a dip at the end of a line of work, while you’re getting ready to start something else. But on the whole they can.”

      Pandemics

      Does anyone know the probability that a new flu strain turns into something really bad?

      “We don’t really know, because we didn’t record most of the outbreaks before. Now we’re seeing transfer from birds into humans every couple of years. It’s mostly high mortality, but doesn’t transmit between humans.”

      “If an infectious strain developed, it could cause a lot of damage. It would take 6 months to develop a vaccine.”

      Has the time taken to produce a vaccine decreased?

      “Yes, on the molecular biology end. We can go from a throat swab to a virus sequence in a matter of hours, then engineer into a vaccine in a week or two. In the past this took more like three months. But then you have to go through all the regulatory stuff, the manufacturing, this massive scale up. You’ve got to do some safety testing, because this is going to be given to millions of people. 6 months is what it takes.”

      Could it be reduced even more?

“Just a couple of weeks. You really have to know that it generates the right antibody response, which takes 2-3 weeks per individual, and 2-3 months for a full study. For safety, you’ve got to give it to several thousand people to be sure there’s not a major problem.”

      If you wanted to use your career to reduce the risk of pandemics, what should you do?

“You could go into vaccine research. Do your PhD in immunology, bioinformatics, microbiology or virology, or get a background in public health. Virologists would say no vaccine has ever been developed by an immunologist, but there are a number of vaccine problems that definitely need immunology.

“You can also go into public health. In many countries HIV has been brought under control by public health, though you’ll never eradicate it without a vaccine. It’s also important for controlling flu outbreaks in the first 6 months. To take this route, do a diploma in public health. You don’t need to have done medical research, though you can also enter from medical research. I’ve had lots of PhD students who’ve gone into public health.”

      Job satisfaction and career progression

      What’s the worst thing about this career?

      “Overall, I think it’s a fantastic career. The downsides are that it’s not very secure. People can run into funding problems, especially if they’re not at the peak of things or a bit unlucky. It’s not particularly well paid. It has its ups and downs. The ups more than compensate, but when you have a string of bad results and grants rejected, it gets a bit depressing.”

“It can leave people a bit stranded mid-career. You start out well, but you don’t quite make it to the top. You’re on a 3-5 year contract. You find it doesn’t get renewed. You’re 45 and stranded.”

      What happens then?

“You can take jobs in lab administration, as a lab manager, or at research councils. It’s better to go into these careers at the start, but it’s an option. It’s difficult to go into medicine by this stage. People have, but it’s not ideal. Others just leave science, and go on to do other worthwhile things. One guy I know went and started a coffee chain and did really well!”

      How can someone early in their career evaluate their prospects?

“This career is hard to predict. You have someone just starting their PhD, and it’s quite hard to predict how it’s going to go for them. If they’ve done really well, they’ll probably succeed, but probably not quite as they imagine. And you may find that, although you’re first author on a Nature paper, you relied on having a great mentor. If they haven’t done really well, they can still succeed – I’ve seen it happen. More likely, they’ll have to do something else.”

      “You’ll need to bear in mind that if it’s not working out, you may need to think about alternatives. It’s better to make the decision at that point rather than in 10 years. You can help yourself a lot by going to a top lab. Then ask, are you swimming or sinking?”

      The post Interview with leading HIV vaccine researcher – Prof. Sir Andrew McMichael appeared first on 80,000 Hours.

      ]]>
      Which cause is most effective? https://80000hours.org/2014/01/which-cause-is-most-effective-300/ Tue, 21 Jan 2014 22:52:00 +0000 http://80000hours.org/2014/01/which-cause-is-most-effective-300/ In previous posts, we explained what causes are and presented a method for assessing them in terms of expected effectiveness.

      The post Which cause is most effective? appeared first on 80,000 Hours.

      ]]>
      Introduction

      In previous posts, we explained what causes are and presented a method for assessing them in terms of expected effectiveness.

      In this post, we apply this method to identify a list of causes that we think represent some particularly promising opportunities for having a social impact in your career (though there are many others we don’t cover!).

      We’d like to emphasise that these are just informed guesses, over which there’s disagreement. We don’t expect the results to be highly robust. However, you have to choose something to work on, so we think it’ll be useful to share our guesses to give you ideas and so we can get feedback on our reasoning – we’ve certainly had lots of requests to do so. In the future, we’d like more people to independently apply the methodology to a wider range of causes and do more research into the biggest uncertainties.

      The following is intended to be a list of some of the most effective causes in general to work on, based on broad human values. Which cause is most effective for an individual to work on also depends on what resources they have (money, skills, experience), their comparative advantages and how motivated they are. This list is just intended as a starting point, which needs to be combined with individual considerations. An individual’s list may differ due also to differences in values. After we present the list, we go over some of the key assumptions we made and how these assumptions affect the rankings.

      We intend to update the list significantly over time as more research is done into these issues. Fortunately, more and more cause prioritisation research is being done, so we’re optimistic our answers will become more solid over the next couple of years. This also means we think it’s highly important to stay flexible, build career capital, and keep your options open.

      Some further qualifications:

      • We’re not presenting our full reasoning in this post. That would take up too much space. Rather, we intend to write more about each individual cause as they arise in case studies.
      • The list is just some promising opportunities, and is not comprehensive. There are many causes we haven’t even been able to consider.
      • There is much variation within causes: an unpromising cause can contain a highly promising intervention and promising causes can contain useless interventions. We still think it’s useful to organise your career around a cause, but it’s important to remember that if an organisation supports a promising cause, it doesn’t guarantee the organisation is effective, and a cause being of low priority doesn’t rule out the organisation from being highly effective.
      • When we name organisations within each cause, do not take this as an endorsement of this organisation. If an organisation’s working on a high priority cause, we think that’s a point in the organisation’s favour, but the organisation can easily fail to be effective.
      • These causes are assessed from a global perspective. We don’t investigate which causes are most effective for helping your local community.
      • The ratings ‘2’, ‘3’, ‘4’, etc. are just meant as a relative assessment of the factor for this cause compared to the others. The numbers do not correspond to any scale.

      In the rest of this post we:

      1. Provide a summary list of high-priority causes
      2. Explain what each cause is and overview our reasons for including it
      3. Explain how key judgement calls alter the ranking
      4. Overview how we came up with the list and how we’ll take it forward
      5. Answer other common questions

      The List

This is the list produced by applying our cause framework to the most promising causes we currently know of. Do not read too much into the order of the list – it’s highly dependent on assumptions, which we’ll overview later.

‘5’ means ‘very high, relative to the others’, ‘3’ means ‘average, relative to the others’, and ‘1’ means ‘low, relative to the others’.

      Cause Importance Tractability Uncrowdedness
      Prioritisation research 5 4 5
      Promoting effective altruism 5 4 5
      Global catastrophic risks 4 2 5
      Research policy and infrastructure 3 3.5 3
      Ending factory farming 2 4 4
      Global health 2 5 3
      Improving decision making 3 3 3
      Immigration reform 3 2 4
      Geoengineering research 2 3 4
      Biomedical research 2 4 2
      Developing world economic empowerment 3 2 2
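
The post doesn’t say how, or whether, these three factor scores are combined into the overall ordering, and it stresses that the numbers are only relative. Purely as an illustration of how different judgement calls can reorder the list, here is a minimal Python sketch that assumes a simple weighted sum of the scores above. The equal weights and the alternative weighting are hypothetical – this is not 80,000 Hours’ method, and it won’t necessarily reproduce the ordering shown in the post.

```python
# Hypothetical sketch: combine the relative 1-5 scores with a weighted sum.
# The weights are an assumption made for illustration, not part of the post.

scores = {
    "Prioritisation research":               (5, 4, 5),
    "Promoting effective altruism":          (5, 4, 5),
    "Global catastrophic risks":             (4, 2, 5),
    "Research policy and infrastructure":    (3, 3.5, 3),
    "Ending factory farming":                (2, 4, 4),
    "Global health":                         (2, 5, 3),
    "Improving decision making":             (3, 3, 3),
    "Immigration reform":                    (3, 2, 4),
    "Geoengineering research":               (2, 3, 4),
    "Biomedical research":                   (2, 4, 2),
    "Developing world economic empowerment": (3, 2, 2),
}

def rank(weights=(1.0, 1.0, 1.0)):
    """Rank causes by a weighted sum of (importance, tractability, uncrowdedness)."""
    totals = {
        cause: sum(w * s for w, s in zip(weights, factors))
        for cause, factors in scores.items()
    }
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    print("Equal weights:")
    for cause, total in rank():
        print(f"  {total:5.1f}  {cause}")

    # A different judgement call, e.g. upweighting tractability, reorders the list.
    print("\nTractability upweighted (hypothetical judgement call):")
    for cause, total in rank(weights=(1.0, 2.0, 1.0)):
        print(f"  {total:5.1f}  {cause}")
```

Changing the weights here is a stand-in for the judgement calls the post discusses later.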

      Skip ahead to read more about each cause:

      1. Prioritisation research
      2. Promoting effective altruism
      3. Global catastrophic risks
      4. Research policy and infrastructure
      5. Ending factory farming
      6. Global health
      7. Improving decision making
      8. Immigration reform
      9. Geoengineering research
      10. Biomedical research
      11. Developing world economic empowerment

      Why these causes?

      Prioritisation Research

      What is this cause?
      Prioritisation research is activity aimed at working out which causes, interventions, organisations, policies, etc. do the most to make the world a better place. Organisations and projects within this cause include some policy think-tanks and some parts of economics. Within prioritisation research, we think the most high-priority area is long-run-focused cause-prioritisation. That is, research aimed at working out which causes do the most to make the world a better place in the long-run if we add more resources to them. Note that this research need not consist of detailed economic modelling. Cause-prioritisation can also involve down-to-earth projects like investigating room for more funding or aggregating expert opinion. Organisations within this sub-cause include the Copenhagen Consensus, GiveWell, the Future of Humanity Institute and the Centre for Effective Altruism (our parent charity).

      Why do we think it’s high-priority?

      (See our cause framework for more explanation of each factor)

      Important: 5
      Tractable: 4
      Uncrowded: 5

We think cause-prioritisation is a highly effective cause, because: (i) we think there are likely to be large differences in the effectiveness of different causes, (ii) many don’t have a good understanding of these differences, and (iii) without a better understanding, we are unlikely to take the best opportunities to do good. We also think working on this cause offers high value of information. Since there hasn’t been a large systematic attempt to evaluate causes before, even if the project turns out not to produce useful answers, it’ll still be highly useful to have ruled it out as a promising project.
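
As a rough illustration of the value-of-information argument, here is a small sketch with made-up numbers: it assumes two causes whose effectiveness differs by a factor of ten and a fixed budget, neither of which comes from the post.

```python
# Hypothetical numbers, purely to illustrate the value-of-information point above.
GOOD_PER_DOLLAR = [10, 1]   # assumed effectiveness of two candidate causes
BUDGET = 1_000_000          # assumed resources to be allocated

def value_without_research():
    """Without prioritisation research we can't tell the causes apart,
    so split the budget evenly between them."""
    return sum(0.5 * BUDGET * g for g in GOOD_PER_DOLLAR)

def value_with_research():
    """Research that identifies the better cause lets us direct everything there."""
    return BUDGET * max(GOOD_PER_DOLLAR)

print(value_without_research())  # 5,500,000 units of good
print(value_with_research())     # 10,000,000 units of good
```

On these assumptions, learning which cause is better is worth up to 4.5 million units of good, which is why large but unknown differences between causes make prioritisation research valuable even when any particular project might fail.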

      At the same time, we think cause-prioritisation is tractable and uncrowded. Little has been directly spent on this kind of project so far – there are only three major organisations working on the cause and their annual budgets are under US$2m – but there are hundreds of billions of dollars at stake in philanthropy and government aid spending. What has been spent so far, however, has led to significant progress, for instance GiveWell identifying global health as a promising area to look for effective donation opportunities, the Copenhagen Consensus’s promotion of micronutrient supplements, and the development of better methodologies for prioritisation (e.g. how to make use of cost-effectiveness estimates). Moreover, there are promising lines of future research, and organisations within the cause that are short of funding and human capital (see here, here, and here).

      One important weakness of this cause is that, as with many research programs, it can be difficult to tell when you’re making progress, which lowers tractability.

      We’d like to flag that there are reasons we may be biased. Our parent charity, CEA, supports cause-prioritisation. Moreover, the creation of this list is itself an exercise in cause-prioritisation, so you might expect us to rate it highly. On the other hand, our high rating of cause-prioritisation is not an accident. CEA and 80,000 Hours aim to work on the most high-potential causes in the world. It’s because we think cause-prioritisation is a high-priority cause that we’re working on it (our money is where our mouth is). So, although it’s true there’s potential for a conflict of interest, there’s a good reason we’re in this situation. The greater risk is in the future: we’re likely to be biased towards continuing to believe prioritisation research is high-priority because we’re already working on it (an instance of the sunk cost bias) despite new evidence potentially suggesting otherwise. We’ll attempt to guard against this risk.

      You can see much more detail on the case for cause prioritisation in this draft report. We’re planning a more thorough overview of what opportunities are available within the cause.

We also think that other types of prioritisation research are high-priority (e.g. charity evaluation), but they seem less important and more crowded than cause-prioritisation research.

      See all of our resources on prioritisation research

      Promoting effective altruism

      What is this cause?
      Promoting effective altruism means activities which expand the capabilities of those trying to do good in a cause-neutral, evidence-based and outcome-orientated way. Interventions within this area include advocacy of key ideas in effective altruism and network-building. Some organisations in this cause include GiveWell, the Centre for Effective Altruism (our parent charity), the Copenhagen Consensus, Leverage Research, Charity Science, the donation pledge organisations (Giving What We Can, The Life You Can Save, the Giving Pledge) and ourselves. More broadly, you could also include organisations with an effectiveness-minded approach, like the Gates Foundation and Evidence Action.

      How is it different from prioritisation research?
      Prioritisation research is working out which opportunities have the most impact, while promoting effective altruism is building capacity to act on this research. In practice, both need to be carried out at the same time, and many organisations engage in a mixture of both.

      Why do we think it’s high-priority?

      Important: 5
      Tractable: 4
      Uncrowded: 5

Promoting effective altruism is effective because it’s a flexible multiplier on the next most high-priority cause. It’s important because we expect the most high-priority areas to change a great deal, so it’s good to build up general capabilities to take the best opportunities as they are discovered. Moreover, in the recent past, investing in promoting effective altruism has resulted in significantly more resources being invested in the most high-priority areas than investing in them directly would have. For instance, for every US$1 invested in GiveWell and Giving What We Can, more than US$7 has been moved to high-priority interventions. We also think it’s highly important because it’s a brand new area with high potential, so we expect further work to have high value of information.

      Promoting effective altruism seems uncrowded, because it’s a new cause so there appear to be lots of good opportunities within it which haven’t been taken yet. It seems tractable because there are definite advocacy opportunities, which have worked in the past and whose success can be measured, e.g. encouraging people to take the GWWC pledge. More direct evidence for effectiveness comes from the strong success to date of many of the projects in the area, like GiveWell.

      One important weakness of this cause is that, as with most advocacy projects, it’s difficult to be confident that the interventions which have worked in the past will continue working into the future. This lowers tractability. There are also reasons effective altruism might fail to be a good project – see here for some ideas.

Again, there’s reason for us to be biased. 80,000 Hours is involved in promoting effective altruism, so it’s in our interests to say this is a high-priority cause. However, our prioritisation of promoting effective altruism is no accident. Our aim is to work on the most high-priority causes in the world. We set up 80,000 Hours precisely because we think promoting effective altruism is a high-priority cause, so it’s no surprise we rank it highly. The greater risk is in the future: we’re likely to be biased towards continuing to believe promoting effective altruism is high-priority because we’re already working on it (an instance of the sunk cost bias) despite new evidence potentially suggesting otherwise. We’ll attempt to guard against this risk.

      See all our resources on promoting effective altruism.

      Global catastrophic risks

      What is this cause?
Working on global catastrophic risks means identifying and mitigating low-probability but costly risks to society’s future. Examples of interventions in this category include setting up early warning systems for natural disasters, surveillance of pandemics, tracking asteroids (which has largely been completed for asteroids that threaten civilization, although not for comets), advocacy of non-proliferation of nuclear weapons, and research on other possible risks and methods for mitigating them. An unusual view we take seriously is that some of the most significant risks in this area will come from new technologies that may emerge this century, such as synthetic biology, distributed manufacturing, or artificial general intelligence (which we often call ‘unprecedented risks’). Organisations within this cause include various parts of government (e.g. DARPA, NASA), various think-tanks (e.g. those working on nuclear weapon risk), small parts of the insurance industry, the Global Catastrophic Risks Institute, the Nuclear Threat Initiative, the Skoll Foundation, the Sloan Foundation, the Machine Intelligence Research Institute, the Centre for the Study of Existential Risk at Cambridge and the Future of Humanity Institute at Oxford.

      Why do we think it’s high-priority?

      Important: 4
      Tractable: 2
      Uncrowded: 5

      Progress in this area has a clear link with better long-run outcomes for society – catastrophic risks cause huge damage and put society’s long-term future in danger. So we think this cause is highly important. Past lack of effort invested in many of these risks, and high uncertainty about how to mitigate them, also means we can expect high value of information from working on this cause. We don’t rate it as ‘5’, however, because we’re unsure of the importance of working on mitigating these risks directly, compared to working on other causes which also have the potential to benefit the long-run future, either through reducing catastrophic risks or through other means.

      This cause seems uncrowded because the risks considered relate to potential harm in the future, and there’s good reason to expect present society to undervalue the interests of future generations. It also seems that irrational biases discourage people from working in this area (e.g. undervaluing of small risks that haven’t occurred before).1 There’s more direct evidence for uncrowdedness in the fact that top foundations only spend 0.1% of their resources on these risks, which seems small relative to what’s at stake.

      The main weakness of this cause is tractability – there’s a huge amount of uncertainty about which interventions will effectively reduce these risks in the future. For instance, doing more research into the risks posed by synthetic biology could accidentally further the discovery of a dangerous application of synthetic biology. On the other hand, there are some interventions within this cause which seem relatively straightforward, like preparing better systems for managing disasters and performing better tracking of the people developing potentially dangerous technology. There have also been some good interventions within this cause in the past, such as asteroid tracking. See this rough analysis of the cost-effectiveness of asteroid tracking efforts to date, and GiveWell’s overview of asteroid tracking, which found it was promising but doesn’t currently have much room for more funding. Overall, however, due to lower tractability and probably lower importance, we currently prefer further prioritisation research and general capacity-building through promoting effective altruism.

      For more, see Global Catastrophic Risks by Nick Bostrom, Our Final Century by Lord Martin Rees, GiveWell’s shallow overviews of several sub-causes in this area, and conversation notes, and see all of our resources on this cause.

      Research policy and infrastructure

      What is this cause?
Research policy and infrastructure is activity aimed at increasing the extent to which scientific research benefits society. Interventions within this cause include: promoting systematic reviews (e.g. Cochrane Collaboration), campaigns to enforce pre-registration of trials, replication projects, improving methods within specific fields (e.g. promoting the use of randomised controlled trials in development studies, as pursued by J-PAL), and developing platforms to promote open science (promoting new ways to produce, share and evaluate scientific research, advanced by organisations like Academia.edu, Mendeley, Digital Science, and the Open Science Foundation). The Arnold Foundation is a large supporter of this cause.

      Why do we think it’s high-priority?

      Important: 3
      Tractable: 3.5
      Uncrowded: 3

      Scientific research is hugely important. It has driven much of our improvement in living standards in recent history, and is a major driver of long-term productivity (a couple of examples among many: smartphones, advances in HIV treatment, improvements in crop yields). However, it seems there could be considerable room to make some areas of science more efficient (e.g. see Bad Pharma by Ben Goldacre for problems with medical research, and see GiveWell’s overview of some of the problems in this field). The cause seems tractable because a variety of concrete proposals for improving the effectiveness of science are on the table. It also seems moderately uncrowded. We can expect basic scientific research in general not to receive enough investment because it’s difficult to capture the benefits for oneself, which will mean it’s undersupplied by the market. In addition, the benefits mostly accrue in the long-term future, so present people are under-incentivised to invest in it. Within this, research policy and infrastructure seem particularly neglected because most of the key players in scientific research do not seem incentivised or well-placed to promote it. It’s not part of traditional academic research, but it requires more scientific expertise than policy makers or businesses can easily provide.

      On the other hand, this cause seems more crowded than those above. GiveWell’s major investigation into open science (which they initially thought was the most promising sub-cause for donors) showed that there were major attempts by for-profit businesses to solve the problems. There has also been more progress to date, which makes interventions at the margin seem less tractable than the other causes.

      See all our resources on research policy and infrastructure.

      Ending factory farming

      What is this cause?
      Ending factory farming is activity aimed at stopping the practice of animals being raised in suffering for food. Broadly, the interventions in this area are advocacy aimed at reducing how much factory farming takes place, or research aimed at determining the most effective methods of advocacy, developing meat substitutes or higher-welfare farming methods. Some organisations in this cause include Animal Charity Evaluators, the Humane League, Vegan Outreach, Beyond Meat and New Harvest.

      Why do we think it’s high-priority?

      Important: 2
      Tractable: 4
      Uncrowded: 4

Ending factory farming is important because animals suffer in huge numbers in factory farms. Around 60 billion animals are raised for food each year, the majority of which are in factory farms. Moreover, the cause seems tractable and uncrowded because there is some evidence that advocacy campaigns aimed at encouraging vegetarianism reduce the number of factory-farmed animals at a very low cost. Developing meat substitutes could also provide a high-leverage way to reduce consumption of factory-farmed meat. Further, we can expect the cause to be undervalued because the interests of animals are not well-represented by our economic or political system. Even within charitable giving, less than three percent of US donations in 2011 went to the “environment/animals” sector, which includes zoos, aquariums, and programs for “outdoor survival and beautification of open spaces”. Only a tiny portion of that went to animal charities, and within animal charities, the majority of attention is given to animal shelters rather than factory farming. We can also expect further work to have high value of information, because it seems relatively little is known about the most promising efforts within this cause. Finally, we take it seriously because a significant number of effective altruists who have thought about how to do the most good rate it as the most high-priority cause.

If you attach importance to the long-run future, then the main disadvantage of acting against factory farming is that there doesn’t seem to be much reason to expect that reducing factory farming is among the best ways to contribute to a generally flourishing future in the long term. Reducing factory farming can produce some long-term flow-through effects by reducing crop prices and carbon emissions and by promoting anti-speciesism. Reducing crop prices and carbon emissions are reasons the Gates Foundation supports research into in vitro meat as a way to further global development. Nevertheless, if you want to promote these outcomes, it seems unlikely to us that working to reduce factory farming is the most effective way to go about it. We also don’t see strong reason to think that reducing crop prices and carbon emissions are among the best ways to promote general global development. For instance, Giving What We Can recently concluded that it’s currently likely to be significantly more effective to work on global health than on reducing carbon emissions. That said, there hasn’t been significant research into these issues, so we could easily change our mind.

      For more, see GiveWell’s overview of the cause and all our other resources on this cause.

      Global health

      What is this cause?
      Global health is activity to reduce the incidence of illness globally, and particularly in the developing world. We can improve global health through several broad avenues, including biomedical research (which is also treated separately below), improving public health, and promoting international aid. Some of the more promising projects within global health include expanding the availability of insecticide-treated bed-nets, deworming, developing vaccines for HIV and neglected tropical diseases, further cost-effectiveness research, and increasing the cost-effectiveness of existing aid and philanthropy. There are many organisations within this cause, including the World Bank, World Health Organisation, the Gates Foundation, a large number of medical research bodies, and all governments.

      Why do we think it’s high-priority?

      Important: 2
      Tractable: 5
      Uncrowded: 3

      Global health is important because health really matters to our wellbeing and productivity, yet millions of people suffer from ill health. The best thing about global health as a cause is that plenty of highly tractable interventions exist that could easily be expanded if more resources were added to the cause. For instance, interventions like insecticide-treated bed-nets have been shown by multiple randomised controlled trials to significantly reduce the burden of malaria, for very low costs (several thousand dollars per life saved). Most of the Copenhagen Consensus 2012’s top-ranked interventions were within global health (e.g. the top three interventions were micronutrients to school children, subsidy for malaria combination treatment, and expanded childhood immunisation coverage). An additional benefit is that the impact of health interventions is relatively easy to quantify, which makes it easier to select the best programs and learn from failure.

      If you attach importance to the long-run future, then the weakness of global health is that although we’re highly confident that the short-term impact is highly positive, we know very little about whether improving global health is a particularly promising way to develop a flourishing society in the long-run. Moreover, because it’s relatively well-explored, we don’t expect additional work to have particularly high value of information. Both of these problems reduce the importance score. The cause also receives significant attention from major strategic actors (like the Gates Foundation), and although we’re confident there are good opportunities for donors in the cause, we’re less sure how talent-constrained it is (particularly because it’s a fairly popular cause), so we think it does less well on crowdedness.

      For more, see GiveWell’s arguments in favour of global health as the top cause for donors. Also see GiveWell’s conversations within this cause, and their overviews of some relevant interventions. See all our resources on global health.

      Improving decision-making

      What is this cause?
      Improving decision-making means improving our ability to form accurate beliefs about the world and act on this information to achieve our goals. This is a broad cause, including a growing research program aimed at improving forecasting, for instance Philip Tetlock’s Good Judgement Project, studies of expert judgement in psychology and behavioural economics (see Thinking Fast and Slow by Kahneman for an overview), prediction markets (e.g. as promoted by Robin Hanson), and efforts to develop rationality training, as advanced by the Center for Applied Rationality.

      Why do we think it’s high-priority?

      Important: 3
      Tractable: 3
      Uncrowded: 3

      Improving decision-making is rated highly as a cause by many in the effective altruism community, including those at the Center for Applied Rationality. It’s an important cause because if we improve our general abilities to achieve our goals, then we can expect the world to be made a better place without knowing the details of what’s going to happen in the future. In particular, improved decision-making could increase society’s ability to deal with a variety of important global challenges (including global catastrophic risks). We don’t rate it as ‘4’ however, because we haven’t seen strong evidence to show that it’s more important than other types of general empowerment or more direct approaches like working directly on understanding catastrophic risks.

The cause seems uncrowded, at least in some parts. For instance, the Center for Applied Rationality is the only organisation we know to be working on rationality training. On the other hand, there are major research programs in psychology and economics working on some issues within this cause, so we don’t think it’s highly uncrowded. We also haven’t been presented with evidence that there’s a particularly pressing need for more resources within this cause.

      We rate tractability ‘3’, because although some approaches to improving decision-making have been identified, we haven’t seen much evidence to suggest they’ll be effective to implement in practice, or that they would have a large impact on our lives.

      See all our resources on improving decision making.

      Immigration reform

      What is this cause?
      Immigration reform is advocacy of loosening immigration restrictions in rich countries with stronger political institutions, especially for people who are migrating from poor countries with weaker political institutions. It also includes research aimed at analysing how to effectively implement immigration reform, which seems particularly high-priority given the uncertainties and fears around the potential harmful side effects of increased immigration. Some organisations in this cause include the Center for Global Development, Fwd.us and the Krieble Foundation.

      Why do we think it’s high-priority?

      Important: 3
      Tractable: 2
      Uncrowded: 4

      This cause is important because individual workers in poor countries could produce things of much greater economic value and better realise their potential in other ways if they lived in rich countries, meaning that much of the world’s human capital is being severely underutilised. This claim is unusually well-supported, by basic economic theory and the views of a large majority of economists. Immigration reform has the potential to yield a massive reduction in global poverty. For instance, remittances from migrants to their home countries are already twice as large as international aid, and this could be increased several fold. The cause seems uncrowded. Only four of the top 100 US foundations focus on it. It’s avoided by politicians because their constituents will not be the beneficiaries of the cause’s reforms – they’ll instead be members of the global poor. We don’t rate this cause more highly because many concerns have been raised around the political feasibility and social consequences of migration, which means that although increased migration is likely very beneficial in principle, we’re not sure which real interventions would have large positive effects.

      For more, see GiveWell’s shallow overviews within the cause: here and here, and see all our resources on this cause.

      Geoengineering research

      What is this cause?
Geoengineering research is activity aimed at working out whether there are safe, effective ways to artificially alter the climate in order to prevent dangerous climate change. The main funding comes from governments and the Gates Foundation, and the research is carried out within academic climate science.

      Why do we think it’s high-priority?

      Important: 2
      Tractable: 3
      Uncrowded: 4

      Geoengineering research could be important because geoengineering may be a very cheap way to prevent dangerous climate change, but we’re highly uncertain, so more research would have high value of information. At the same time, this cause seems highly neglected. Geoengineering overall only receives about 0.1% of total spending on climate research.2 Solar geoengineering only receives around US$10m in funding per year – small relative to its potential importance – so it seems reasonable to expect further research to reduce our uncertainty about the benefits of geoengineering. The Copenhagen Consensus has made a rough benefit-cost estimate, showing high cost-effectiveness, and it has been a high-priority area for GiveWell Labs to investigate.

      Note that whether geoengineering is a good idea is highly controversial among climate scientists. What we recommend in this cause is only further research into the benefits and costs of geoengineering. Some experts, however, even caution against further geoengineering research, since it may increase the chances that geoengineering is used inappropriately, so we’ve reduced the importance score. In addition, this research may prove to be highly intractable, due to the difficulties of modelling the climate.

      See all our resources on geoengineering.

      Biomedical research

      What is this cause?
      Biomedical research is research aimed at developing ways to improve health through current scientific means. The main types of interventions in this area include doing research, supporting research, and advocating better or more effective government spending. The main organisations in this cause are governments, universities, foundations like the Wellcome Trust, and the pharmaceutical industry.

      Why do we think it’s high-priority?

      Important: 2.5
      Tractable: 4
      Uncrowded: 2

Biomedical research is important because health is a highly important part of wellbeing, and there’s a great deal we could do to improve our health. Biomedical research can have both considerable short-run effects by combating disease and positive long-run effects by helping to build the store of scientific knowledge. In addition, biomedical research includes work on some potentially transformative developments, including synthetic biology (making it possible to make designer viruses), ending ageing, and embryo selection. However, we don’t rate importance more highly, because society has already invested a great deal in biomedical research, and the space for improvement seems smaller than for other causes. We also rate it lower on value of information for the same reason.

      It’s tractable because there’s a large number of existing promising research programs that people can contribute to, which we expect to lead to more progress. It has also had a strong track record of success, yielding some of the most important advances in living standards over the last one hundred years. GiveWell performed a literature survey on the economic returns to biomedical research, finding some evidence for very high returns.

Biomedical research receives a lot of attention and considerable support from government, industry and philanthropy, so it’s not uncrowded. However, we might still expect that it doesn’t receive enough investment relative to its importance. Why not? First, as with any type of innovation, it’s difficult to capture the benefits of research, which is reason to expect it to be undersupplied by the market, especially for more fundamental and less applied research. Second, the payoffs from biomedical research arrive decades in the future, or even later, which means that present society is likely to be under-incentivised to invest in it. Third, many of the benefits (at least for some diseases) will primarily accrue to the global poor, so the financial incentives are lower than they should be. Within biomedical research, there are likely to be some neglected opportunities. For instance, we’ve come across some intuitively plausible arguments that anti-ageing research has particularly high potential, though we haven’t vetted these claims.

      As further evidence, we’ve seen benefit-cost analysis produced by the Copenhagen Consensus to suggest that various types of biomedical research are highly cost-effective in expectation (e.g. HIV vaccine research), and it has been a significant research priority for GiveWell Labs.

      For more, see all our resources on biomedical research.

Developing world economic empowerment

      What is this cause?
      Developing world economic empowerment is activity aimed at increasing the economic power and wealth of the global poor. It includes a wide range of activities, including efforts to increase crop yields, providing financial services to the global poor, cash transfers, providing training, increasing the ease of doing business, and making investments aimed at increasing economic output. There’s a huge number of organisations working for this cause, including the Gates Foundation, World Bank, Give Directly, and many major charities (e.g. Oxfam).

      Why do we think it’s high-priority?

      Important: 3
      Tractable: 2
      Uncrowded: 2

      This cause is important because 2.5 billion people live on less than US$2 a day. These people lack many of the basic necessities of life, including food, water, shelter and sanitation.

      The problem with this cause is that many of the interventions within it don’t have a track record of being highly effective. For instance, despite billions of dollars of investment, there isn’t much convincing evidence that microfinance (see GiveWell’s overview, or our own) has had an outsized economic impact. This form of aid has received some of the most criticism from economists, including William Easterly and Dambisa Moyo, who have argued that some economic aid has caused significantly more harm than good. It also strikes us as crowded, because it receives a large degree of attention from existing aid programs and NGOs. Overall, we think global health may be a more promising area if your aim is empowerment of the global poor, because there’s a wide variety of health interventions with good evidence for both significant health benefits and economic benefits.

      On the other hand, there are some promising interventions within this cause. Give Directly is highly rated by GiveWell – since we are so much richer than the global poor, even simple cash transfers can yield significant benefits, with good evidence. The Copenhagen Consensus rates research into increasing crop yields highly, and the Gates Foundation supports a variety of interventions within this cause.

      For more, see all our resources on developing world economic empowerment.

      What are the most important judgement calls we made in constructing this list?

      There are many difficult judgement calls behind our application of this methodology, which many people will disagree over. It’s tempting to think that the existence of these individual differences makes the entire project to generally prioritise causes a waste of time. We don’t think this is true. In practice, we think there is enough overlap in values that there’s a lot we can say about causes in general, especially for people aiming broadly to have a positive impact with a global perspective.

      This list has been constructed with the assumption that what’s ultimately valuable is something like human welfare, today and in the long-run, and that all people are equally valuable. For more, see our cause framework. If you primarily only value your local community, friends and family, then this list isn’t going to be of much use. However, if you care about the global perspective to some degree, then we suspect it will be useful.

      On the other hand, within this broad perspective there are still many difficult judgement calls that members of 80,000 Hours disagree about, which are important in determining the rankings. Our approach with these is to clearly flag them so that you can make up your own mind. We may also expand on certain specific topics if there’s enough demand. In the following section, we explore some of these judgement calls and provide alternative lists based on these different assumptions.

      Moral judgements

      The question of which causes are most important is in part a moral question. For instance, you’ll only think that the existence of factory farming is a problem if you think animal suffering is morally relevant (unless you’re pursuing it to reduce carbon emissions and crop prices). Other moral issues can also arise. For instance, you might think a cause could help more people have flourishing lives, but that we don’t have an immediate moral reason to do anything about it (e.g. actions aimed at helping people who don’t exist yet).

      In this list, one key judgement call is the relative importance of helping future people. We think future people deserve moral consideration, which means that the impact of our actions on the future is highly important. If someone thinks future generations deserve less consideration, they may favour global health, immigration reform and ending factory farming. The list might start with: promoting effective altruism, ending factory farming, global health, prioritisation research and immigration reform.

      There are many other moral judgements that may alter our list. For instance, if someone places significant moral weight on justice as an end in itself, then they may want to focus on improving global governance. Unfortunately, these kinds of trade-offs are highly unexplored.

      Judgement calls about how the world is

      The relative importance of the global catastrophic risks cause depends on how likely you think these events are, and how much we can do about them today or in the usefully near future. For each risk, it seems that experts often disagree about their likelihood and how best to mitigate them.

Another major judgement call over which some people disagree is whether we can expect the future to be ‘good’ or ‘bad’ overall. If someone thinks this is highly uncertain, or that the future is likely to be bad, then they’ll be less concerned about some catastrophic risks, and will instead likely want to focus on ways to make the future better. This person’s list might start with: prioritisation research, promoting effective altruism, ending factory farming, improving decision-making, and research policy and infrastructure.

      Relatedly, if someone thinks it may not be good to speed up technological or economic progress, then they may place less importance on global health, immigration reform, biomedical research and some research policy and infrastructure. Their list might instead start with: prioritisation research, promoting effective altruism, global catastrophic risks, ending factory farming, improving decision-making, geoengineering research and research policy and infrastructure.

      Each individual cause involves many assumptions about how the world is. For instance, behind prioritisation research lies the assumption that causes vary significantly in effectiveness, and that it’s possible to make progress in working out which are best. Behind promoting effective altruism lies the assumption that effective advocacy methods will continue to exist, and that a more effectiveness-minded approach is significantly better than what already exists.

      Judgement calls about what we can know, based on the evidence

      Some people differ over how generally sceptical to be about finding particularly good interventions. You can present two people – who believe malaria nets are equally effective – with some new data supporting the effectiveness of malaria nets, and they can arrive at different conclusions. Someone who’s more sceptical about some interventions being highly effective would be relatively unmoved. GiveWell have generally occupied this position. Someone relatively sceptical may decrease the tractability scores of most causes, except global health and some interventions within promoting effective altruism. Their list might start: promoting effective altruism, prioritisation research (still highly ranked due to importance and uncrowdedness), global health, immigration reform and ending factory farming.

      A related issue is how much to trust common sense. If you think that generally people have developed somewhat effective ways to achieve their ends, then you’ll place higher weight on common sense. If on the other hand you think that common sense is often badly wrong and easy to beat without much extra research, then you’ll put less weight on it. We put moderate weight on common sense. Someone putting higher weight on it might start their list: global health, promoting effective altruism, ending factory farming, biomedical research, prioritisation research and developing world economic empowerment. Someone putting lower weight on it may start their list: prioritisation research, promoting effective altruism, global catastrophic risks, research policy and infrastructure, ending factory farming, improving decision-making and immigration reform.

      Judgement calls about risk-aversion

Some people want to do good with a high level of certainty. We think, however, that high uncertainty doesn’t matter in itself. We believe that in principle, what you should do is weight the good each intervention would do by its probability of success (i.e. calculate its expected value). If one intervention yields 10 units of good with a probability of 10%, that’s just as good as an intervention which yields one unit of good with certainty. If you’re risk-averse about doing good, however, you’ll prefer the intervention that does good with certainty.
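
To make the expected-value comparison concrete, here is a minimal Python sketch. The square-root utility function used to represent risk aversion is just one hypothetical choice for illustration, not something the post proposes.

```python
# Illustrative sketch of the expected-value comparison above.

def expected_value(prob_success: float, good_if_success: float) -> float:
    """Expected units of good = probability of success x good done if it succeeds."""
    return prob_success * good_if_success

risky = expected_value(0.10, 10)    # 10% chance of 10 units of good
certain = expected_value(1.00, 1)   # one unit of good with certainty
print(risky, certain)               # both 1.0, so a risk-neutral evaluator is indifferent

def risk_averse_value(prob_success: float, good_if_success: float) -> float:
    """A risk-averse evaluator applies a concave utility (here: square root) first."""
    return prob_success * good_if_success ** 0.5

print(risk_averse_value(0.10, 10))  # ~0.32 -- the risky option now looks worse
print(risk_averse_value(1.00, 1))   # 1.0  -- the certain option is preferred
```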

      Someone more risk-averse than us, however, may disfavour global catastrophic risks, research causes and advocacy causes. Their list may start: promoting effective altruism, prioritisation research, global health, ending factory farming, and research policy and infrastructure.

      Which causes are most robust under uncertainty?

      In these alternative lists with different key judgement calls, the top couple of causes in the original list still score highly. That’s because just penalising them in one dimension, like importance, isn’t enough to make them unpromising overall, since they may still be high in the other dimensions. This gives us added confidence in the methodology, and is evidence for our earlier claim that generally prioritising causes is a useful project, despite differences in individual assumptions and values.

      If someone differed from us on multiple judgement calls, however, then their list might become substantially different. For instance, someone who’s sceptical, less concerned with the long-run future, and places high weight on common sense may start their list: global health, biomedical research, developing world economic empowerment, promoting effective altruism and ending factory farming.

      Our research process

      The framework and general process for assessing causes used is explained here.

      We applied this framework by:
      1. Gathering promising causes from leaders in prioritisation research (especially GiveWell, the CEA and the Copenhagen Consensus), our general knowledge and major strategic foundations, like the Gates Foundation.
      2. Intuitively applying the framework to these causes, with our basic reasoning for each cause explained above.
      3. Discussing causes heavily with people from the CEA Strategy Research team, who in turn are in touch with the Future of Humanity Institute and other groups within effective altruism.

      We should emphasise that this list involves many potentially controversial judgement calls, and we expect it to change significantly as more evidence comes in.

      We plan to take this research forward by:
      1. Continuing to update the list based on new findings from the CEA’s new prioritisation research team, GiveWell Labs, the Copenhagen Consensus and other new prioritisation research groups.
      2. Further explaining and deepening our research into specific causes as they arise during case studies.

      Some extra causes we considered including are: boosting technological progress, especially R&D to increase crop yields and green energy R&D (highly rated by the Copenhagen Consensus), basic science (which involves a substantial market failure), some sub-causes within education and governance (highly weighted by common sense, with potentially good long-run effects), political innovation, and trade reform (which, similarly to immigration reform, has the potential to have a huge positive impact on the global poor, and is highly rated by the Copenhagen Consensus).

      Other questions

      Why is this different from GiveWell’s list of top charities?

There are several reasons. First, GiveWell primarily aims to find the best funding opportunities, so it has a different perspective to us. The best funding opportunities are very relevant to people pursuing earning-to-give, but we expect them to be different from the best opportunities to deploy your human capital. Malaria nets are a good funding opportunity precisely because they require relatively little additional skilled human capital. All that’s mainly required is for more nets to be made and shipped, which by this stage can easily be accomplished with additional money. We’re much less sure that working on the malaria nets intervention is the best thing to do with your human capital.

We’re aiming to find the best causes for you to generally work within for at least the next couple of years. Our focus is broader than finding nonprofits with funding gaps. For instance, there are many promising activities within research and government, and we don’t think it’s obvious that you shouldn’t work on them rather than find a nonprofit to support. We’re also focusing on a smaller scale than GiveWell, which aims to find charities with room for at least US$1m more funding, whereas some of the causes on this list would struggle to absorb that much funding at comparable rates of return.

Second, we think that GiveWell is more effective than their top recommended charities (reasons in their own words), but they don’t recommend themselves, in order to preserve impartiality. GiveWell is an example of prioritisation research and promoting effective altruism, and we think the effectiveness of GiveWell, among other organisations, is evidence that these causes are in general more promising than global health (at least on the scale of investing less than US$1m).

      Third, we think we differ somewhat from GiveWell in our framework and key judgement calls. In particular, we think GiveWell might underweight the importance of long-run flow-through effects, animal welfare and value of information, while placing higher weight on common sense, and being more sceptical about the ease of finding unusually good interventions.

      Don’t the answers depend on the person?

      People can easily disagree over which cause is most generally effective, due to disagreements over key judgement calls – see our examples above.

      It’s also important to clarify that “which cause is it best for you to support” is a separate question. That’s because different people have different types of human capital and other resources to contribute. Since some causes are more in need of some types of human capital than others, different people should support different causes.

      Different people also have different comparative advantages and levels of motivation, which can also alter their choice of cause.

      Consider using our list as a starting point, combined with other particularly promising opportunities you know of. Think through how you might differ over key judgement calls, and then work out where you can make the biggest contribution within that list.

      We plan to write more about which causes require different types of skills as the issue comes up in future case studies.

      How do you judge effectiveness?

      This post explains our framework and process for assessing causes, in terms of effectiveness.

      Is there a cause you think is good that hasn’t been included? Post a write-up of it below!


      Thank you to Carl Shulman, Jonah Sinick and Nick Beckstead for comments, though they may not endorse all of the claims made.


      Notes and References


      1. Though there are also biases which cause overinvestment in some cases, e.g. investment in protection against terrorist attacks. 
      2. A written testimony to the House Committee on Science and Technology Hearing by Phil Rasch (2010). 

      The post Which cause is most effective? appeared first on 80,000 Hours.

      ]]>