Top-recommended careers (Topic archive) - 80,000 Hours
https://80000hours.org/topic/careers/top-recommended-careers/

Software and tech skills
https://80000hours.org/skills/software-tech/

In a nutshell:

You can start building software and tech skills by trying out learning to code, and then doing some programming projects before applying for jobs. You can apply (as well as continue to develop) your software and tech skills by specialising in a related area, such as technical AI safety research, software engineering, or information security. You can also earn to give, and this in-demand skill set has great backup options.

Key facts on fit

There’s no single profile for being great at software and tech skills. It’s particularly cheap and easy to try out programming (which is a core part of this skill set) via classes online or in school, so we’d suggest doing that. But if you’re someone who enjoys thinking systematically, building things, or has good quantitative skills, those are all good signs.

Why are software and tech skills valuable?

By “software and tech” skills we basically mean what your grandma would call “being good at computers.”

When investigating the world’s most pressing problems, we’ve found that in many cases there are software-related bottlenecks.

For example, machine learning (ML) engineering is a core skill needed to contribute to AI safety technical research. Experts in information security are crucial to reducing the risks of engineered pandemics, as well as other risks. And software engineers are often needed by nonprofits, whether they’re working on reducing poverty or mitigating the risks of climate change.

Also, having skills in this area means you’ll likely be highly paid, offering excellent options to earn to give.

Moreover, basic programming skills can be extremely useful whatever you end up doing. You’ll find ways to automate tasks or analyse data throughout your career.

What does a career using software and tech skills involve?

A career using these skills typically involves three steps:

  1. Learn to code with a university course or self-study and then find positions where you can get great mentorship. (Read more about how to get started.)
  2. Optionally, specialise in a particular area, for example, by building skills in machine learning or information security.
  3. Apply your skills to helping solve a pressing global problem. (Read more about how to have an impact with software and tech.)

There’s no general answer about when to switch from a focus on learning to a focus on impact. Once you have some basic programming skills, you should look for positions that both further improve your skills and have an impact, and then decide based on which specific opportunities seem best at the time.

Software and tech skills can also be helpful in other, less directly related career paths, like being an expert in AI hardware (for which you’ll also need a specialist knowledge skill set) or founding a tech startup (for which you’ll also need an organisation-building skill set). Being good with computers is also often part of the skills required for quantitative trading.

Programming also tends to come in handy in a wide variety of situations and jobs; there will be other great career paths that will use these skills that we haven’t written about.

How to evaluate your fit

How to predict your fit in advance

Some indications you’ll be a great fit include:

  • The ability to break down problems into logical parts and generate and test hypotheses
  • Willingness to try out many different solutions
  • High attention to detail
  • Broadly good quantitative skills

The best way to gauge your fit is just to try out programming.

It seems likely that the best software engineers are significantly more productive than average — and we’d guess this is also true for other technical roles using software. In particular, the very best software engineers are often people who spend huge amounts of time practicing. This means that if you enjoy coding enough to want to do it both as a job and in your spare time, you are likely to be a good fit.

How to tell if you’re on track

If you’re at university or in a bootcamp, it’s especially easy to tell if you’re on track. Good signs are that you’re succeeding at your assigned projects or getting good marks. An especially good sign is that you’re progressing faster than many of your peers.

In general, a great indicator of your success is that the people you work with most closely are enthusiastic about you and your work, especially if those people are themselves impressive!

If you’re building these skills at an organisation, signs you’re on track might include:

  • You get job offers at organisations you’d like to work for.
  • You’re promoted within your first two years.
  • You receive excellent performance reviews.
  • You’re asked to take on progressively more responsibility over time.
  • After some time, you’re becoming the person on your team people look to when they need problems solved, and people want you to teach them how to do things.
  • You’re building things that others are able to use successfully without your input.
  • Your manager / colleagues suggest you might take on more senior roles in the future.
  • You ask your superiors for their honest assessment of your fit and they are positive (e.g. they tell you you’re in the top 10% of people they can imagine doing your role).

How to get started building software and tech skills

Independently learning to code

As a complete beginner, you can write a Python program in less than 20 minutes that reminds you to take a break every two hours.
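
For instance, here’s a minimal sketch of what such a program might look like (it just prints to the terminal; a real version might use desktop notifications):

```python
# A minimal break reminder: wait two hours, print a message, repeat.
import time

BREAK_INTERVAL_SECONDS = 2 * 60 * 60  # two hours

while True:
    time.sleep(BREAK_INTERVAL_SECONDS)
    print("Time to take a break!")
```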

A great way to learn the very basics is by working through a free beginner course like Automate the Boring Stuff with Python by Al Sweigart.

Once you know the fundamentals, you could try taking an intro to computer science or intro to programming course. If you’re not at university, there are plenty of courses available online.

Don’t be discouraged if your code doesn’t work the first time — that’s what normally happens when people code!

A great next step is to try out doing a project with other people. This lets you test out writing programs in a team and working with larger codebases. It’s easy to come up with programming projects to do with friends — you can see some examples here.

Once you have some more experience, contributing to open-source projects in particular lets you work with very large existing codebases.

Attending a coding bootcamp

We’ve advised many people who managed to get junior software engineer jobs in less than a year by going to a bootcamp.

Coding bootcamps are focused on taking people with little knowledge of programming to as highly paid a job as possible within a couple of months. This is a great entry route if you don’t already have much background, though some claim the long-term prospects are not as good as studying at university (or studying independently in a particularly thorough way), because you’ll lack a deep understanding of computer science. Course Report is a great guide to choosing a bootcamp. Be careful to avoid low-quality bootcamps. To find out more, read our interview with an App Academy instructor.

Studying at university

Studying computer science at university (or another subject involving lots of programming) is a great option because it allows you to learn to code in an especially structured way while the opportunity cost of your time is lower.

It will also give you a better theoretical understanding of computing than a bootcamp (which can be useful for getting the most highly-paid and intellectually interesting jobs), a good network, some prestige, and a better understanding of lower-level languages like C. Having a computer science degree also makes it easier to get a US work visa if you’re not from the US.

Doing internships

If you can find internships, ideally at the sorts of organisations you might want to work for to build your skills (like big tech companies or startups), you’ll gain practical experience and the key skills you wouldn’t otherwise pick up from academic degrees (e.g. using version control systems and powerful text editors). Take a look at our list of companies with software and machine learning internships.

AI-assisted coding

As you’re getting started, it’s probably worth thinking about how developments in AI are going to affect programming in the future — and getting used to AI-assisted coding.

We’d recommend trying out GitHub Copilot, which writes code for you based on your comments. Cursor is a popular AI-assisted code editor based on VSCode.

You can also just ask AI chat assistants for help. ChatGPT is particularly helpful (although only if you use the paid version).

We think it’s reasonably likely that many software and tech jobs in the future will be heavily based on using tools like these.

Building a specialty

Depending on how you’re going to use software and tech skills, it may be useful to build up your skills in a particular area. Here’s how to get started in a few relevant areas:

Machine learning

If you’re currently at university, it’s worth checking if you can take an ML course (even if you’re not majoring in computer science).

But if that’s not possible, there are plenty of online resources you can use to self-study the basics.

PyTorch is a very common package used for implementing neural networks, and probably worth learning! When I was first learning about ML, my first neural network was a 3-layer convolutional neural network with L2 regularisation classifying characters from the MNIST database. This is a pretty common first challenge and a good way to learn PyTorch.
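
As a rough illustration, a small network along those lines might look like the following in PyTorch (a sketch only: layer sizes are arbitrary choices, and the data loading and training loop are omitted):

```python
# A minimal sketch of a small convolutional network for 28x28 MNIST images.
# L2 regularisation is applied via the optimizer's weight_decay parameter.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 7x7
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 10),                    # 10 digit classes
        )

    def forward(self, x):
        return self.net(x)

model = SmallCNN()
# weight_decay adds an L2 penalty on the weights during optimisation
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)
```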

You may also need to learn some maths.

The maths of deep learning relies heavily on calculus and linear algebra, and statistics can be useful too — although generally learning the maths is much less important than programming and basic, practical ML.

Again, if you’re still at university we’d generally recommend studying a quantitative degree (like maths, computer science, or engineering), most of which will cover all three areas pretty well.

If you want to actually get good at maths, you have to be solving problems. So, generally, the most useful thing that textbooks and online courses provide isn’t their explanations — it’s a set of exercises to try to solve in order, with some help if you get stuck.

If you want to self-study (especially if you don’t have a quantitative degree), look for resources that cover all these areas, like Imperial College’s Mathematics for Machine Learning.

Information security

Most people get started in information security by studying computer science (or similar) at a university, and taking some cybersecurity courses — although this is by no means necessary to be successful.

You can get an introduction through the Google Foundations of Cybersecurity course. The full Google Cybersecurity Professional Certificate series is also worth watching to learn more on relevant technical topics.

For more, take a look at how to try out and get started in information security.

Data science

Data science combines programming with statistics.
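
If you want a feel for the flavour of the work, here’s a tiny, self-contained sketch using pandas, a common data analysis library (the dataset here is made up):

```python
# A tiny illustration of an everyday data science task: load a (made-up)
# dataset, then summarise it with basic statistics.
import pandas as pd

df = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "revenue": [120, 95, 140, 110],
})

print(df.describe())                           # count, mean, std, quartiles
print(df.groupby("region")["revenue"].mean())  # mean revenue per region
```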

One way to get started is by doing a bootcamp. Data science bootcamps are a similar deal to programming bootcamps, although they tend to mainly recruit science PhDs. If you’ve just done a science PhD and don’t want to continue with academia, this is a good option to consider (although you should probably consider other ways of using your software and tech skills first). Similarly, you can learn data analysis, statistics, and modelling by taking the right graduate programme.

Data scientists are well paid — offering the potential to earn to give — and have high job satisfaction.

To learn more, see our full career review of data science.

Depending on how you’re aiming to have an impact with these skills (see the next section), you may also need to develop other skills. We’ve written about some other relevant skill sets.

For more, see our full list of impactful skills.

Once you have these skills, how can you best apply them to have an impact?

The problem you work on is probably the biggest driver of your impact. The first step is to make an initial assessment of which problems you think are most pressing (even if you change your mind over time, you’ll need to decide where to start working).

Once you’ve done that, the next step is to identify the highest-potential ways to use software and tech skills to help solve your top problems.

There are five broad categories here.

While some of these options (like protecting dangerous information) will require building up some more specialised skills, being a great programmer will let you move around most of these categories relatively easily, and the earning to give options mean you’ll always have a pretty good backup plan.

Find jobs that use software and tech skills

See our curated list of job opportunities for this path.

Specialist knowledge relevant to a top problem
https://80000hours.org/skills/specialist-knowledge/

    What specialist knowledge is valuable?

    Many highly specific areas of knowledge seem applicable to solving the world’s most pressing problems, especially risks posed by biotechnology and artificial intelligence.

    In particular we’d highlight:

    • Subfields of biology relevant to pandemic prevention. Working on many of the possible technical solutions to reduce the risk of pandemics will require expertise in parts of biology. We’d particularly highlight synthetic biology, mathematical biology, virology, immunology, pharmacology, and vaccinology. This expertise can also be helpful for pursuing a biorisk-focused policy career. (Read more about careers to prevent catastrophic pandemics.)
• AI hardware. Specialised hardware is a crucial input to the development of frontier AI systems. As a result, we expect expertise in AI hardware to become increasingly important to the governance of AI systems. (Read more about becoming an expert in AI hardware.)
    • Economics. Understanding economics can be valuable in a huge range of impactful roles when combined with another skill set. For example, economics research is crucial for conducting global priorities research and improving decision making in large institutions. And a knowledge of economics can also support you in building policy and political skills, particularly for policy design and governance research.
    • Other areas we sometimes recommend include history, knowledge of China, and law.

    Of course, whatever skill set you focus on, you’ll likely need to build some specialist knowledge — for example, if you focus on policy and political skills, you’ll need to gain specialist knowledge in the area of policy you’re working in. Similarly, if you build software and tech skills, you could consider gaining specialist knowledge in machine learning or information security. The idea of the above list is just to highlight areas we think seem particularly valuable that you might not otherwise consider learning about.

    How should you get started building specialist knowledge?

    Each area is very different, so it’s hard to give any specific advice that applies to all of them.

    Besides the articles on specific areas linked above, we’d suggest checking out:

    All our career reviews relevant to building specialist knowledge

Research skills
https://80000hours.org/skills/research/

    Norman Borlaug was an agricultural scientist. Through years of research, he developed new, high-yielding, disease-resistant varieties of wheat.

    It might not sound like much, but as a result of Borlaug’s research, wheat production in India and Pakistan almost doubled between 1965 and 1970, and formerly famine-stricken countries across the world were suddenly able to produce enough food for their entire populations. These developments have been credited with saving up to a billion people from famine,1 and in 1970, Borlaug was awarded the Nobel Peace Prize.

    Many of the highest-impact people in history, whether well-known or completely obscure, have been researchers.

    In a nutshell: Talented researchers are a key bottleneck facing many of the world’s most pressing problems. That doesn’t mean you need to become an academic. While that’s one option (and academia is often a good place to start), lots of the most valuable research happens elsewhere. It’s often cheap to try out developing research skills while at university, and if it’s a good fit for you, research could be your highest impact option.

    Key facts on fit

    You might be a great fit if you have the potential to become obsessed with high-impact questions, have high levels of grit and self-motivation, are open to new ideas, are intelligent, and have a high degree of intellectual curiosity. You’ll also need to be a good fit for the particular area you’re researching (e.g. you might need quantitative ability).

    Why are research skills valuable?

    Not everyone can be a Norman Borlaug, and not every discovery gets adopted. Nevertheless, we think research can often be one of the most valuable skill sets to build — if you’re a good fit.

We’ll argue that:

• Research seems to have been extremely high-impact historically.
• There are good theoretical reasons to think research will be high-impact.
• Research skills seem extremely useful to the problems we think are most pressing.
• If you’re a good fit, you can have much more impact than the average researcher.
• Depending on which subject you focus on, you may have good backup options.

Together, this suggests that research skills could be particularly useful for having an impact.

Later, we’ll look at what building research skills typically involves, how to evaluate your fit, how to get started, and how best to apply these skills to have an impact.

    Research seems to have been extremely high-impact historically

    If we think about what has most improved the modern world, much can be traced back to research: advances in medicine such as the development of vaccines against infectious diseases, developments in physics and chemistry that led to steam power and the industrial revolution, and the invention of the modern computer, an idea which was first proposed by Alan Turing in his seminal 1936 paper On Computable Numbers.2

    Many of these ideas were discovered by a relatively small number of researchers — but they changed all of society. This suggests that these researchers may have had particularly large individual impacts.

Dr. Nalin helped to invent oral rehydration therapy, saving millions of lives with a simple innovation: giving patients with diarrhoea water mixed with salt and sugar.

    That said, research today is probably lower-impact than in the past. Research is much less neglected than it used to be: there are nearly 25 times as many researchers today as there were in 1930.3 It also turns out that more and more effort is required to discover new ideas, so each additional researcher probably has less impact than those that came before.4

    However, even today, a relatively small fraction of people are engaged in research. As an approximation, only 0.1% of the population are academics,5 and only about 2.5% of GDP is spent on research and development. If a small number of people account for a large fraction of progress, then on average each person’s efforts are significant.

    Moreover, we still think there’s a good case to be made for research being impactful on average today, which we cover in the next two sections.

    There are good theoretical reasons to think that research will be high-impact

There’s little commercial incentive to focus on the most socially valuable research. And most researchers don’t get rich, even if their discoveries are extremely valuable. Alan Turing made no money from the invention of the computer, which is today a multibillion-dollar industry. This is because the benefits of research often come a long time in the future and can’t usually be protected by patents. This means that if you care more about social impact than profit, you have an edge.

Research is also a route to leverage. When new ideas are discovered, they can be spread incredibly cheaply, so it’s a way that a single person can change a field. And innovations are cumulative — once an idea has been discovered, it’s added to our stock of knowledge and, in the ideal case, becomes available to everyone. Even ideas that become outdated often speed up the important future discoveries that supersede them.

    Research skills seem extremely useful to the problems we think are most pressing

    When you look at our list of the world’s most pressing problems — like preventing future pandemics or reducing risks from AI systems — expert researchers seem like a key bottleneck.

    For example, to reduce the risk posed by engineered pandemics, we need people who are talented at research to identify the biggest biosecurity risks and to develop better vaccines and treatments.

    To ensure that developments in AI are implemented safely and for the benefit of humanity, we need technical experts thinking hard about how to design machine learning systems safely and policy researchers to think about how governments and other institutions should respond. (See this list of relevant research questions.)

    And to decide which global priorities we should spend our limited resources on, we need economists, mathematicians, and philosophers to do global priorities research. For example, see the research agenda of the Global Priorities Institute at Oxford.

    We’re not sure why so many of the most promising ways to make progress on the problems we think are most pressing involve research, but it may well be due to the reasons in the section above — research offers huge opportunities for leverage, so if you take a hits-based approach to finding the best solutions to social problems, it’ll often be most attractive.

    In addition, our focus on neglected problems often means we focus on smaller and less developed areas, and it’s often unclear what the best solutions are in these areas. This means that research is required to figure this out.

    For more examples, and to get a sense of what you might be able to work on in different fields, see this list of potentially high-impact research questions, organised by discipline.

    If you’re a good fit, you can have much more impact than the average

    The sections above give reasons why research can be expected to be impactful in general. But as we’ll show below, the productivity of individual researchers probably varies a great deal (and more than in most other careers). This means that if you have reason to think your degree of fit is better than average, your expected impact could be much higher than the average.

    Depending on which subject you focus on, you may have good backup options

    Pursuing research helps you develop deep expertise on a topic, problem-solving, and writing skills. These can be useful in many other career paths. For example:

    • Many research areas can lead to opportunities in policymaking, since relevant technical expertise is valued in some of these positions. You might also have opportunities to advise policymakers and the public as an expert.
    • The expertise and credibility you can develop by focusing on research (especially in academia) can put you in a good position to switch your focus to communicating important ideas, especially those related to your speciality, either to the general public, policymakers, or your students.
    • If you specialise in an applied quantitative subject, it can open up certain high-paying jobs, such as quantitative trading or data science, which offer good opportunities for earning to give.

    Some research areas will have much better backup options than others — lots of jobs value applied quantitative skills, so if your research is quantitative you may be able to transition into work in effective nonprofits or government. A history academic, by contrast, has many fewer clear backup options outside of academia.

    What does building research skills typically involve?

    By ‘research skills’ we broadly mean the ability to make progress solving difficult intellectual problems.

    We find it especially useful to roughly divide research skills into three forms:

    Academic research

    Building academic research skills is the most predefined route. The focus is on answering relatively fundamental questions which are considered valuable by a specific academic discipline. This can be impactful either through generally advancing a field of research that’s valuable to society or finding opportunities to work on socially important questions within that field.

    Turing was an academic. He didn’t just invent the computer — during World War II he developed code-breaking machines that allowed the Allies to be far more effective against Nazi U-boats. Some historians estimate this enabled D-Day to happen a year earlier than it would have otherwise.6 Since World War II resulted in 10 million deaths per year, Turing may have saved about 10 million lives.

Turing was instrumental in developing the computer. Sadly, he was prosecuted for being gay, perhaps contributing to his suicide in 1954.

    We’re particularly excited about academic research in subfields of machine learning relevant to reducing risks from AI, subfields of biology relevant to preventing catastrophic pandemics, and economics — we discuss which fields you should enter below.

    Academic careers are also excellent for developing credibility, leading to many of the backup options we looked at above, especially options in communicating important ideas or policymaking.

Academia is relatively unusual in how flexibly you can use your time. This can be a big advantage — you really get time to think deeply and carefully about things — but it can also be a hindrance, depending on your work style.

    See more about what academia involves in our career review on academia.

    Practical but big picture research

Academia rewards a focus on questions that can be decisively answered with the methods of the field. However, the most important questions can rarely be answered rigorously — the best we can do is look at many weak forms of evidence and come to a reasonable overall judgement. This means that while some of this research happens in academia, it can be hard to do there.

    Instead, this kind of research is often done in nonprofit research institutes, e.g. the Centre for the Governance of AI or Our World in Data, or independently.

    Your focus should be on answering the questions that seem most important (given your view of which global problems most matter) through whatever means are most effective.

    Some examples of questions in this category that we’re especially interested in include:

    • How likely is a pandemic worse than COVID-19 in the next 10 years?
    • How difficult is the AI alignment problem going to be to solve?
    • Which global problems are most pressing?
    • Is the world getting better or worse over time?
    • What can we learn from the history of philanthropy about which forms of philanthropy might be most effective?

    You can see a longer list of ideas in this article.

    Someone we know who’s had a big impact with research skills is Ajeya Cotra. Ajeya initially studied electrical engineering and computer science at UC Berkeley. In 2016, she joined Open Philanthropy as a grantmaker.7 Since then she’s worked on a framework for estimating when transformative AI might be developed, how worldview diversification could be applied to allocating philanthropic budgets, and how we might accidentally teach AI models to deceive us.

Ajeya was moved by many of the conclusions of effective altruism, which eventually led to her researching the transformative effects of AI.

    Applied research

Then there’s applied research. This is often done within companies or nonprofits, like think tanks (although again, there’s also plenty of applied research happening in academia). Here the focus is on solving a more immediate practical problem (and, if pursued by a company, one where it might be possible to profit from the solution) — and there’s lots of overlap with engineering skills. For example:

    • Developing new vaccines
    • Creating new types of solar cells or nuclear reactors
    • Developing meat substitutes

    Neel was doing an undergraduate degree in maths when he decided that he wanted to work in AI safety. Our team was able to introduce Neel to researchers in the field and helped him secure internships in academic and industry research groups. Neel didn’t feel like he was a great fit for academia — he hates writing papers — so he applied to roles in commercial AI research labs. He’s now a research engineer at DeepMind. He works on mechanistic interpretability research which he thinks could be used in the future to help identify potentially dangerous AI systems before they can cause harm.

Neel’s machine learning research is heavily mathematical — but has clear applications to reducing the risks from advanced AI.

    We also see “policy research” — which aims to develop better ideas for public policy — as a form of applied research.

    Stages of progression through building and using research skills

    These different forms of research blur into each other, and it’s often possible to switch between them during a career. In particular, it’s common to begin in academic research and then switch to more applied research later.

    However, while the skill sets contain a common core, someone who can excel in intellectual academic research might not be well-suited to big picture practical or applied research.

    The typical stages in an academic career involve the following steps:

1. Pick a field. This should be heavily based on personal fit (where you expect to be most successful and enjoy your work the most), though it’s also useful to think about which fields offer the best opportunities to help tackle the problems you think are most pressing and give you expertise that’s especially useful for those problems; use this at least as a tie-breaker. (Read more about choosing a field.)
    2. Earn a PhD.
    3. Learn your craft and establish your career — find somewhere you can get great mentorship and publish a lot of impressive papers. This usually means finding a postdoc with a good group and then temporary academic positions.
    4. Secure tenure.
    5. Focus on the research you think is most socially valuable (or otherwise move your focus towards communicating ideas or policy).

    Academia is usually seen as the most prestigious path…within academia. But non-academic positions can be just as impactful — and often more so since you can avoid some of the dysfunctions and distractions of academia, such as racing to get publications.

    At any point after your PhD (and sometimes with only a master’s), it’s usually possible to switch to applied research in industry, policy, nonprofits, and so on, though typically you’ll still focus on getting mentorship and learning for at least a couple of years. And you may also need to take some steps to establish your career enough to turn your attention to topics that seem more impactful.

    Note that from within academia, the incentives to continue with academia are strong, so people often continue longer than they should!

    If you’re focused on practical big picture research, then there’s less of an established pathway, and a PhD isn’t required.

    Besides academia, you could attempt to build these skills in any job that involves making difficult, messy intellectual judgement calls, such as investigative journalism, certain forms of consulting, buy-side research in finance, think tanks, or any form of forecasting.

    Personal fit is perhaps more important for research than other skills

    The most talented researchers seem to differ hugely in their impact compared to typical researchers across a wide variety of metrics and according to the opinions of other researchers.

    For instance, when we surveyed biomedical researchers, they said that very good researchers were rare, and they’d be willing to turn down large amounts of money if they could get a good researcher for their lab.8 Professor John Todd, who works on medical genetics at Cambridge, told us:

    The best people are the biggest struggle. The funding isn’t a problem. It’s getting really special people[…] One good person can cover the ground of five, and I’m not exaggerating.

    This makes sense if you think the distribution of research output is very wide — that the very best researchers have a much greater output than the average researcher.

    How much do researchers differ in productivity?

    It’s hard to know exactly how spread out the distribution is, but there are several strands of evidence that suggest the variability is very high.

    Firstly, most academic papers get very few citations, while a few get hundreds or even thousands. An analysis of citation counts in science journals found that ~47% of papers had never been cited, more than 80% had been cited 10 times or less, but the top 0.1% had been cited more than 1,000 times. A similar pattern seems to hold across individual researchers, meaning that only a few dominate — at least in terms of the recognition their papers receive.

Citation count is a highly imperfect measure of research quality, so these figures shouldn’t be taken at face value. For instance, which papers get cited the most may depend at least partly on random factors, academic fashions, and “winner takes all” effects — papers that get noticed early end up being cited by everyone to back up a certain claim, even if they don’t actually represent the research that most advanced the field.

    However, there are other reasons to think the distribution of output is highly skewed.

    William Shockley, who won the Nobel Prize for the invention of the transistor, gathered statistics on all the research employees in national labs, university departments, and other research units, and found that productivity (as measured by total number of publications, rate of publication, and number of patents) was highly skewed, following a log-normal distribution.

    Shockley suggests that researcher output is the product of several (normally distributed) random variables — such as the ability to think of a good question to ask, figure out how to tackle the question, recognize when a worthwhile result has been found, write adequately, respond well to feedback, and so on. This would explain the skewed distribution: if research output depends on eight different factors and their contribution is multiplicative, then a person who is 50% above average in each of the eight areas will in expectation be 26 times more productive than average.9
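
A quick check of that arithmetic:

```python
# Shockley's multiplicative model: being 50% above average on each of
# eight multiplied factors compounds to roughly 26x average output.
print(1.5 ** 8)  # 25.62890625
```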

    When we looked at up-to-date data on how productivity differs across many different areas, we found very similar results. The bottom line is that research seems to perhaps be the area where we have the best evidence for output being heavy-tailed.

    Interestingly, while there’s a huge spread in productivity, the most productive academic researchers are rarely paid 10 times more than the median, since they’re on fixed university pay-scales. This means that the most productive researchers yield a large “excess” value to their field. For instance, if a productive researcher adds 10 times more value to the field than average, but is paid the same as average, they will be producing at least nine times as much net benefit to society. This suggests that top researchers are underpaid relative to their contribution, discouraging them from pursuing research and making research skills undersupplied compared to what would be ideal.

    Can you predict these differences in advance?

    Practically, the important question isn’t how big the spread is, but whether you could — early on in your career — identify whether or not you’ll be among the very best researchers.

    There’s good news here! At least in scientific research, these differences also seem to be at least somewhat predictable ahead of time, which means the people entering research with the best fit could have many times more expected impact.

    In a study, two IMF economists looked at maths professors’ scores in the International Mathematical Olympiad — a prestigious maths competition for high school students. They concluded that each additional point scored on the International Mathematics Olympiad “is associated with a 2.6 percent increase in mathematics publications and a 4.5 percent increase in mathematics citations.”

    We looked at a range of data on how predictable productivity differences are in various areas and found that they’re much more predictable in research.

    What does this mean for building research skills?

    The large spread in productivity makes building strong research skills a lot more promising if you’re a better fit than average. And if you’re a great fit, research can easily become your best option.

    And while these differences in output are not fully predictable at the start of a career, the spread is so large that it’s likely still possible to predict differences in productivity with some reliability.

    This also means you should mainly be evaluating your long-term expected impact in terms of your chances of having a really big success.

    That said, don’t rule yourself out too early. Firstly, many people systematically underestimate their skills. (Though others overestimate them!) Also, the impact of research can be so large that it’s often worth trying it out, even if you don’t expect you’ll succeed. This is especially true because the early steps of a research career often give you good career capital for many other paths.

    How to evaluate your fit

    How to predict your fit in advance

    It’s hard to predict success in advance, so we encourage an empirical approach: see if you can try it out and look at your track record.

    You probably have some track record in research: many of our readers have some experience in academia from doing a degree, whether or not they intended to go into academic research. Standard academic success can also point towards being a good fit (though is nowhere near sufficient!):

    • Did you get top grades at undergraduate level (a 1st in the UK or a GPA over 3.5 in the US)?
    • If you do a graduate degree, what’s your class rank (if you can find that out)? If you do a PhD, did you manage to author an article in a top journal (although note that this is easier in some disciplines than others)?

    Ultimately, though, your academic track record isn’t going to tell you anywhere near as much as actually trying out research. So it’s worth looking for ways to cheaply try out research (which can be easy if you’re at college). For example, try doing a summer research project and see how it goes.

Some of the key traits that suggest you might be a good fit for building research skills seem to be:

    • Intelligence (Read more about whether intelligence is important for research.)
    • The potential to become obsessed with a topic (Becoming an expert in anything can take decades of focused practice, so you need to be able to stick with it.)
    • Relatedly, high levels of grit, self-motivation, and — especially for independent big picture research, but also for research in academia — the ability to learn and work productively without a traditional manager or many externally imposed deadlines
    • Openness to new ideas and intellectual curiosity
    • Good research taste, i.e. noticing when a research question matters a lot for solving a pressing problem

    There are a number of other cheap ways you might try to test your fit.

    Something you can do at any stage is practice research and research-based writing. One way to get started is to try learning by writing.

    You could also try:

    • Finding out what the prerequisites/normal backgrounds of people who go into a research area are to compare your skills and experience to them
• Reading key research in your area, trying to contribute to discussions with other researchers (e.g. via a blog or Twitter), and getting feedback on your ideas
    • Talking to successful researchers in a field and asking what they look for in new researchers

    How to tell if you’re on track

    Here are some broad milestones you could aim for while becoming a researcher:

    • You’re successfully devoting time to building your research skills and communicating your findings to others. (This can often be the hardest milestone to hit for many — it can be hard to simply sustain motivation and productivity given how self-directed research often needs to be.)
    • In your own judgement, you feel you have made and explained multiple novel, valid, nontrivially important (though not necessarily earth-shattering) points about important topics in your area.
    • You’ve had enough feedback (comments, formal reviews, personal communication) to feel that at least several other people (whose judgement you respect and who have put serious time into thinking about your area) agree, and (as a result) feel they’ve learned something from your work. For example, lots of this feedback could come from an academic supervisor. Make sure you’re asking people in a way that gives them affordance to say you’re not doing well.
    • You’re making meaningful connections with others interested in your area — connections that seem likely to lead to further funding and/or job opportunities. This could be from the organisations most devoted to your topics of interest; but, there could also be a “dissident” dynamic in which these organisations seem uninterested and/or defensive, but others are noticing this and offering help.

    If you’re finding it hard to make progress in a research environment, it’s very possible that this is the result of that particular environment, rather than the research itself. So it can be worth testing out multiple different research jobs before deciding this skill set isn’t for you.

    Within academic research

    Academia has clearly defined stages, so you can see how you’re performing at each of these.

    Very roughly, you can try asking “How quickly and impressively is my career advancing, by the standards of my institution and field?” (Be careful to consider the field as a whole, rather than just your immediate peers, who might be very different from average.) Academics with more experience than you may be able to help give you a clear idea of how things are going.

    We go through this in detail in our review of academic research careers.

    Within independent research

    As a very rough guideline, people who are an excellent fit for independent research can often reach the broad milestones above with a year of full-time effort purely focusing on building a research skill set, or 2–3 years of 20%-time independent effort (i.e. one day per week).

    Within research in industry or policy

    The stages here can look more like an organisation-building career, and you can also assess your fit by looking at your rate of progression through the organisation.

    How to get started building research skills

    As we mentioned above, if you’ve done an undergraduate degree, one obvious pathway into research is to go to graduate school (read our advice on choosing a graduate programme) and then attempt to enter academia before deciding whether to continue or pursue positions outside of academia later in your career.

If you take the academic path, then the next steps are relatively clear. You’ll want to try to get excellent grades in your undergraduate degree and your master’s, ideally gain some kind of research experience in your summers, and then enter the best PhD programme you can. From there, focus on learning your craft by working under the best researcher you can find as a mentor and working in a top hub for your field. Try to publish as many papers as possible, since that’s required to land an academic position.

It’s also not necessary to go to graduate school to become a great researcher (though this depends a lot on the field), especially if you’re very talented. For instance, we interviewed Chris Olah, who is working on AI research without even an undergraduate degree.

    You can enter many non-academic research jobs without a background in academia. So one starting point for building up research skills would be getting a job at an organisation specifically focused on the type of question you’re interested in. For examples, take a look at our list of recommended organisations, many of which conduct non-academic research in areas relevant to pressing problems.

    More generally, you can learn research skills in any job that heavily features making difficult intellectual judgement calls and bets, preferably on topics that are related to the questions you’re interested in researching. These might include jobs in finance, political analysis, or even nonprofits.

    Another common route — depending on your field — is to develop software and tech skills and then apply them at research organisations. For instance, here’s a guide to how to transition from software engineering into AI safety research.

    If you’re interested in doing practical big-picture research (especially outside academia), it’s also possible to establish your career through self-study and independent work — during your free time or on scholarships designed for this (such as EA Long-Term Future Fund grants and Open Philanthropy support for individuals working on relevant topics).

    Some example approaches you might take to self-study:

    • Closely and critically review some pieces of writing and argumentation on relevant topics. Explain the parts you agree with as clearly as you can and/or explain one or more of your key disagreements.
    • Pick a relevant question and write up your current view and reasoning on it. Alternatively, write up your current view and reasoning on some sub-question that comes up as you’re thinking about it.
    • Then get feedback, ideally from professional researchers or those who use similar kinds of research in their jobs.

    It could also be beneficial to start with some easier versions of this sort of exercise, such as:

    • Explaining or critiquing interesting arguments made on any topic you find motivating to write about
    • Writing fact posts
    • Reviewing the academic literature on any topic of interest and trying to reach and explain a bottom-line conclusion

    In general, it’s not necessary to obsess over being “original” or having some new insight at the beginning. You can learn a lot just by trying to write up your current understanding.

    Choosing a research field

    When you’re getting started building research skills, there are three factors to consider in choosing a field:

    1. Personal fit — what are your chances of being a top researcher in the area? Even if you work on an important question, you won’t make much difference if you’re not particularly good at it or motivated to work on the problem.
    2. Impact — how likely is it that research in your field will contribute to solving pressing problems?
    3. Back-up options — how will the skills you build open up other options if you decide to change fields (or leave research altogether)?

    One way to go about making a decision is to roughly narrow down fields by relevance and back-up options and then pick among your shortlist based on personal fit.

    We’ve found that, especially when they’re getting started building research skills, people sometimes think too narrowly about what they can be good at and enjoy. Instead, they end up pigeonholing themselves in a specific area (for example being restricted by the field of their undergraduate degree). This can be harmful because it means people who could contribute to highly important research don’t even consider it. This increases the importance of writing a broad list of possible areas to research.

    Given our list of the world’s most pressing problems, we think some of the most promising fields to do research within are as follows:

    • Fields relevant to artificial intelligence, especially machine learning, but also computer science more broadly. This is mainly to work on AI safety directly, though there are also many opportunities to apply machine learning to other problems (as well as many back-up options).
    • Biology, particularly synthetic biology, virology, public health, and epidemiology. This is mainly for biosecurity.
    • Economics. This is for global priorities research, development economics, or policy research relevant to any cause area, especially global catastrophic risks.
    • Engineering — read about developing and using engineering skills to have an impact.
    • International relations/political science, including security studies and public policy — these enable you to do research into policy approaches to mitigating catastrophic risks and are also a good route into careers in government and policy more broadly.
    • Mathematics, including applied maths or statistics (or even physics). This may be a good choice if you’re very uncertain, as it teaches you skills that can be applied to a whole range of different problems — and lets you move into most of the other fields we list. It’s relatively easy to move from a mathematical PhD into machine learning, economics, biology, or political science, and there are opportunities to apply quantitative methods to a wide range of other fields. They also offer good back-up options outside of research.
    • There are many important topics in philosophy and history, but these fields are unusually hard to advance within, and don’t have as good back-up options. (We do know lots of people with philosophy PhDs who have gone on to do other great, non-philosophy work!)

    However, many different kinds of research skills can play a role in tackling pressing global problems.

    Choosing a sub-field can sometimes be almost as important as choosing a field. For example, in some sciences the particular lab you join will determine your research agenda — and this can shape your entire career.

    And as we’ve covered, personal fit is especially important in research. This can mean it’s easily worth going into a field that seems less relevant on average if you are an excellent fit. (This is due both to the value of the research you might produce and the excellent career capital that comes from becoming top of an academic field.)

    For instance, while we most often recommend the fields above, we’d be excited to see some of our readers go into history, psychology, neuroscience, and a whole number of other fields. And if you have a different view of global priorities from us, there might be many other highly relevant fields.

    Once you have these skills, how can you best apply them to have an impact?

    Richard Hamming used to annoy his colleagues by asking them “What’s the most important question in your field?”, and then after they’d explained, following up with “And why aren’t you working on it?”

    You don’t always need to work on the very most important question in your field, but Hamming has a point. Researchers often drift into a narrow speciality and can get detached from the questions that really matter.

    Now let’s suppose you’ve chosen a field, learned your craft, and are established enough that you have some freedom about where to focus. Which research questions should you focus on?

    Which research topics are the highest-impact?

    Charles Darwin travelled the oceans to carefully document different species of birds on a small collection of islands — documentation which later became fuel for the theory of evolution. This illustrates how hard it is to predict which research will be most impactful.

    What’s more, we can’t know what we’re going to discover until we’ve discovered it, so research has an inherent degree of unpredictability. There’s certainly an argument for curiosity-driven research without a clear agenda.

    That said, we think it’s also possible to increase your chances of working on something relevant, and the best approach is to try to find topics that both personally motivate you and seem more likely than average to matter. Here are some approaches to doing that.

    Using the problem framework

    One approach is to ask yourself which global problems you think are most pressing, and then try to identify research questions that are:

    • Important to making progress on those problems (i.e. if this question were answered, it would lead to more progress on these problems)
    • Neglected by other researchers (e.g. because they’re at the intersection of two fields, unpopular for bad reasons, or new)
    • Tractable (i.e. you can see a path to making progress)

    The best research questions will score at least moderately well on all parts of this framework. Building a perpetual motion machine is extremely important — if we could do it, then we’d solve our energy problems — but we have good reason to think it’s impossible, so it’s not worth working on. Similarly, a problem can be important but already have the attention of many extremely talented researchers, meaning your extra efforts won’t go very far.

    Finding these questions, however, is difficult. Often, the only way to identify a particularly promising research question is to be an expert in that field! That’s because (when researchers are doing their jobs), they will be taking the most obvious opportunities already.

However, the incentives within research rarely perfectly line up with the questions that most matter (especially if you have unusual values, like more concern for future generations or animals). This means that some questions often get unfairly neglected. If you care a lot about positive impact and have some slack, you can have a greater-than-average impact by looking for them.

    Below are some more ways of finding those questions (which you can use in addition to directly applying the framework above).

    Rules of thumb for finding unfairly neglected questions

    • There’s little money in answering the question. This can be because the problem mostly affects poorer people, people who are in the future, or non-humans, or because it involves public goods. This means there’s little incentive for businesses to do research on this question.
    • The political incentives to answer the question are missing. This can happen when the problem hurts poorer or otherwise marginalised people, people who tend not to organise politically, people in countries outside the one where the research is most likely to get done, people who are in the future, or non-humans. This means there’s no incentive for governments or other public actors to research this question.
    • It’s new, doesn’t already have an established discipline, or is at the intersection of two disciplines. The first researchers in an area tend to take any low-hanging fruit, and it gets harder and harder from there to make big discoveries. For example, the rate of progress within machine learning is far higher than the rate of progress within theoretical physics. At the same time, the structure of academia means most researchers stay stuck within the field they start in, and it can be hard to get funding to branch out into other areas. This means that new fields or questions at the intersection of two disciplines often get unfairly neglected and therefore provide opportunities for outsized impact.
    • There is some aspect of human irrationality that means people don’t correctly prioritise the issue. For instance, some issues are easy to visualise, which makes them more motivating to work on. People are scope blind, which means they’re likely to neglect the issues with the very biggest scale. They’re also bad at reasoning about low-probability issues, which can make them either over-invest or under-invest in them.
    • Working on the question is low status. In academia, research that’s intellectually interesting and fits the research standards of the discipline is high status. Also, mathematical and theoretical work tends to be seen as higher status (and therefore helps to progress your career). But these markers don’t correlate that well with the social value of the question.
    • You’re bringing new skills or a new perspective to an established area. Progress often comes in science from bringing the techniques and insights of one field into another. For instance, Kahneman started a revolution in economics by applying findings from psychology. Cross-over is an obvious approach but is rarely used because researchers tend to be immersed in their own particular subject.

    If you think you’ve found an unfairly neglected research question, it’s worth checking whether the question is answerable. People might be avoiding the question because it’s just extremely difficult to find an answer. Or perhaps progress isn’t possible at all. Ask yourself, “If there were progress on this question, how would we know?”

    Finally, as we’ve discussed, personal fit is particularly important in research. So position yourself to work on questions where you maximise your chances of producing top work.

    Find jobs that use research skills

    If you already have these skills or are developing them, and you're ready to start looking at job opportunities that are currently accepting applications, see our curated list of opportunities for this skill set:

      View all opportunities

      Career paths we’ve reviewed that use these skills

      Learn more about research

      See all our articles and podcasts on research careers.

      Organisation-building https://80000hours.org/skills/organisation-building/ Mon, 18 Sep 2023 10:39:52 +0000 https://80000hours.org/?post_type=skill_set&p=83652
      When most people think of careers that “do good,” the first thing they think of is working at a charity.

      The thing is, lots of jobs at charities just aren’t that impactful.

      Some charities focus on programmes that don’t work, like Scared Straight, which actually caused kids to commit more crimes. Others focus on ways of helping that, while thoughtful and helpful, don’t have much leverage, like knitting individual sweaters for penguins affected by oil spills (this actually happened!) instead of funding large-scale ocean cleanup projects.

      [Image: a penguin wearing a knitted sweater]
      While this penguin certainly looks all warm and cosy, we’d guess that knitting each sweater one-by-one wouldn’t be the best use of an organisation’s time.

      But there are also many organisations out there — both for-profit and nonprofit — focused on pressing problems, implementing effective and scalable solutions, run by great teams, and in need of people.

      If you can build skills that are useful for helping an organisation like this, it could well be one of the highest-impact things you can do.

      In particular, organisations often need generalists able to do the bread and butter of building an organisation — hiring people, management, administration, communications, running software systems, crafting strategy, fundraising, and so on.

      We call these ‘organisation-building’ skills. They can be high impact because you can increase the scale and effectiveness of the organisation you’re working at, while also gaining skills that can be applied to a wide range of global problems in the future (and make you generally employable too).

      In a nutshell: Organisation-building skills — basically, skills that let you effectively and efficiently build, run, and generally boost an organisation you work for — can be extremely high impact if you use them to support an organisation working on an effective solution to a pressing problem. There are a wide variety of organisation-building skills, including operations, management, accounting, recruiting, communications, law, and so on. You could choose to become a generalist across several or specialise in just one.

      Key facts on fit

      In general, signs you’ll be a great fit include: you often find ways to do things better, really dislike errors, see issues that keep happening and think deeply about fixes, manage your time and plan complex projects, pick up new things fast, and really pay attention to details. But there is a very wide range of different roles, each with quite different requirements, especially in more specialised roles.

      Why are organisation-building skills valuable?

      A well-run organisation can take tens, hundreds, or even thousands of people working on solving the world’s most pressing problems and help them work together far more effectively.

      An employee with the right skills can often be a significant boost to an organisation, either by directly helping them deliver an impactful programme or by building the capacity of the organisation so that it can operate at a greater scale in the future. You could, for example, set up organisational infrastructure to enable the hiring of many more people in the future.

      What’s more, organisation-building skills can be applied at most organisations, which means you’ll have opportunities to help tackle many different global problems in the future. You’ll also be flexibly able to work on many different solutions to any given problem if you find better solutions later in your career.

      As an added bonus, the fact that pretty much all organisations need these skills means you’ll be employable if you decide to earn to give or step back from doing good altogether. In fact, organisational management skills seem to be among the most useful and highest paid in the economy in general.

      It can be even more valuable to help found a new organisation rather than build an existing one, though this is a particularly difficult step to take when you’re early in your career. (Read more on whether you should found an organisation early in your career.) See our profile on founding impactful organisations to learn more.

      What does organisation-building typically involve?

      A high-impact career using organisation-building skills typically involves these rough stages:

      1. Building generally useful organisational skills, such as operations, people management, fundraising, administration, software systems, finance, etc.
      2. Then applying those skills to help build (or found) high-impact organisations

      The day-to-day of an organisation-building role is going to vary a lot depending on the job.

      Here’s a possible description that could help build some intuition.

      Picture yourself working from an office or, increasingly, from your own home. You’ll spend lots of time on your computer — you might be planning, organising tasks, updating project timelines, reworking a legal brief, or contracting out some marketing. You’ll likely spend some time communicating via email or chatting with colleagues. Your day will probably involve a lot of problem solving, making decisions to keep things going.

      If you work for a small organisation, especially in the early stages, your “office” could be anywhere — a home office, a local coffee shop, or a shared workspace. If you manage people, you’ll conduct one-on-one meetings to provide feedback, set goals, and discuss personal development. In a project-oriented role, you might spend lots of time developing strategy, or analysing data to evaluate your impact.

      What skills are needed to build organisations?

      Organisation builders typically have skills in areas like:

      • Operations management
      • Project management (including setting objectives, metrics, etc.)
      • People management and coaching (Some manager jobs require specialised skills, but some just require general management-associated skills like leadership, interpersonal communication, and conflict resolution.)
      • Executive leadership (setting and achieving organisation-wide goals, making top-level decisions about budgeting, etc.)
      • Entrepreneurship
      • Recruiting
      • Fundraising
      • Marketing (which also benefits from communications skills)
      • Communications and public relations (which also benefits from communications skills)
      • Human resources
      • Office management
      • Events management
      • Assistant and administrative work
      • Finance and accounting
      • Corporate and nonprofit law

      Many organisations have a significant need for generalists who span several of these areas. If your aim is to take a leadership position, it’s useful to have a shallow knowledge of several.

      You can also pick just one skill to specialise in — especially for areas like law and accounting that tend to be their own track.

      Generally, larger organisations have a greater need for specialists, while those with under 50 employees hire more generalists.

      Example people

      How to evaluate your fit

      How to predict your fit in advance

      There’s no need to focus on the specific job or sector you work in now — it’s possible to enter organisation-building from a very wide variety of areas. We’ve even known academic philosophers who have transitioned to organisation-building!

      Some common initial indicators of fit might include:

      • You have an optimisation mindset. You frequently notice how things could be done more efficiently and have a strong internal drive to prevent avoidable errors and make things run more smoothly.
      • You intuitively engage in systems thinking and enjoy going meta. This is a bit difficult to summarise, but involves things like: you’d notice when people ask you similar questions multiple times and then think about how to prevent the issue from coming up again. For example: “Can you give me access to this doc” turns into “What went wrong such that this person didn’t already have access to everything they need? How can we improve naming conventions or sharing conventions in the future?”
      • You’re reliable, self-directed, able to manage your time well, and you can create efficient and productive plans and keep track of complex projects.
      • You might also be good at learning quickly and have high attention to detail.

      Of course, different types of organisation-building will require different skills. For example, being a COO or events manager requires stronger social and system-building skills, whereas working in finance requires less in the way of social skills but does require basic quantitative skills and perhaps more conscientiousness and attention to detail.

      If you’re really excited by a particular novel idea and have lots of energy and excitement for the idea, you might be a good fit for founding an organisation. (Read more about what it takes to successfully found a new organisation.)

      You should try doing some cheap tests first — these might include talking to someone who works at the organisation you’re interested in helping to build, volunteering to do a short project, or doing an internship. Then you might commit to working there for 2–24 months (being prepared to switch to something else if you don’t think you’re on track).

      How to tell if you’re on track

      All of these — individually or together — seem like good signs of being on track to build really useful organisation-building skills:

      • You get job offers (as a contractor or staff) at organisations you’d like to work for.
      • You’re promoted within your first two years.
      • You receive excellent performance reviews.
      • You’re asked to take on progressively more responsibility over time.
      • Your manager / colleagues suggest you might take on more senior roles in the future.
      • You ask your superiors for their honest assessment of your fit and they are positive (e.g. they tell you you’re in the top 10% of people they can imagine doing your role).
      • You’re able to multiply a superior’s time by over 2–20X, depending on the role type.
      • If you’re aiming to found a new organisation, you write one-page summaries of ideas for organisations you’d like to exist and get positive feedback on them from grantmakers and experts.
      • If founding a new organisation, you get seed funding from a major grantmaker, like Open Philanthropy, Longview Philanthropy, EA Funds, or a private donor.

      This said, if you don’t hit these milestones, you might still be a good fit for organisation-building — the issue might be that you’re at the wrong organisation or have the wrong boss.

      How to get started building organisation-building skills

      You can get started by finding any role that will let you start learning one of the skills listed above. Work in one specialisation will often give you exposure to the others, and it’s often possible to move between them.

      If you can do this at a high-performing organisation that’s also having a big impact right away, that’s great. If you’re aware of any organisations like these, it’s worth applying just in case.

      But, unfortunately, this is often not possible, especially if you’re fresh out of college, for a number of reasons:

      • The organisations have limited mentorship capacity, so they most often hire people with a couple of years of experience rather than those fresh out of college (though there are exceptions) and often aren’t in a good position to help you become excellent at these skills.
      • These organisations usually hire people who already have some expertise in the problem area they’re working on (e.g. AI safety, biosecurity), as these issues involve specialised knowledge.
      • We chose our recommended problems in large part because they’re unusually neglected. But the fact that they’re neglected also means there aren’t many open positions or training programmes.

      As a result, early in your career it can easily be worth pursuing roles at organisations that don’t have much impact in order to build your skills.

      The way to do this is to work at any organisation that’s generally high-performing, especially if you can work under someone who’s a good manager and will mentor you — the best way to learn how to run an organisation is to learn from people who are already excellent at this skill.

      Then, try to advance as quickly as you can within that organisation, or move to higher-responsibility roles in other organisations after 1–3 years of high performance.

      It can also help if the organisation is small but rapidly growing, since that usually makes it much easier to get promoted — and if the organisation succeeds in a big way, that will give you a lot of options in the future.

      In a small organisation you can also try out a wider range of roles, helping you figure out which aspects of organisation-building are the best fit for you and giving you the broad background that’s useful for leadership roles in the future. Moreover, many of the organisations we think are doing the best work on the most pressing problems are startups, so being used to this kind of environment can be an advantage.

      One option within this category we especially recommend is to consider becoming an early employee at a tech startup.

      If you pick well, working at a tech startup gives you many of the advantages of working at a small, growing, high-performing organisation mentioned above, while also offering high salaries and an introduction to the technology sector. (This is even better if you can find an organisation that will let you learn about artificial intelligence or synthetic biology.)

      We’ve advised many people who have developed organisation-building skills in startups and then switched to nonprofit work (or earned to give), while having good backup options.

      That said, smaller organisations have downsides such as being more likely to fail and less mentorship capacity. Many are also poorly run. So it’s important to pick carefully.

      Another option to consider in this category is working at a leading AI lab, because they can often offer good training, look impressive on your CV, and let you learn about AI. That said, you’ll need to think carefully about whether your work could be accelerating the risks from AI as well.

      One of the most common ways to build these skills is to work in large tech companies, consulting or professional services (or more indirectly, to train as a lawyer or in finance). These are most useful for learning how to apply these skills in very large corporate and government organisations, or to build a speciality like accounting. We think there are often more direct ways to do useful work on the problems we think are most pressing, but these prestigious corporate jobs can still be the best option for some.

      However, it’s important to remember you can build organisation-building skills in any kind of organisation: from nonprofits to academic research institutes to government agencies to giant corporations. What most matters is that you’re working with people who have this skill, who are able to train you.

      Should you found your own organisation early in your career?

      For a few people, founding an organisation fairly early in your career could be a fantastic career step. Whether or not the organisation you start succeeds, along the way you could gain strong organisation-building (and other) skills and a lot of career capital.

      We think you should be ambitious when deciding career steps, and it often makes sense to pursue high-upside options first when you’re doing some career exploration.

      This is particularly true if you:

      • Have an idea that you’ve seriously thought about, stress tested, and got positive feedback on from relevant experts
      • Have real energy and excitement for your idea (not for the idea of being an entrepreneur)
      • Understand that you’re likely to fail, and have good backup plans in place for that

      It can be hard to figure out if your idea is any good, or if you’ll be any good at this, in advance. One rule of thumb is that if, after six months to a year of work, you can be accepted to a top incubator (like Y Combinator), you’re probably on track. But if you can’t get into a top incubator, you should consider trying to build organisation-building skills in a different way (or try building a completely different skill set).

      There are many downsides of working on your own projects. In particular, you’ll get less direct feedback and mentorship, and your efforts will be spread thinly across many different types of tasks and skills, making it harder to develop specialist expertise.
      To learn more, see our article on founding new projects tackling top problems.

      Find jobs that use organisation-building skills

      See our curated list of job opportunities for this path, which you can filter by ‘management’ and ‘operations’ to find opportunities in this category (though there will also be jobs outside those filters where you can apply organisation-building skills).

        View all opportunities

        Once you have these skills, how can you best apply them to have an impact?

        The problem you work on is probably the biggest driver of your impact, so the first step is to decide which problems you think are most pressing.

        Once you’ve done that, the next step is to identify the highest-potential organisations working on your top problems.

        In particular, look for organisations that:

        1. Implement an effective solution, or one that has a good chance of having a big impact (even if it might not work)
        2. Have the potential to grow
        3. Are run by a great team
        4. Are in need of your skills

        These organisations will most often be nonprofits, but they could also be research institutes, political organisations, or for-profit companies with a social mission.1

        For specific ideas, see our list of recommended organisations. You can also find longer lists of suggestions within each of our problem profiles.

        Finally, see if you can get a job at one of these organisations that effectively uses your specific skills. If you can’t, that’s also fine — you can apply your skills elsewhere, for example through earning to give, and be ready to switch into working for a high-impact organisation in the future.

        Career paths we’ve reviewed that use organisation-building skills

        These are some reviews of career paths we’ve written that use ‘organisation-building’ skills:

        Preventing catastrophic pandemics https://80000hours.org/problem-profiles/preventing-catastrophic-pandemics/ Thu, 23 Apr 2020 13:57:25 +0000 https://80000hours.org/?page_id=69550
        Some of the deadliest events in history have been pandemics. COVID-19 demonstrated that we’re still vulnerable to these events, and future outbreaks could be far more lethal.

        In fact, we face the possibility of biological disasters that are worse than ever before due to developments in technology.

        The chances of such catastrophic pandemics — bad enough to potentially derail civilisation and threaten humanity’s future — seem uncomfortably high. We believe this risk is one of the world’s most pressing problems.

        And there are a number of practical options for reducing global catastrophic biological risks (GCBRs). So we think working to reduce GCBRs is one of the most promising ways to safeguard the future of humanity right now.

        Summary

        Scale

        Pandemics — especially engineered pandemics — pose a significant risk to the existence of humanity. Though the risk is difficult to assess, some researchers estimate that there is a greater than 1 in 10,000 chance of a biological catastrophe leading to human extinction within the next 100 years, and potentially as high as 1 in 100. (See below.) And a biological catastrophe killing a large percentage of the population is even more likely — and could contribute to existential risk.

        Neglectedness

        Pandemic prevention is currently under-resourced. Even in the aftermath of the COVID-19 outbreak, spending on biodefense in the US, for instance, has only grown modestly — from an estimated $17 billion in 2019 to $24 billion in 2023.

        And little of existing pandemic prevention funding is specifically targeted at preventing biological disasters that could be most catastrophic.

        Solvability

        There are promising approaches to improving biosecurity and reducing pandemic risk, including research, policy interventions, and defensive technology development.

        Why focus your career on preventing severe pandemics?

        COVID-19 highlighted our vulnerability to worldwide pandemics and revealed weaknesses in our ability to respond. Despite advances in medicine and public health, around seven million deaths worldwide from the disease have been recorded, and many estimates put the figure far higher.

        Historical events like the Black Death and the 1918 flu show that pandemics can be some of the most damaging disasters for humanity, killing tens of millions and significant portions of the global population.

        It is sobering to imagine the potential impact of a pandemic pathogen that is much more contagious and deadly than any we’ve seen so far.

        Unfortunately, such a pathogen is possible in principle, particularly in light of advancing biotechnology. Researchers can design and create biological agents much more easily and precisely than before. (More on this below.) As the field advances, it may become increasingly feasible to engineer a pathogen that poses a major threat to all of humanity.

        States or malicious actors with access to these pathogens could use them as offensive weapons or wield them as threats to obtain leverage over others.

        Dangerous pathogens engineered for research purposes could also be released accidentally through a failure of lab safety.

        Either scenario could result in a catastrophic ‘engineered pandemic,’ which we believe could pose an even greater threat to humanity than pandemics that arise naturally, as we argue below.

        Thankfully, few people seek to use disease as a weapon, and even those willing to conduct such attacks may not aim to produce the most harmful pathogen possible. But the combined possibilities of accident, recklessness, desperation, and unusual malice suggest a disturbingly high chance of a pandemic pathogen being released that could kill a very large percentage of the population. The world might be especially at risk during great power conflicts.

        But could an engineered pandemic pose an extinction threat to humanity?

        There is reasonable debate here. In the past, societies have recovered from pandemics that killed as much as 50% of the population, and perhaps more.1

        But we believe future pandemics may be one of the largest contributors to existential risk this century, because it now seems within the reach of near-term biological advances to create pandemics that would kill greater than 50% of the population — not just in a particular area, but globally. It’s possible they could be bad enough to drive humanity to extinction, or at least be so damaging that civilisation never recovers.

        Reducing the risk of biological catastrophes by constructing safeguards against potential outbreaks and preparing to mitigate their worst effects therefore seems extremely important.

        It seems relatively uncommon for people in the broader field of biosecurity and pandemic preparedness to work specifically on reducing catastrophic risks and engineered pandemics. Projects that reduce the risk of biological catastrophe also seem to receive a relatively small proportion of health security funding.2

        In our view, the costs of biological disasters grow nonlinearly with severity because of the increasing potential for the event to contribute to existential risk. This suggests that projects to prevent the gravest outcomes in particular should receive more funding and attention than they currently do.

        In the rest of this section, we’ll discuss how artificial pandemics compare to natural pandemic risks. Later on, we’ll discuss what kind of work can and should be done in this area to reduce the risks.

        We also have a career review of biorisk research, strategy, and policy paths, which gives more specific and concrete advice about impactful roles to aim for and how to enter the field.

        Natural pandemics show how destructive biological threats can be

        Four of the worst pandemics in recorded history were:3

        1. The Plague of Justinian (541-542 CE) is thought to have arisen in Asia before spreading into the Byzantine Empire around the Mediterranean. The initial outbreak may have killed around 6 million people (roughly 3% of the world population)4 and contributed to reversing the territorial gains of the Byzantine empire.
        2. The Black Death (1335-1355 CE) is estimated to have killed 20–75 million people (about 10% of world population) and believed to have had profound impacts on the course of European history.
        3. The Columbian Exchange (1500-1600 CE) was a succession of pandemics, likely including smallpox and paratyphoid, brought by European colonists, which devastated Native American populations. It likely played a major role in the loss of around 80% of Mexico’s native population during the 16th century. Other groups in the Americas appear to have lost even greater proportions of their communities. Some groups may have lost as much as 98% of their people to these diseases.5
        4. The 1918 Influenza Pandemic (1918 CE) spread across almost the whole globe and killed 50–100 million people (2.5%–5% of the world population). It may have been deadlier than either world war.

        These historical pandemics show the potential for mass destruction from biological threats, and they are a threat worth mitigating all on their own. They also show that the key features of a global catastrophe, such as high proportional mortality and civilisational collapse, can be driven by highly destructive pandemics.

        But despite the horror of these past events, it seems unlikely that a natural pandemic could be bad enough on its own to drive humanity to total extinction in the foreseeable future, given what we know of events in natural history.6

        As philosopher Toby Ord argues in the section on natural risks in his book The Precipice, history suggests humanity faces a very low baseline extinction risk — the chance of being wiped out in ordinary circumstances — from natural causes over the course of, say, 100 years.

        That’s because if the baseline risk were around 10% per century, we’d have to conclude we’ve gotten very lucky for the 200,000 years or so of humanity’s existence. The fact of our existence is much less surprising if the risk has been about 0.001% per century.
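        To make the arithmetic concrete, here is a minimal sketch of the calculation (our own illustration of the reasoning, treating each century's extinction risk as independent):

```python
# Rough sketch of the survival argument: if humanity faced an independent
# natural extinction risk p in each of its ~2,000 centuries (~200,000
# years), the chance of surviving to today would be (1 - p) ** 2000.
centuries = 2_000

for p in (0.10, 0.00001):  # 10% vs 0.001% risk per century
    survival = (1 - p) ** centuries
    print(f"risk {p:.3%} per century -> survival probability {survival:.2e}")

# risk 10.000% per century -> survival probability ~3e-92 (astonishingly lucky)
# risk  0.001% per century -> survival probability ~0.98  (unsurprising)
```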

        None of the worst plagues we know about in history was enough to destabilise civilisation worldwide or clearly imperil our species’ future. And more broadly, pathogen-driven extinction events in nature appear to be relatively rare for animals.7

        Is the risk from natural pandemics increasing or decreasing?

        Are we safer from pandemics now than we used to be? Or do developments in human society actually put us at greater risk from natural pandemics?

        Good data on these questions is hard to find. The burden of infectious disease generally in human society is on a downward trend, but this doesn’t tell us much about whether infrequent outbreaks of mass pandemics could be getting worse.

        In the abstract, we can think of many reasons that the risk from naturally arising pandemics might be falling. They include:

        • We have better hygiene and sanitation than past eras, and these will likely continue to improve.
        • We can produce effective vaccinations and therapeutics.
        • We better understand disease transmission, infection, and effects on the body.
        • The human population is healthier overall.

        On the other hand:

        • Trade and air travel allow much faster and wider transmission of disease.8 For example, air travel seems to have played a large role in the spread of COVID-19 from country to country.9 In previous eras, the difficulty of travelling over long distances likely kept disease outbreaks more geographically confined.
        • Climate change may increase the likelihood of new zoonotic diseases.
        • Greater human population density may increase the likelihood that diseases will spread rapidly.
        • Much larger populations of domestic animals can potentially pass diseases on to humans.

        There are likely many other relevant considerations. Our guess is that the frequency of natural pandemics is increasing, but that they’ll be less severe on average.10 A further guess is that the reduction in severity matters more than the increase in frequency, netting out to reduced overall danger. But many open questions remain.

        Engineered pathogens could be even more dangerous

        But even if natural pandemic risks are declining, the risks from engineered pathogens are almost certainly growing.

        This is because advancing technology makes it increasingly feasible to create threatening viruses and infectious agents.11 Accidental and deliberate misuse of this technology is a credible global catastrophic risk and could potentially threaten humanity’s future.

        One way this could play out is if some dangerous actor wanted to bring back catastrophic outbreaks of the past.

        Polio, the 1918 pandemic influenza strain, and most recently horsepox (a close relative of smallpox) have all been recreated from scratch. The genetic sequences of these and other pathogens are publicly available, and the progress and proliferation of biotechnology opens up terrifying opportunities.12

        Beyond the resurrection of past plagues, advanced biotechnology could let someone engineer a pathogen more dangerous than those that have occurred in natural history.

        When viruses evolve, they aren’t naturally selected to be as deadly or destructive as possible. But someone who is deliberately trying to cause harm could intentionally combine the worst features of possible viruses in a way that is very unlikely to happen naturally.

        Gene sequencing, editing, and synthesis are now possible and becoming easier. We’re getting closer to being able to produce biological agents the way we design and produce computers or other products (though how long it takes remains unclear). This may allow people to design and create pathogens that are deadlier or more transmissible, or perhaps have wholly new features. (Read more.)

        Scientists are also investigating what makes pathogens more or less lethal and contagious, which may help us better prevent and mitigate outbreaks.

        But it also means that the information required to design more dangerous pathogens is increasingly available.

        All the technologies involved have potential medical uses in addition to hazards. For example, viral engineering has been employed in gene therapy and vaccines (including some used to combat COVID-19).

        Yet knowledge of how to engineer viruses to be better as vaccines or therapeutics could be misused to develop ‘better’ biological weapons. Properly handling these advances involves a delicate balancing act.

        Hints of the dangers can be seen in the scientific literature. Gain-of-function experiments with influenza suggested that artificial selection could lead to pathogens with properties that enhance their danger.13

        And the scientific community has yet to establish strong enough norms to discourage and prevent the unrestricted sharing of dangerous findings, such as methods for making a virus deadlier. That’s why we warn people going to work in this field that biosecurity involves information hazards. It’s essential for people handling these risks to have good judgement.

        Scientists can make dangerous discoveries unintentionally in lab work. For example, vaccine research can uncover virus mutations that make a disease more infectious. And other areas of biology, such as enzyme research, show how our advancing technology can unlock new and potentially threatening capabilities that haven’t appeared before in nature.14

        In a world of many ‘unknown unknowns,’ we may find many novel dangers.

        So while the march of science brings great progress, it also brings the potential for bad actors to intentionally produce new or modified pathogens. Even with the vast majority of scientific expertise focused on benefiting humanity, a much smaller group can use the community’s advances to do great harm.

        If someone or some group has enough motivation, resources, and sufficient technical skill, it’s difficult to place an upper limit on how catastrophic an engineered pandemic they might one day create. As technology progresses, the tools for creating a biological disaster will become increasingly accessible; the barriers to achieving terrifying results may get lower and lower — raising the risk of a major attack. The advancement of AI, in particular, may catalyse the risk. (See more about this below.)

        Both accidental and deliberate misuse are threats

        We can divide the risks of artificially created pandemics into accidental and deliberate misuse — roughly speaking, imagine a science experiment gone wrong compared to a bioterrorist attack.

        The history of accidents and lab leaks which exposed people to dangerous pathogens is chilling:

        • In 1977, an unusual flu strain emerged that disproportionately sickened young people and was found to be genetically frozen in time from a 1950 strain, suggesting a lab origin from a faulty vaccine trial.
        • In 1978, a lab leak at a UK facility resulted in the last smallpox death.
        • In 1979, an apparent bioweapons lab in the USSR accidentally released anthrax spores that drifted over a town, sickening residents and animals, and killing about 60 people. Though initially covered up, Russian President Boris Yeltsin later revealed it was an airborne release from a military lab accident.
        • In 2014, dozens of CDC workers were potentially exposed to live anthrax after samples meant to be inactivated were improperly killed and shipped to lower-level labs that didn’t always use proper protective equipment.
        • We don’t really know how often this kind of thing happens because lab leaks are not consistently tracked. And there have been many more close calls.

        And history has seen many terrorist attacks and state development of mass-casualty weapons. Incidents of bioterrorism and biological warfare include:

        • In 1763, British forces at Fort Pitt gave blankets from a smallpox ward to Native American tribes, aiming to spread the disease and weaken these communities. It’s unclear if this effort achieved its aims, though smallpox devastated many of these groups.
        • During World War II, the Japanese military’s Unit 731 conducted horrific human experiments and biological warfare in China. They used anthrax, cholera, and plague, killing thousands and potentially many more. The details of these events were only uncovered later.
        • In the 1960s and 1970s, the South African government developed a covert chemical and biological warfare program known as Project Coast. The program aimed to develop biological and chemical agents targeted at specific ethnic groups and political opponents, including efforts to develop sterilisation and infertility drugs.
        • In 1984, followers of the Rajneesh movement contaminated salad bars in Oregon with Salmonella, causing more than 750 infections. It was an attempt to influence an upcoming election.
        • In 2001, shortly after the September 11 attacks, anthrax spores were mailed to several news outlets and two U.S. Senators, causing 22 infections and five deaths.

        So should we be more concerned about accidents or bioterrorism? We’re not sure. There’s not a lot of data to go on, and considerations pull in both directions.

        It may seem that releasing a deadly pathogen on purpose is more concerning. As discussed above, the worst pandemics would most likely be intentionally created rather than emerge by chance. Plus, there are ways to make a pathogen’s release more or less harmful, and an accidental release probably wouldn’t be optimised for maximum damage.

        On the other hand, many more people are well-intentioned and want to use biotechnology to help the world rather than harm it. And efforts to eliminate state bioweapons programs likely reduce the number of potential attackers. (But see more about the limits on these efforts below.) So it seems most plausible that there are more opportunities for a disastrous accident to occur than for a malicious actor to pull off a mass biological attack.

        We guess that, all things considered, the former considerations are the more significant factors.15 So we suspect that deliberate misuse is more dangerous than accidental releases, though both are certainly worth guarding against.

        [Image borrowed from Claire Zabel’s talk on biosecurity.16]

        Overall, the risk seems substantial

        We’ve seen a variety of estimates regarding the chances of an existential biological catastrophe, including the possibility of engineered pandemics.17 Perhaps the best estimates come from the Existential Risk Persuasion Tournament (XPT).

        This project involved getting groups of both subject matter experts and experienced forecasters to estimate the likelihood of extreme events. For biological risks, the ranges of median estimates between forecasters and domain experts were as follows:

        • Catastrophic event (meaning an event in which 10% or more of the human population dies) by 2100: ~1–3%
        • Human extinction event: 1 in 50,000 to 1 in 100
        • Genetically engineered pathogen killing more than 1% of the population by 2100: 4–10%18
        • Note: the forecasters tended to have lower estimates of the risk than domain experts.

        Although they are the best available figures we’ve seen, these numbers have plenty of caveats. The main three are:

        1. There is little evidence that anyone can achieve long-term forecasting accuracy. Previous forecasting work has assessed performance for questions that would resolve in months or years, not decades.
        2. There was a lot of variation in estimates within and between groups — some individuals gave numbers many times, or even many orders of magnitude, higher or lower than one another.19
        3. The domain experts were selected for those already working on catastrophic risks — the typical expert in some areas of public health, for example, might generally rate extreme risks lower.

        It’s hard to be confident about how to weigh up these different kinds of estimates and considerations, and we think reasonable people will come to different conclusions.

        Our view is that given how bad a catastrophic pandemic would be, how few limits there seem to be on how destructive an engineered pandemic could be, and how broadly beneficial mitigation measures are, many more people should be working on this problem than currently are.

        Reducing catastrophic biological risks is highly valuable according to a range of worldviews

        Because we prioritise world problems that could have a significant impact on future generations, we care most about work that will reduce the biggest biological threats — especially those that could cause human extinction or derail civilisation.

        But biosecurity and catastrophic risk reduction could be highly impactful for people with a range of worldviews, because:

        1. Catastrophic biological threats would harm near-term interests too. As COVID-19 showed, large pandemics can bring extraordinary costs to people today, and even more virulent or deadly diseases would cause even greater death and suffering.
        2. Interventions that reduce the largest biological risks are also often beneficial for preventing more common illnesses. Disease surveillance can detect both large and small outbreaks; counter-proliferation efforts can stop both higher- and lower-consequence acts of deliberate misuse; better PPE could prevent all kinds of infections; and so on.

        There is also substantial overlap between biosecurity and other world problems, such as global health (e.g. the Global Health Security Agenda), factory farming (e.g. ‘One Health’ initiatives), and AI.

        How do catastrophic biorisks compare to AI risk?

        Of those who study existential risks, many believe that biological risks and AI risks are the two biggest existential threats. Our guess is that threats from catastrophic pandemics are somewhat less pressing than threats stemming from advanced AI systems.

        But they’re probably not massively less pressing.

        One feature of a problem that makes it more pressing is whether there are tractable solutions to work on in the area. Many solutions in the biosecurity space seem particularly tractable because:

        • There are already large existing fields of public health and biosecurity to work within.
        • The sciences of disease and medicine are well-established.
        • There are many promising interventions and research ideas that people can pursue. (See the next section.)

        We think there are also exciting opportunities to work on reducing risks from AI, but the field is much less developed than the science of medicine.

        The existence of this infrastructure in the biosecurity field may make the work more tractable, but it also makes it arguably less neglected — which would make it a less pressing problem. In part because AI risk has generally been seen as more speculative and would represent an essentially novel threat, fewer people have been working in the area. This has made AI risk more neglected than biorisk.

        In 2023, interest in AI safety and governance began to grow rather rapidly, making these fields somewhat less neglected than they had been previously. But they’re still quite new and so still relatively neglected compared to the field of biosecurity. Since we view more neglected problems as more pressing, this factor probably counts in favour of working on AI risk.

        We also consider problems that are larger in scale to be more pressing. We might measure the scale of the problem purely in terms of the likelihood of causing human extinction or an outcome comparably as bad. 80,000 Hours assesses the risk of an AI-caused existential catastrophe to be between 3% and 50% this century (though there’s a lot of disagreement on this question). Few, if any, researchers we know believe comparable biorisk is that high.

        At the same time, AI risk is more speculative than the risk from pandemics, because we know from direct experience that pandemics can be deadly on a large scale. So some people investigating these questions find biorisk to be a much more plausible threat.

        But in most cases, which problem you choose to work on shouldn’t be determined solely by your view of how pressing it is (though this does matter a lot!). You should also take into account your personal fit and comparative advantage.

        Finally, a note about how these issues relate:

        1. AI progress may be increasing catastrophic biorisk. Some researchers believe that advancing AI capabilities may increase the risk of a biological catastrophe. Jonas Sandbrink at Oxford University, for example, has argued that advanced large language models may lower the barriers to creating dangerous pathogens. AI biological design tools could also eventually enable sophisticated actors to cause even more harm than they otherwise would.
        2. There is overlap in the policy space between working to reduce biorisks and AI risks. Both require balancing the risk and reward of emerging technology, and the policy skills needed to succeed in these areas are similar. You can potentially pursue a career reducing risks from both frontier technologies.

        If your work can reduce risks on both fronts, then you might view the problems as more similarly pressing.

        There are clear actions we can take to reduce these risks

        Biosecurity and pandemic preparedness are multidisciplinary fields. To address these threats effectively, we need a range of approaches, including:

        • Technical and biological researchers to investigate and develop tools for controlling outbreaks
        • Entrepreneurs and industry professionals to develop and implement these
        • Strategic researchers and forecasters to develop plans
        • People in government to pass and implement policies aimed at reducing biological threats

        Specifically, you could:

        • Work with government, academia, industry, and international organisations to improve the governance of gain-of-function research involving potential pandemic pathogens, commercial DNA synthesis, and other research and industries that may enable the creation of (or expand access to) particularly dangerous engineered pathogens
        • Strengthen international commitments to not develop or deploy biological weapons, e.g. the Biological Weapons Convention (see below)
        • Develop new technologies that can mitigate or detect pandemics, or the use of biological weapons,20 including:
          • Broad-spectrum testing, therapeutics, and vaccines — and ways to develop, manufacture, and distribute all of these quickly in an emergency21
          • Detection methods, such as wastewater surveillance, that can find novel and dangerous outbreaks
          • Non-pharmaceutical interventions, such as better personal protective equipment
          • Other mechanisms for impeding high-risk disease transmission, such as anti-microbial far UVC light
        • Deploy and otherwise promote the above technologies to protect society against pandemics and to lower the incentives for trying to create one
        • Improve information security to protect biological research that could be dangerous in the wrong hands
        • Investigate whether advances in AI will exacerbate biorisks, and explore potential solutions to this challenge

        The broader field of biosecurity and pandemic preparedness has made major contributions to reducing catastrophic risks. Many of the best ways to prepare for more probable but less severe outbreaks will also reduce the worst risks.

        For example, if we develop broad-spectrum vaccines and therapeutics to prevent and treat a wide range of potential pandemic pathogens, this will be widely beneficial for public health and biosecurity. But it also likely decreases the risk of the worst-case scenarios we’ve been discussing — it’s harder to launch a catastrophic bioterrorist attack on a world that is prepared to protect itself against the most plausible disease candidates. And if any state or other actor who might consider manufacturing such a threat knows the world has a high chance of being protected against it, they have even less reason to try in the first place.

        Similar arguments can be made about improved PPE, some forms of disease surveillance, and indoor air purification.

        But if your focus is preventing the worst-case outcomes, you may want to focus on particular interventions within biosecurity and pandemic prevention over others.

        Some experts in this area, such as MIT biologist Kevin Esvelt, believe that the best interventions for reducing the risk from human-made pandemics will come from the world of physics and engineering, rather than biology.

        This is because for every biological countermeasure that reduces pandemic risk, such as vaccines, there may be tools in the biological sciences to overcome that countermeasure — just as viruses can evolve to evade vaccine-induced immunity.

        And yet, there may be hard limits to the ability of biological threats to overcome physical countermeasures. For instance, it seems plausible that there is simply no viable way to design a virus that can penetrate sufficiently secure personal protective equipment or survive under far-UVC light. If this argument is correct, then these or similar interventions could provide some of the strongest protection against the biggest pandemic threats.

        Two example ways to reduce catastrophic biological risks

        We illustrate two specific examples of work to reduce catastrophic biological risks below, though note that many other options are available (and may even be more tractable).

        1. Strengthen the Biological Weapons Convention

        The principal defence against proliferation of biological weapons among states is the Biological Weapons Convention. The vast majority of eligible states have signed or ratified it.

        Yet some states that signed or ratified the convention have also covertly pursued biological weapons programmes. The leading example was the Biopreparat programme of the USSR,22 which at its height spent billions and employed tens of thousands of people across a network of secret facilities.23

        Its activities are alleged to have included industrial-scale production of weaponised agents like plague, smallpox, and anthrax. They even reportedly succeeded in engineering pathogens for increased lethality, multi-resistance to therapeutics, evasion of laboratory detection, vaccine escape, and novel mechanisms of disease not observed in nature.24 Other past and ongoing violations in a number of countries are widely suspected.25

        The Biological Weapons Convention faces ongoing difficulties:

        • The convention lacks verification mechanisms for countries to demonstrate their compliance, and the technical and political feasibility of verification is fraught.
        • It also lacks an enforcement mechanism, so there are no consequences even if a state were out of compliance.
        • The convention struggles for resources. It has only a handful of full-time staff, and many states do not fulfil their financial obligations. The 2017 meeting of states’ parties was only possible thanks to overpayment by some states, and the 2018 meeting had to be cut short by a day due to insufficient funds.26

        Working to improve the convention’s effectiveness, increasing its funding, or promoting new international efforts that better achieve its aims could help reduce the risk of a major biological catastrophe.

        2. Govern dual-use research of concern

        As discussed above, some well-meaning research has the potential to increase catastrophic risks. Such research is often called ‘dual-use research of concern,’ since the research could be used in either beneficial or harmful ways.

        The primary concerns are that dangerous pathogens could be accidentally released or dangerous specimens and information produced by the research could fall into the hands of bad actors.

        Gain-of-function experiments by Yoshihiro Kawaoka and Ron Fouchier raised concerns in 2011. They published results showing they had modified avian flu so that it could spread between ferrets — raising fears that it could also be made transmissible between humans.

        The synthesis of horsepox is a more recent case. Good governance of this kind of research remains more an aspiration than a reality.

        Individual investigators often have a surprising amount of discretion when carrying out risky experiments. It’s plausible that typical scientific norms are not well suited to appropriately managing the dangers intrinsic to some of this work.

        Even in the best case, where the scientific community is composed solely of people who only perform work they sincerely believe is on balance good for the world, we might still face the unilateralist’s curse. This occurs when a single individual mistakenly concludes that a dangerous course of action should be taken, even when all their peers have ruled it out. Disaster becomes much more likely, because it only takes one person making an incorrect risk assessment to impose major costs on the rest of society.
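
        To see why even rare individual errors matter at the scale of a whole field, consider a stylised calculation (our illustration only; the numbers and the independence assumption are ours, not the article’s). Suppose each of $n$ independent actors misjudges the risk of a dangerous experiment with probability $p$, and any single actor proceeding is enough to cause harm. Then:

        $$P(\text{at least one proceeds}) = 1 - (1 - p)^n$$

        Even with a low individual error rate of $p = 0.01$, a field of $n = 100$ actors gives $1 - 0.99^{100} \approx 0.63$, so a unilateral mistake becomes more likely than not.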

        And in reality, scientists are subject to other incentives besides the public good, such as publications, patents, and prestige. It would be better if safety-enhancing discoveries were made before easier-to-make dangerous discoveries arise. But the existing incentives may encourage researchers to conduct their work in ways that aren’t always optimal for the social good.

        Governance and oversight can mitigate risks posed by individual foibles or mistakes, but the track record of such oversight bodies in identifying concerns in advance is imperfect. The gain-of-function work on avian flu was initially funded by the NIH (the same body that would subsequently declare a moratorium on gain-of-function experiments) and passed institutional checks and oversight; concerns only arose after the results of the work became known.

        When reporting the horsepox synthesis to the WHO advisory committee on variola virus research, the scientists noted:

        Professor Evans’ laboratory brought this activity to the attention of appropriate regulatory authorities, soliciting their approval to initiate and undertake the synthesis. It was the view of the researchers that these authorities, however, may not have fully appreciated the significance of, or potential need for, regulation or approval of any steps or services involved in the use of commercial companies performing commercial DNA synthesis, laboratory facilities, and the federal mail service to synthesise and replicate a virulent horse pathogen.

        One challenge is that there is no bright line one can draw to rule out all concerning research. List-based approaches, such as select agent lists or the seven experiments of concern, may be increasingly unsuited to current and emerging practice, particularly in such a dynamic field.

        But it’s not clear what the alternative to necessarily incomplete lists would be. The consequences of scientific discovery are often not obvious ahead of time, so it may be difficult to say which kinds of experiments pose the greatest risks or in which cases the benefits outweigh the costs.

        Even if more reliable governance could be constructed, its geographic scope would remain a challenge. Practitioners inclined toward more concerning work could migrate to more permissive jurisdictions. And even if one journal declines to publish a new finding on public safety grounds, a researcher can resubmit to another journal with laxer standards.27

        But we believe these challenges are surmountable.

        Research governance can adapt to modern challenges. Greater awareness of biosecurity issues can be spread through the scientific community. We can construct better means of risk assessment than blacklists (cf. Lewis et al. (2019)). Broader cooperation can mitigate some of the dangers of the unilateralist’s curse. There is ongoing work in all of these areas, and we can continue to improve practices and policies.


        What jobs are available?

        For our full article on pursuing work in biosecurity, you can read our biosecurity research and policy career review.

        If you want to focus on catastrophic pandemics within biosecurity, it might be easier to first work on broader efforts that have more mainstream support, and then transition to more targeted projects later. If you are already working in biosecurity and pandemic preparedness (or a related field), you might want to advocate for a greater focus on measures that reduce risk robustly across the board, including in the worst-case scenarios.

        The world could be doing a lot more to reduce the risk of natural pandemics on the scale of COVID-19. It might be easiest to push for interventions targeted at this threat before looking to address the less likely, but more catastrophic possibilities. On the other hand, potential attacks or perceived threats to national security often receive disproportionate attention from governments compared to standard public health threats, so there may be more opportunities to reduce risks from engineered pandemics under some circumstances.

        To get a sense of what kinds of roles you might take on, you can check out our job board for openings related to reducing biological threats. This isn’t comprehensive, but it’s a good place to start:


          View all opportunities

          Want to work on reducing risks of the worst biological disasters? We want to help.

          We’ve helped people formulate plans and find resources, and put them in touch with mentors. If you want to work in this area, apply for our free one-on-one advising service.

          Apply for advising

          We thank Gregory Lewis for contributing to this article, and Anemone Franz and Elika Somani for comments on the draft.


          The post Preventing catastrophic pandemics appeared first on 80,000 Hours.

          Benjamin Todd on the history of 80,000 Hours https://80000hours.org/after-hours-podcast/episodes/benjamin-todd-history-80k/ Fri, 01 Dec 2023 21:20:31 +0000 https://80000hours.org/?post_type=podcast_after_hours&p=84722 The post Benjamin Todd on the history of 80,000 Hours appeared first on 80,000 Hours.


          Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe https://80000hours.org/podcast/episodes/jeff-sebo-ethics-digital-minds/ Wed, 22 Nov 2023 21:00:29 +0000 https://80000hours.org/?post_type=podcast&p=84537 The post Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe appeared first on 80,000 Hours.

