The 80,000 Hours team (Author archive) - 80,000 Hours
https://80000hours.org/author/80000hours/

An apology for our mistake with the book giveaway (5 January 2024)
https://80000hours.org/2024/01/an-apology-for-our-mistake-with-the-book-giveaway/

80,000 Hours runs a programme where subscribers to our newsletter can order a free, paperback copy of a book to be sent to them in the mail. Readers choose between getting a copy of our career guide, Toby Ord’s The Precipice, and Will MacAskill’s Doing Good Better.

This giveaway has been open to all newsletter subscribers since early 2022. The number of orders we get depends on the number of new subscribers that day, but in general, we get around 150 orders a day.

Over the past week, however, we received an overwhelming number of orders. The offer of the free book appears to have been promoted by some very popular posts on Instagram, which generated an unprecedented amount of interest for us.

While we’re really grateful that these people were interested in what we have to offer, we couldn’t handle the massive uptick in demand. We’re a nonprofit funded by donations, and everything we provide is free. We had budgeted to run the book giveaway projecting the demand would be in line with what it’s been for the past two years. Instead, we had more than 20,000 orders in just a few days — which we anticipated would run through around six months of the book giveaway’s budget.
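For a rough sense of the scale of the spike, here is a quick back-of-the-envelope sketch using the figures above (around 150 orders a day versus more than 20,000 in a few days). It illustrates order volume only, not the giveaway's actual budget accounting:

```python
# Back-of-the-envelope using the post's own figures: how many days of
# typical demand arrived during the spike? (The comparison is our
# illustration, not the giveaway's actual budget accounting.)
typical_orders_per_day = 150
spike_orders = 20_000

days_equivalent = spike_orders / typical_orders_per_day
print(f"~{days_equivalent:.0f} days of typical demand (~{days_equivalent / 30:.1f} months)")
# Roughly 133 days of normal order volume, compressed into a few days.
```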

We’ve now paused taking new orders, and we’re unsure when we’ll be able to re-open them.

Because of this large spike in demand, we’ve had to tell many people who subscribed to our newsletter hoping to get a physical book that we’re not able to complete their order.

We deeply regret this mistake. We should have had a better process in place to pause the book giveaway much sooner, so that no orders were placed that we couldn’t fulfil, and so no one signed up to the newsletter thinking they would get a hard copy of a book when they wouldn’t.

Update — January 26, 2024:

While we may still have to pause orders if we’re overwhelmed again in the future, we have made several changes to the book giveaway process to address this problem.

  1. We added new terms and conditions to the giveaway so new subscribers are better informed about the availability of books in certain formats, our data privacy policy, and the circumstances in which we may be unable to fulfil paperback orders.
  2. We improved the system that alerts us to unexpectedly high volumes of paperback book orders so that more of us are aware sooner.
  3. We developed clearer internal recommendations and procedures for when and how to pause the giveaway.

These changes will help us respond more quickly to these situations in the future, which we hope will limit the number of orders placed that we cannot fulfil.

Our readers’ trust in our services is extremely important to us, and we’re very sorry to let down the people who won’t get the books they signed up for.

We understand that this might make some readers trust us less. All we can say is that we commit to doing better in the future. We’re reviewing our book giveaway processes so that going forward, we will be able to consistently fulfil all orders as expected.

If you’re reading this and you were one of the users affected:

  • Please accept our sincerest apologies for not being able to deliver on our promise to you.
  • You can still access the 80,000 Hours career guide for free on our website.

We’d also like to address any concerns readers may have about the processing of user data that we obtained during this period:

If you’d like to unsubscribe from our newsletter, because of this or any other reason, you can do so at any time by clicking the ‘unsubscribe’ link in the footer of any email from us. If you unsubscribe, we won’t email you again.

User data collected by us will be processed in accordance with our privacy policy, which you can read on Effective Ventures’ website here.

We will never sell any user data, for any reason.

Users who ordered a book will also have provided some of their personal data to our distribution partner, Impact Books, such as the delivery address for their book and their email. You can read their privacy policy here. Like us, they will never sell your data, for any reason.

We’ve asked Impact Books to delete all the personal data they had gathered from any user whose order we did not fulfil, and they have agreed to do so. You can therefore be confident that we will not benefit in any way from your provision of this data.

We hope this clears up some potential concerns in this area.

We apologise once again for not sending out all the requested books, and we’re really sorry that we let people down.

We think our book giveaway is a valuable service, so we’re motivated to get it restarted in a sustainable way — and we will strive to make sure we avoid a mistake like this in the future. We also hope that some of those who are disappointed to not receive a paperback book can make use of other versions of our advice, which are (and will remain) available for free online.

Update — Book giveaway re-opened on January 26, 2024:

We have re-opened our book giveaway for free paperback orders! If you have already signed up to our newsletter, you can order a paperback book by emailing book.giveaway@80000hours.org. Otherwise, you can get your book by subscribing to our newsletter as normal.

We greatly appreciate the patience of our new subscribers while we prepared to re-open the giveaway.

Announcing our plan to become an independent organisation (29 December 2023)
https://80000hours.org/2023/12/announcing-plan/

We are excited to share that 80,000 Hours has officially decided to spin out as a project from our parent organisations and establish an independent legal structure.

80,000 Hours is a project of the Effective Ventures group — the umbrella term for Effective Ventures Foundation and Effective Ventures Foundation USA, Inc., which are two separate legal entities that work together. The group also includes the projects Giving What We Can, the Centre for Effective Altruism, and others.

We’re incredibly grateful to the Effective Ventures leadership and team, and to the other organisations under the group’s umbrella, for all their support, particularly in the last year. They devoted countless hours and enormous effort to helping ensure that we and the other orgs could pursue our missions.

And we deeply appreciate Effective Ventures’ support in our spin-out. They recently announced that all of the other organisations under their umbrella will likewise become their own legal entities; we’re excited to continue to work alongside them to improve the world.

Back in May, we investigated whether it was the right time to spin out of our parent organisations. We’ve considered this option at various points in the last three years.

There have been many benefits to being part of a larger entity since our founding. But as 80,000 Hours and the other projects within Effective Ventures have grown, we concluded we can now best pursue our mission and goals independently. Effective Ventures leadership approved the plan.

Becoming our own legal entity will allow us to:

  • Match our governing structure to our function and purpose
  • Design operations systems that best meet our staff’s needs
  • Reduce interdependence with other entities that raises financial, legal, and reputational risks

There’s a lot for us to do to make this happen. We’re currently in the process of finding a new CEO to lead us in our next chapter. We’ll also need a new board to oversee our work, and new staff for our internal systems team and other growing programmes.

We’re excited to begin this next chapter and to continue providing research and support to help people have high-impact careers!

Preventing catastrophic pandemics (23 April 2020)
https://80000hours.org/problem-profiles/preventing-catastrophic-pandemics/

Some of the deadliest events in history have been pandemics. COVID-19 demonstrated that we’re still vulnerable to these events, and future outbreaks could be far more lethal.

In fact, we face the possibility of biological disasters that are worse than ever before due to developments in technology.

The chances of such catastrophic pandemics — bad enough to potentially derail civilisation and threaten humanity’s future — seem uncomfortably high. We believe this risk is one of the world’s most pressing problems.

And there are a number of practical options for reducing global catastrophic biological risks (GCBRs). So we think working to reduce GCBRs is one of the most promising ways to safeguard the future of humanity right now.

Summary

Scale

Pandemics — especially engineered pandemics — pose a significant risk to the existence of humanity. Though the risk is difficult to assess, some researchers estimate that there is a greater than 1 in 10,000 chance of a biological catastrophe leading to human extinction within the next 100 years, and potentially as high as 1 in 100. (See below.) And a biological catastrophe killing a large percentage of the population is even more likely — and could contribute to existential risk.

Neglectedness

Pandemic prevention is currently under-resourced. Even in the aftermath of the COVID-19 outbreak, spending on biodefense in the US, for instance, has only grown modestly — from an estimated $17 billion in 2019 to $24 billion in 2023.

And little of existing pandemic prevention funding is specifically targeted at preventing biological disasters that could be most catastrophic.

Solvability

There are promising approaches to improving biosecurity and reducing pandemic risk, including research, policy interventions, and defensive technology development.

Why focus your career on preventing severe pandemics?

COVID-19 highlighted our vulnerability to worldwide pandemics and revealed weaknesses in our ability to respond. Despite advances in medicine and public health, around seven million deaths worldwide from the disease have been recorded, and many estimates put the figure far higher.

Historical events like the Black Death and the 1918 flu show that pandemics can be some of the most damaging disasters for humanity, killing tens of millions of people and wiping out significant portions of the global population.

It is sobering to imagine the potential impact of a pandemic pathogen that is much more contagious and deadly than any we’ve seen so far.

Unfortunately, such a pathogen is possible in principle, particularly in light of advancing biotechnology. Researchers can design and create biological agents much more easily and precisely than before. (More on this below.) As the field advances, it may become increasingly feasible to engineer a pathogen that poses a major threat to all of humanity.

States or malicious actors with access to these pathogens could use them as offensive weapons or wield them as threats to obtain leverage over others.

Dangerous pathogens engineered for research purposes could also be released accidentally through a failure of lab safety.

Either scenario could result in a catastrophic ‘engineered pandemic,’ which we believe could pose an even greater threat to humanity than pandemics that arise naturally, as we argue below.

Thankfully, few people seek to use disease as a weapon, and even those willing to conduct such attacks may not aim to produce the most harmful pathogen possible. But the combined possibilities of accident, recklessness, desperation, and unusual malice suggest a disturbingly high chance of a pandemic pathogen being released that could kill a very large percentage of the population. The world might be especially at risk during great power conflicts.

But could an engineered pandemic pose an extinction threat to humanity?

There is reasonable debate here. In the past, societies have recovered from pandemics that killed as much as 50% of the population, and perhaps more.1

But we believe future pandemics may be one of the largest contributors to existential risk this century, because it now seems within the reach of near-term biological advances to create pandemics that would kill greater than 50% of the population — not just in a particular area, but globally. It’s possible they could be bad enough to drive humanity to extinction, or at least be so damaging that civilisation never recovers.

Reducing the risk of biological catastrophes by constructing safeguards against potential outbreaks and preparing to mitigate their worst effects therefore seems extremely important.

It seems relatively uncommon for people in the broader field of biosecurity and pandemic preparedness to work specifically on reducing catastrophic risks and the threat of engineered pandemics. Projects that reduce the risk of biological catastrophe also seem to receive a relatively small proportion of health security funding.2

In our view, the costs of biological disasters grow nonlinearly with severity because of the increasing potential for the event to contribute to existential risk. This suggests that projects to prevent the gravest outcomes in particular should receive more funding and attention than they currently do.

In the rest of this section, we’ll discuss how artificial pandemics compare to natural pandemic risks. Later on, we’ll discuss what kind of work can and should be done in this area to reduce the risks.

We also have a career review of biorisk research, strategy, and policy paths, which gives more specific and concrete advice about impactful roles to aim for and how to enter the field.

Natural pandemics show how destructive biological threats can be

Four of the worst pandemics in recorded history were:3

  1. The Plague of Justinian (541–542 CE) is thought to have arisen in Asia before spreading into the Byzantine Empire around the Mediterranean. The initial outbreak is estimated to have killed around 6 million people (about 3% of world population)4 and contributed to reversing the territorial gains of the Byzantine Empire.
  2. The Black Death (1335–1355 CE) is estimated to have killed 20–75 million people (about 10% of world population) and is believed to have had profound impacts on the course of European history.
  3. The Columbian Exchange (1500–1600 CE) was a succession of pandemics, likely including smallpox and paratyphoid, brought by European colonists, which devastated Native American populations. It likely played a major role in the loss of around 80% of Mexico’s native population during the 16th century. Other groups in the Americas appear to have lost even greater proportions of their communities. Some groups may have lost as much as 98% of their people to these diseases.5
  4. The 1918 Influenza Pandemic (1918 CE) spread across almost the whole globe and killed 50–100 million people (2.5%–5% of the world population). It may have been deadlier than either world war.

These historical pandemics show the potential for mass destruction from biological threats, and they are a threat worth mitigating all on their own. They also show that the key features of a global catastrophe, such as high proportional mortality and civilisational collapse, can be driven by highly destructive pandemics.

But despite the horror of these past events, it seems unlikely that a natural pandemic could be bad enough on its own to drive humanity to total extinction in the foreseeable future, given what we know of events in natural history.6

As philosopher Toby Ord argues in the section on natural risks in his book The Precipice, history suggests humanity faces a very low baseline extinction risk — the chance of being wiped out in ordinary circumstances — from natural causes over the course of, say, 100 years.

That’s because if the baseline risk were around 10% per century, we’d have to conclude we’ve gotten very lucky for the 200,000 years or so of humanity’s existence. The fact of our existence is much less surprising if the risk has been about 0.001% per century.
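To make this concrete, here is a minimal worked version of the argument (our illustration, which simplifies by assuming a constant risk): over humanity's roughly 2,000 centuries, a per-century extinction risk r implies a survival probability of (1 - r)^2000.

```python
# Survival probability over humanity's ~200,000-year (~2,000-century)
# history, assuming a constant per-century extinction risk r.
# The two risk levels are the ones quoted in the text; the constancy
# assumption is a simplification for illustration.
centuries = 2_000

for r in (0.10, 0.00001):  # 10% vs 0.001% per century
    p_survive = (1 - r) ** centuries
    print(f"r = {r:.3%} per century -> P(no extinction so far) = {p_survive:.2g}")

# r = 10% implies a survival probability of ~3e-92 (wildly implausible),
# while r = 0.001% implies ~0.98, consistent with our long track record.
```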

None of the worst plagues we know about in history was enough to destabilise civilisation worldwide or clearly imperil our species’ future. And more broadly, pathogen-driven extinction events in nature appear to be relatively rare for animals.7

Is the risk from natural pandemics increasing or decreasing?

Are we safer from pandemics now than we used to be? Or do developments in human society actually put us at greater risk from natural pandemics?

Good data on these questions is hard to find. The burden of infectious disease generally in human society is on a downward trend, but this doesn’t tell us much about whether infrequent outbreaks of mass pandemics could be getting worse.

In the abstract, we can think of many reasons that the risk from naturally arising pandemics might be falling. They include:

  • We have better hygiene and sanitation than past eras, and these will likely continue to improve.
  • We can produce effective vaccinations and therapeutics.
  • We better understand disease transmission, infection, and effects on the body.
  • The human population is healthier overall.

On the other hand:

  • Trade and air travel allow much faster and wider transmission of disease.8 For example, air travel seems to have played a large role in the spread of COVID-19 from country to country.9 In previous eras, the difficulty of travelling over long distances likely kept disease outbreaks more geographically confined.
  • Climate change may increase the likelihood of new zoonotic diseases.
  • Greater human population density may increase the likelihood that diseases will spread rapidly.
  • Much larger populations of domestic animals can potentially pass diseases on to humans.

There are likely many other relevant considerations. Our guess is that the frequency of natural pandemics is increasing, but that they’ll be less bad on average.10 A further guess is that the fall in average severity matters more than the rise in frequency, netting out to reduced overall danger. There remain many open questions.

Engineered pathogens could be even more dangerous

But even if natural pandemic risks are declining, the risks from engineered pathogens are almost certainly growing.

This is because advancing technology makes it increasingly feasible to create threatening viruses and infectious agents.11 Accidental and deliberate misuse of this technology is a credible global catastrophic risk and could potentially threaten humanity’s future.

One way this could play out is if some dangerous actor wanted to bring back catastrophic outbreaks of the past.

Polio, the 1918 pandemic influenza strain, and most recently horsepox (a close relative of smallpox) have all been recreated from scratch. The genetic sequences of these and other pathogens are publicly available, and the progress and proliferation of biotechnology opens up terrifying opportunities.12

Beyond the resurrection of past plagues, advanced biotechnology could let someone engineer a pathogen more dangerous than those that have occurred in natural history.

When viruses evolve, they aren’t naturally selected to be as deadly or destructive as possible. But someone who is deliberately trying to cause harm could intentionally combine the worst features of possible viruses in a way that is very unlikely to happen naturally.

Gene sequencing, editing, and synthesis are now possible and becoming easier. We’re getting closer to being able to produce biological agents the way we design and produce computers or other products (though how long it takes remains unclear). This may allow people to design and create pathogens that are deadlier or more transmissible, or perhaps have wholly new features. (Read more.)

Scientists are also investigating what makes pathogens more or less lethal and contagious, which may help us better prevent and mitigate outbreaks.

But it also means that the information required to design more dangerous pathogens is increasingly available.

All the technologies involved have potential medical uses in addition to hazards. For example, viral engineering has been employed in gene therapy and vaccines (including some used to combat COVID-19).

Yet knowledge of how to engineer viruses to be better as vaccines or therapeutics could be misused to develop ‘better’ biological weapons. Properly handling these advances involves a delicate balancing act.

Hints of the dangers can be seen in the scientific literature. Gain-of-function experiments with influenza suggested that artificial selection could lead to pathogens with properties that enhance their danger.13

And the scientific community has yet to establish strong enough norms to discourage and prevent the unrestricted sharing of dangerous findings, such as methods for making a virus deadlier. That’s why we warn people going to work in this field that biosecurity involves information hazards. It’s essential for people handling these risks to have good judgement.

Scientists can make dangerous discoveries unintentionally in lab work. For example, vaccine research can uncover virus mutations that make a disease more infectious. And other areas of biology, such as enzyme research, show how our advancing technology can unlock new and potentially threatening capabilities that haven’t appeared before in nature.14

In a world of many ‘unknown unknowns,’ we may find many novel dangers.

So while the march of science brings great progress, it also brings the potential for bad actors to intentionally produce new or modified pathogens. Even with the vast majority of scientific expertise focused on benefiting humanity, a much smaller group can use the community’s advances to do great harm.

If someone or some group has enough motivation, resources, and sufficient technical skill, it’s difficult to place an upper limit on how catastrophic an engineered pandemic they might one day create. As technology progresses, the tools for creating a biological disaster will become increasingly accessible; the barriers to achieving terrifying results may get lower and lower — raising the risk of a major attack. The advancement of AI, in particular, may catalyse the risk. (See more about this below.)

Both accidental and deliberate misuse are threats

We can divide the risks of artificially created pandemics into accidental and deliberate misuse — roughly speaking, imagine a science experiment gone wrong compared to a bioterrorist attack.

The history of accidents and lab leaks which exposed people to dangerous pathogens is chilling:

  • In 1977, an unusual flu strain emerged that disproportionately sickened young people and was found to be genetically frozen in time from a 1950 strain, suggesting a lab origin from a faulty vaccine trial.
  • In 1978, a lab leak at a UK facility resulted in the last smallpox death.
  • In 1979, an apparent bioweapons lab in the USSR accidentally released anthrax spores that drifted over a town, sickening residents and animals, and killing about 60 people. The incident was initially covered up, but Russian President Boris Yeltsin later revealed it was an airborne release from a military lab accident.
  • In 2014, dozens of CDC workers were potentially exposed to live anthrax after samples meant to be inactivated were improperly killed and shipped to lower-level labs that didn’t always use proper protective equipment.
  • We don’t really know how often this kind of thing happens because lab leaks are not consistently tracked. And there have been many more close calls.

And history has seen many terrorist attacks and state development of mass-casualty weapons. Incidents of bioterrorism and biological warfare include:

  • In 1763, British forces at Fort Pitt gave blankets from a smallpox ward to Native American tribes, aiming to spread the disease and weaken these communities. It’s unclear if this effort achieved its aims, though smallpox devastated many of these groups.
  • During World War II, the Japanese military’s Unit 731 conducted horrific human experiments and biological warfare in China. They used anthrax, cholera, and plague, killing thousands and potentially many more. The details of these events were only uncovered later.
  • In the 1980s and early 1990s, the South African government developed a covert chemical and biological warfare programme known as Project Coast. The programme aimed to develop biological and chemical agents targeted at specific ethnic groups and political opponents, including efforts to develop sterilisation and infertility drugs.
  • In 1984, followers of the Rajneesh movement contaminated salad bars in Oregon with Salmonella, causing more than 750 infections. It was an attempt to influence an upcoming election.
  • In 2001, shortly after the September 11 attacks, anthrax spores were mailed to several news outlets and two U.S. Senators, causing 22 infections and five deaths.

So should we be more concerned about accidents or bioterrorism? We’re not sure. There’s not a lot of data to go on, and considerations pull in both directions.

It may seem that releasing a deadly pathogen on purpose is more concerning. As discussed above, the worst pandemics would most likely be intentionally created rather than emerge by chance. Plus, there are ways to make a pathogen’s release more or less harmful, and an accidental release probably wouldn’t be optimised for maximum damage.

On the other hand, many more people are well-intentioned and want to use biotechnology to help the world rather than harm it. And efforts to eliminate state bioweapons programs likely reduce the number of potential attackers. (But see more about the limits on these efforts below.) So it seems most plausible that there are more opportunities for a disastrous accident to occur than for a malicious actor to pull off a mass biological attack.

We guess that, all things considered, the considerations favouring concern about deliberate misuse are more significant.15 So we suspect that deliberate misuse is more dangerous than accidental releases, though both are certainly worth guarding against.

[Image omitted: borrowed from Claire Zabel’s talk on biosecurity.16]

Overall, the risk seems substantial

We’ve seen a variety of estimates regarding the chances of an existential biological catastrophe, including the possibility of engineered pandemics.17 Perhaps the best estimates come from the Existential Risk Persuasion Tournament (XPT).

This project involved getting groups of both subject matter experts and experienced forecasters to estimate the likelihood of extreme events. For biological risks, the median estimates from forecasters and domain experts spanned the following ranges:

  • Catastrophic event (meaning an event in which 10% or more of the human population dies) by 2100: ~1–3%
  • Human extinction event: 1 in 50,000 to 1 in 100
  • Genetically engineered pathogen killing more than 1% of the population by 2100: 4–10%18
  • Note: the forecasters tended to have lower estimates of the risk than domain experts.

Although they are the best available figures we’ve seen, these numbers have plenty of caveats. The main three are:

  1. There is little evidence that anyone can achieve long-term forecasting accuracy. Previous forecasting work has assessed performance for questions that would resolve in months or years, not decades.
  2. There was a lot of variation in estimates within and between groups — some individuals gave numbers many times, or even many orders of magnitude, higher or lower than one another.19
  3. The domain experts were selected for those already working on catastrophic risks — the typical expert in some areas of public health, for example, might generally rate extreme risks lower.

It’s hard to be confident about how to weigh up these different kinds of estimates and considerations, and we think reasonable people will come to different conclusions.

Our view is that given how bad a catastrophic pandemic would be, how few limits there seem to be on how destructive an engineered pandemic could be, and how broadly beneficial mitigation measures are, many more people should be working on this problem than currently are.

Reducing catastrophic biological risks is highly valuable according to a range of worldviews

Because we prioritise world problems that could have a significant impact on future generations, we care most about work that will reduce the biggest biological threats — especially those that could cause human extinction or derail civilisation.

But biosecurity and catastrophic risk reduction could be highly impactful for people with a range of worldviews, because:

  1. Catastrophic biological threats would harm near-term interests too. As COVID-19 showed, large pandemics can bring extraordinary costs to people today, and even more virulent or deadly diseases would cause even greater death and suffering.
  2. Interventions that reduce the largest biological risks are also often beneficial for preventing more common illnesses. Disease surveillance can detect both large and small outbreaks; counter-proliferation efforts can stop both higher- and lower-consequence acts of deliberate misuse; better PPE could prevent all kinds of infections; and so on.

There is also substantial overlap between biosecurity and other world problems, such as global health (e.g. the Global Health Security Agenda), factory farming (e.g. ‘One Health’ initiatives), and AI.

How do catastrophic biorisks compare to AI risk?

Of those who study existential risks, many believe that biological risks and AI risks are the two biggest existential threats. Our guess is that threats from catastrophic pandemics are somewhat less pressing than threats stemming from advanced AI systems.

But they’re probably not massively less pressing.

One feature of a problem that makes it more pressing is whether there are tractable solutions to work on in the area. Many solutions in the biosecurity space seem particularly tractable because:

  • There are already large existing fields of public health and biosecurity to work within.
  • The sciences of disease and medicine are well-established.
  • There are many promising interventions and research ideas that people can pursue. (See the next section.)

We think there are also exciting opportunities to work on reducing risks from AI, but the field is much less developed than the science of medicine.

The existence of this infrastructure in the biosecurity field may make the work more tractable, but it also makes it arguably less neglected — which would make it a less pressing problem. In part because AI risk has generally been seen as more speculative, and it would represent essentially a novel threat, fewer people have been working in the area. This has made AI risk more neglected than biorisk.

In 2023, interest in AI safety and governance began to grow rather rapidly, making these fields somewhat less neglected than they had been previously. But they’re still quite new and so still relatively neglected compared to the field of biosecurity. Since we view more neglected problems as more pressing, this factor probably counts in favour of working on AI risk.

We also consider problems that are larger in scale to be more pressing. We might measure the scale of the problem purely in terms of the likelihood of causing human extinction or an outcome comparably as bad. 80,000 Hours assesses the risk of an AI-caused existential catastrophe to be between 3% and 50% this century (though there’s a lot of disagreement on this question). Few if any researchers we know believe comparable biorisk is that high.

At the same time, AI risk is more speculative than the risk from pandemics, because we know from direct experience that pandemics can be deadly on a large scale. So some people investigating these questions find biorisk to be a much more plausible threat.

But in most cases, which problem you choose to work on shouldn’t be determined solely by your view of how pressing it is (though this does matter a lot!). You should also take into account your personal fit and comparative advantage.

Finally, a note about how these issues relate:

  1. AI progress may be increasing catastrophic biorisk. Some researchers believe that advancing AI capabilities may increase the risk of a biological catastrophe. Jonas Sandbrink at Oxford University, for example, has argued that advanced large language models may decrease the barriers to creating dangerous pathogens. AI biological design tools could also eventually enable sophisticated actors to cause even more harm than they otherwise would.
  2. There is overlap in the policy space between working to reduce biorisks and AI risks. Both require balancing the risk and reward of emerging technology, and the policy skills needed to succeed in these areas are similar. You can potentially pursue a career reducing risks from both frontier technologies.

If your work can reduce risks on both fronts, then you might view the problems as more similarly pressing.

There are clear actions we can take to reduce these risks

Biosecurity and pandemic preparedness are multidisciplinary fields. To address these threats effectively, we need a range of approaches, including:

  • Technical and biological researchers to investigate and develop tools for controlling outbreaks
  • Entrepreneurs and industry professionals to develop and implement these tools
  • Strategic researchers and forecasters to develop plans
  • People in government to pass and implement policies aimed at reducing biological threats

Specifically, you could:

  • Work with government, academia, industry, and international organisations to improve the governance of gain-of-function research involving potential pandemic pathogens, commercial DNA synthesis, and other research and industries that may enable the creation of (or expand access to) particularly dangerous engineered pathogens
  • Strengthen international commitments to not develop or deploy biological weapons, e.g. the Biological Weapons Convention (see below)
  • Develop new technologies that can mitigate or detect pandemics, or the use of biological weapons,20 including:
    • Broad-spectrum testing, therapeutics, and vaccines — and ways to develop, manufacture, and distribute all of these quickly in an emergency21
    • Detection methods, such as wastewater surveillance, that can find novel and dangerous outbreaks
    • Non-pharmaceutical interventions, such as better personal protective equipment
    • Other mechanisms for impeding high-risk disease transmission, such as anti-microbial far UVC light
  • Deploy and otherwise promote the above technologies to protect society against pandemics and to lower the incentives for trying to create one
  • Improve information security to protect biological research that could be dangerous in the wrong hands
  • Investigate whether advances in AI will exacerbate biorisks, and potential solutions to this challenge

The broader field of biosecurity and pandemic preparedness has made major contributions to reducing catastrophic risks. Many of the best ways to prepare for more probable but less severe outbreaks will also reduce the worst risks.

For example, if we develop broad-spectrum vaccines and therapeutics to prevent and treat a wide range of potential pandemic pathogens, this will be widely beneficial for public health and biosecurity. But it also likely decreases the risk of the worst-case scenarios we’ve been discussing — it’s harder to launch a catastrophic bioterrorist attack on a world that is prepared to protect itself against the most plausible disease candidates. And if any state or other actor who might consider manufacturing such a threat knows the world has a high chance of being protected against it, they have even less reason to try in the first place.

Similar arguments can be made about improved PPE, some forms of disease surveillance, and indoor air purification.

But if your focus is preventing the worst-case outcomes, you may want to focus on particular interventions within biosecurity and pandemic prevention over others.

Some experts in this area, such as MIT biologist Kevin Esvelt, believe that the best interventions for reducing the risk from human-made pandemics will come from the world of physics and engineering, rather than biology.

This is because for every biological countermeasure to reduce pandemic risk, such as vaccines, there may be tools in the biological sciences to overcome it — just as viruses can evolve to evade vaccine-induced immunity.

And yet, there may be hard limits to the ability of biological threats to overcome physical countermeasures. For instance, it seems plausible that there may just be no viable way to design a virus that can penetrate sufficiently secure personal protective equipment or to survive under far-UVC light. If this argument is correct, then these or similar interventions could provide some of the strongest protection against the biggest pandemic threats.

Two example ways to reduce catastrophic biological risks

We illustrate two specific examples of work to reduce catastrophic biological risks below, though note that many other options are available (and may even be more tractable).

1. Strengthen the Biological Weapons Convention

The principal defence against proliferation of biological weapons among states is the Biological Weapons Convention. The vast majority of eligible states have signed or ratified it.

Yet some states that signed or ratified the convention have also covertly pursued biological weapons programmes. The leading example was the Biopreparat programme of the USSR,22 which at its height spent billions and employed tens of thousands of people across a network of secret facilities.23

Its activities are alleged to have included industrial-scale production of weaponised agents like plague, smallpox, and anthrax. They even reportedly succeeded in engineering pathogens for increased lethality, multi-resistance to therapeutics, evasion of laboratory detection, vaccine escape, and novel mechanisms of disease not observed in nature.24 Other past and ongoing violations in a number of countries are widely suspected.25

The Biological Weapons Convention faces ongoing difficulties:

  • The convention lacks verification mechanisms for countries to demonstrate their compliance, and the technical and political feasibility of verification is fraught.
  • It also lacks an enforcement mechanism, so there are no consequences even if a state were out of compliance.
  • The convention struggles for resources. It has only a handful of full-time staff, and many states do not fulfil their financial obligations. The 2017 meeting of states’ parties was only possible thanks to overpayment by some states, and the 2018 meeting had to be cut short by a day due to insufficient funds.26

Working to improve the convention’s effectiveness, increasing its funding, or promoting new international efforts that better achieve its aims could help reduce the risk of a major biological catastrophe.

2. Govern dual-use research of concern

As discussed above, some well-meaning research has the potential to increase catastrophic risks. Such research is often called ‘dual-use research of concern,’ since the research could be used in either beneficial or harmful ways.

The primary concerns are that dangerous pathogens could be accidentally released or dangerous specimens and information produced by the research could fall into the hands of bad actors.

Gain-of-function experiments by Yoshihiro Kawaoka and Ron Fouchier raised concerns in 2011. They published results showing they had modified avian flu to spread between ferrets — raising fears that it could also become transmissible between humans.

The synthesis of horsepox is a more recent case. Good governance of this kind of research remains more aspiration than reality.

Individual investigators often have a surprising amount of discretion when carrying out risky experiments. It’s plausible that typical scientific norms are not well-suited to appropriately managing the dangers intrinsic in some of this work.

Even in the best case, where the scientific community is solely composed of those who only perform work they sincerely believe is on balance good for the world, we might still face the unilateralist’s curse. This occurs when just one individual mistakenly concludes that a dangerous course of action should be taken, even though all their peers have ruled it out. The more people who are in a position to act unilaterally, the more likely disaster becomes, because it only takes one person making an incorrect risk assessment to impose major costs on the rest of society.
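To see why a single misjudgement dominates, here is a small illustrative calculation; the misjudgement probability and the group sizes are our assumptions, not figures from this article:

```python
# The unilateralist's curse in miniature: if each of n independent
# actors wrongly judges a dangerous action to be safe with probability
# p, the chance that at least one proceeds is 1 - (1 - p)**n.
# p = 0.05 and the group sizes are illustrative assumptions.
def p_someone_proceeds(n: int, p: float = 0.05) -> float:
    return 1 - (1 - p) ** n

for n in (1, 10, 100):
    print(f"n = {n:>3}: P(at least one proceeds) = {p_someone_proceeds(n):.3f}")

# n =   1: 0.050
# n =  10: 0.401
# n = 100: 0.994
```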

And in reality, scientists are subject to other incentives besides the public good, such as publications, patents, and prestige. It would be better if safety-enhancing discoveries were made before easier-to-make dangerous discoveries arise. But the existing incentives may encourage researchers to conduct their work in ways that aren’t always optimal for the social good.

Governance and oversight can mitigate risks posed by individual foibles or mistakes. The track record of such oversight bodies identifying concerns in advance is imperfect. The gain-of-function work on avian flu was initially funded by the NIH (the same body which would subsequently declare a moratorium on gain-of-function experiments), and passed institutional checks and oversight — concerns only began after the results of the work became known.

When reporting the horsepox synthesis to the WHO advisory committee on variola virus research, the scientists noted:

Professor Evans’ laboratory brought this activity to the attention of appropriate regulatory authorities, soliciting their approval to initiate and undertake the synthesis. It was the view of the researchers that these authorities, however, may not have fully appreciated the significance of, or potential need for, regulation or approval of any steps or services involved in the use of commercial companies performing commercial DNA synthesis, laboratory facilities, and the federal mail service to synthesise and replicate a virulent horse pathogen.

One challenge is that there is no bright line one can draw to rule out all concerning research. List-based approaches, such as select agent lists or the seven experiments of concern, may increasingly be unsuited to current and emerging practice, particularly in such a dynamic field.

But it’s not clear what the alternative to necessarily incomplete lists would be. The consequences of scientific discovery are often not obvious ahead of time, so it may be difficult to say which kinds of experiments pose the greatest risks or in which cases the benefits outweigh the costs.

Even if more reliable governance could be constructed, the geographic scope would remain a challenge. Practitioners inclined toward more concerning work could migrate to more permissive jurisdictions. And even if one journal declines to publish a new finding on public safety grounds, a researcher can resubmit to another journal with laxer standards.27

But we believe these challenges are surmountable.

Research governance can adapt to modern challenges. Greater awareness of biosecurity issues can be spread in the scientific community. We can construct better means of risk assessment than blacklists (cf. Lewis et al. (2019)). Broader cooperation can mitigate some of the dangers of the unilateralist’s curse. There is ongoing work in all of these areas, and we can continue to improve practices and policies.


What jobs are available?

For our full article on pursuing work in biosecurity, you can read our biosecurity research and policy career review.

If you want to focus on catastrophic pandemics in the biosecurity world, it might be easier to work on broader efforts that have more mainstream support first and then transition to more targeted projects later. If you are already working in biosecurity and pandemic preparedness (or a related field), you might want to advocate for a greater focus on measures that reduce risk robustly across the board, including in the worst-case scenarios.

The world could be doing a lot more to reduce the risk of natural pandemics on the scale of COVID-19. It might be easiest to push for interventions targeted at this threat before looking to address the less likely, but more catastrophic possibilities. On the other hand, potential attacks or perceived threats to national security often receive disproportionate attention from governments compared to standard public health threats, so there may be more opportunities to reduce risks from engineered pandemics under some circumstances.

To get a sense of what kinds of roles you might take on, you can check out our job board for openings related to reducing biological threats. This isn’t comprehensive, but it’s a good place to start.


Want to work on reducing risks of the worst biological disasters? We want to help.

We’ve helped people formulate plans, find resources, and put them in touch with mentors. If you want to work in this area, apply for our free one-on-one advising service.

We thank Gregory Lewis for contributing to this article, and thank Anemone Franz and Elika Somani for comments on the draft.


How 80,000 Hours has changed some of our advice after the collapse of FTX (12 May 2023)
https://80000hours.org/2023/05/how-80000-hours-has-changed-some-of-our-advice-after-the-collapse-of-ftx/

Following the bankruptcy of FTX and the federal indictment of Sam Bankman-Fried, many members of the team at 80,000 Hours were deeply shaken. As we have said, we had previously featured Sam on our site as a positive example of earning to give, a mistake we now regret. We were appalled by his conduct and by the harm done to the people who had relied on FTX.

These events were emotionally difficult for many of us on the team, and we were troubled by the implications they might have for our attempts to do good in the world. We had linked our reputation with his, and his conduct left us with serious questions about effective altruism and our approach to impactful careers.

We reflected a lot, had many difficult conversations, and worked through a lot of complicated questions. There’s still a lot we don’t know about what happened, there’s a diversity of views within the 80,000 Hours team, and we expect the learning process to be ongoing.

Ultimately, we still believe strongly in the principles that drive our work, and we stand by the vast majority of our advice. But we did make some significant updates in our thinking, and we’ve changed many parts of the site to reflect them. We wrote this post to summarise the site updates we’ve made and to explain the motivations behind them, both for transparency and to highlight the themes that unify the changes.

We also support many efforts to push for broader changes in the effective altruism community, like improved governance.1 But 80,000 Hours’ written advice is primarily aimed at personal career choices, so we focused on the actions and attitudes of individuals in these updates to the site’s content.

The changes we made

We think that while ambition in doing good is still underrated by many, it’s now more important to emphasise the downsides of ambition. Our articles on being more ambitious and the potential for accidental harm had both mentioned the potential risks, but we’ve expanded on these discussions and made the warnings more salient for the reader.

We expanded our discussion of the reasons against pursuing a harmful career. And we’ve added more discussion in many places, most notably our article on the definition of “social impact” and in a new blog post from Benjamin Todd on moderation, about why we don’t encourage people to focus solely, to the exclusion of all other values, on aiming at what they think is impartially good.

We also used this round of updates to correct some other issues that came up during the reflections on our advice after the collapse of FTX.

The project to make these website changes was implemented by Benjamin Todd, Cody Fenwick, and Arden Koehler, with some input from the rest of the team.

Here is a summary of all the changes we made:

  • We updated our advice on earning to give to include Sam as a negative example, and we discussed at more length the risks of harm or corruption. We express more scepticism about highly ambitious earning to give (though we don’t rule it out, and we think it can still be used for good with the right safeguards).
  • In our article on leverage, we added discussion of the downsides and responsibility that comes with having a lot of leverage, such as the importance of governance and accountability for influential people.
  • We clarified our views on risk and put more emphasis on how you should generally only seek upsides after limiting downsides, for both yourself and the world.
  • We put greater emphasis on respecting a range of values and cultivating character in addition to caring about impact, as well as not doing things that seem very wrong from a commonsense perspective for what one perceives as the “greater good.”
  • We added a lot more advice on how to avoid accidentally doing harm.
  • We took easy opportunities to tone down language around maximisation and optimisation. For instance, we talk about doing more good, or doing good as one important goal among several, rather than the most good. There’s a lot of room for debate about these issues, and we’re not in total agreement on the team about the precise details, but we generally think it’s plausible that Sam’s unusual willingness to fully embrace naive maximising contributed to the decision making behind FTX’s collapse.
  • We slightly reduced how much we emphasise the importance of getting involved with the effective altruism community, which now has a murkier historical impact compared to what we thought before the collapse. (To be clear, we still think there are tons of great things about the EA community, continue to encourage people to get involved in it, and continue to count ourselves as part of it!)
  • We released a newsletter about character virtue and a blog post about moderation.
  • We’ve started doing more vetting of the case studies we feature on the site.
  • We have moved the “Founder of new project tackling top problems” path out of our priority paths and into the “high-impact but especially competitive” section on the career reviews page. This move was in part driven by the change in the funding landscape after the collapse of FTX — but also because the recent proliferation of such new projects likely reduces the marginal value of the typical additional project.
We’re still considering some other changes, such as to our ranking of effective altruism community building and certain other careers, as well as doing even more to emphasise character, governance, oversight, and related issues. But we didn’t want to wait to be ‘done’ with these edits, to the degree we ever will be ‘done’ learning lessons from this episode, before sharing this interim update with readers.

Some of the articles that saw the most changes were:

We’ve also updated some of our marketing materials, mostly by toning down calls to “maximise impact.” We still think it’s really important to be scope sensitive, and that helping more individuals is better than helping fewer — some of the core ideas of effective altruism. But handling these ideas naively, as maximising language may incline some people to do, can be counterproductive and miss important considerations.

We think there’s a lot more we can learn from what happened. Here are some of the reflections members of the 80k team have had:

We think the edits we’ve made are only a small part of the response that’s needed, but hopefully they move things in the right direction.

80,000 Hours two-year review: 2021 and 2022 (8 March 2023)
https://80000hours.org/2023/03/80000-hours-two-year-review-2021-and-2022/

We’ve released our review of our programmes for the years 2021 and 2022. The full document is available for the public, and we’re sharing the summary below.

You can find our previous evaluations here. We have also updated our mistakes page.

80,000 Hours delivers four programmes: website, job board, podcast, and one-on-one. We also have a marketing team that attracts users to these programmes, primarily by getting them to visit the website.

Over the past two years, three of four programmes grew their engagement 2–3x:

  • Podcast listening time in 2022 was 2x higher than in 2020
  • Job board vacancy clicks in 2022 were 3x higher than in 2020
  • The number of one-on-one team calls in 2022 was 3x higher than in 2020

Web engagement hours fell by 20% in 2021, then grew by 38% in 2022 after we increased investment in our marketing.

From December 2020 to December 2022, the core team grew by 78%, from 14 FTEs to 25 FTEs.

Ben Todd stepped down as CEO in May 2022 and was replaced by Howie Lempel.

The collapse of FTX in November 2022 caused significant disruption. As a result, Howie went on leave from 80,000 Hours to be Interim CEO of Effective Ventures Foundation (UK). Brenton Mayer took over as Interim CEO of 80,000 Hours. We are also spending substantially more time liaising with management across the Effective Ventures group, as we are a project of the group.

We had previously held up Sam Bankman-Fried as a positive example of one of our highly rated career paths, a decision we now regret and feel humbled by. We are updating some aspects of our advice in light of our reflections on the FTX collapse and the lessons the wider community is learning from these events.

In 2023, we will make improving our advice a key focus of our work. As part of this, we’re aiming to hire for a senior research role.

We plan to continue growing our main four programmes and will experiment with additional projects, such as relaunching our headhunting service and creating a new, scripted podcast with a different host. We plan to grow the team by roughly 50% in 2023, adding an additional 12 people.

Our baseline non-marketing budget is $8.8m for 2023 and $13.7m for 2024. We’re keen to fundraise above our baseline budget and are also interested in expanding our runway – though we expect that the amount we raise in practice will be heavily affected by the funding landscape.

We would like to increase the number of people and organisations donating to 80,000 Hours, so if you would consider donating, please contact michelle.hutchinson@80000hours.org.

How to choose where to donate (9 November 2016)
https://80000hours.org/articles/best-charity/

If you want to make a difference, and are happy to give toward wherever you think you can do the most good (regardless of cause area), how do you choose where to donate? This is a brief summary of the most useful tips we have.

How to choose an effective charity

First, plan your research

    One big decision to make is whether to do your own research or delegate your decision to someone else. Below are some considerations.

    If you trust someone else’s recommendations, you can defer to them.

    If you know someone who shares your values and has already put a lot of thought into where to give, then consider simply going with their recommendations.

    But it can be better to do your own research if any of these apply to you:

    • You think you might find something higher impact according to your values than even your best advisor would find (because you have unique values, good research skills, or access to special information — e.g. knowing about a small project a large donor might not have looked into).
    • You think you might be able to productively contribute to the broader debate about which charities should be funded (producing research is a public good for other donors).
    • You want to improve your knowledge of effective altruism and charity evaluation.

    Consider entering a donor lottery.

    A donor lottery allows you to donate into a fund with other small donors, in exchange for a proportional chance to be able to choose where the whole fund gets donated. For example, you might put $20,000 into a fund in exchange for a 20% chance of being able to choose where $100,000 from that fund gets donated.

    Why might you want to do this? If you win the lottery, it’s worthwhile doing a great deal of research into where it’s best to give, to allocate that $100,000 as well as possible. If you don’t win, you don’t have to do any research, and whoever wins the lottery does it instead. In short, it’s probably more efficient for small donors to pool their funds, and for one of them to do in-depth research, rather than for each of them to do a small amount of research. This is because there are some fixed costs of understanding the landscape — it doesn’t generally become 100 times harder to figure out where to donate 100 times the funds.

    Giving What We Can organises donor lotteries once a year.
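
To make the arithmetic concrete, here is a minimal simulation of the example above (hypothetical code, not tied to Giving What We Can or any real lottery platform). It confirms that a donor lottery is expected-value neutral: on average you direct exactly what you contributed, while the research effort is concentrated in the winning case.

```python
import random

def simulate_donor_lottery(contribution, pool_size, trials=100_000):
    """Estimate the average amount a donor directs via a lottery.

    With probability contribution / pool_size the donor wins and
    directs the whole pool; otherwise they direct nothing.
    """
    win_prob = contribution / pool_size
    total_directed = sum(
        pool_size if random.random() < win_prob else 0
        for _ in range(trials)
    )
    return total_directed / trials

# The example from the text: $20,000 into a $100,000 pool.
avg = simulate_donor_lottery(20_000, 100_000)
print(f"Average amount directed per trial: ${avg:,.0f}")  # ≈ $20,000
```

Because the expected amount directed equals the contribution, pooling loses nothing in expectation, while buying a real chance of a pot large enough to justify serious research.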

    If you’re going to do your own research, decide how much you should do.

The more you’re giving as a percentage of your annual income, the more time it’s worth spending on research. Roughly speaking, a 1% donation might be worth a few hours of work, while a 50% donation could be worth a month of research. On the other hand, the more you earn per hour, the less time it may make sense to take off for independent research, since that time might be better spent simply earning and giving more.

    Another factor is how much you expect the research to affect your decisions. For example, if you haven’t thought about this much before, it’s worth doing more research. But even if you have thought about it a lot, bear in mind you could be overconfident in your current views (or things might have changed since you last looked into it), so a bit of research might be a good idea to ensure your donations are doing the most good.

    Finally, younger people should sometimes do more research, since it will help them learn about charity evaluation, which will inform their giving in future years (and perhaps their career decisions as well). As a young person, giving 1% per year and spending a weekend thinking about it is a great way to learn about effective giving. If you’re a bit older, giving 10%, and don’t expect your views to change, then perhaps one or two days of research is worth it. If you’re giving more than 10%, more time is probably justified.
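
As a very rough illustration, here is one way to codify the rule of thumb above in code. The anchor points come from the figures in the text; the interpolation between them is our own assumption, not a precise recommendation.

```python
import math

# Anchor points from the rule of thumb above: fraction of income donated
# mapped to suggested research hours. The in-between interpolation is a
# guess made for illustration, not an official recommendation.
ANCHORS = [
    (0.01, 3),    # ~1% of income: a few hours
    (0.10, 12),   # ~10%: one or two days
    (0.50, 160),  # ~50%: roughly a month of working hours
]

def suggested_research_hours(donation_fraction):
    """Geometrically interpolate research time from donation size."""
    if donation_fraction <= ANCHORS[0][0]:
        return ANCHORS[0][1]
    if donation_fraction >= ANCHORS[-1][0]:
        return ANCHORS[-1][1]
    for (f0, h0), (f1, h1) in zip(ANCHORS, ANCHORS[1:]):
        if f0 <= donation_fraction <= f1:
            t = (math.log(donation_fraction) - math.log(f0)) / (
                math.log(f1) - math.log(f0))
            return h0 * (h1 / h0) ** t

print(round(suggested_research_hours(0.05)))  # ~8 hours for a 5% donation
```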

    Second, choose an effective charity

    If you’re doing your own research, we recommend working through these steps:

    1. Decide which global problems you think are most pressing right now.

    You want to find charities that are working on big but neglected problems, and where there’s a clear route to progress — this is where it’s easiest to have a big impact. If you’re new to 80,000 Hours, learn about how we approach figuring out which global problems are most pressing, or see a list of problems we think especially need attention.

    2. Find the best organisations within your top 2–3 problem areas.

    Look for charities that are well-run, have a great team and potential to grow, and are working on a justified programme.

    Many charitable programmes don’t work, so focus on organisations that do at least one of the following:

    • Implement programmes that have been rigorously tested (most haven’t).
    • Are running pilot programmes that will be tested in the future.
• Would be so valuable if they worked that it’s worth taking a chance on them — even if the likelihood of success is low. Organisations in this category have a ‘high-risk, high-reward’ profile: for example, those doing scientific research or policy advocacy, or those with the potential to grow very rapidly.

    If you’re doing your own intensive research, then at this stage you typically need to talk to people in the area to figure out which organisations are doing good work. One starting point might be our lists of top-recommended organisations.

    3. If you have to break a tie, choose the one that’s furthest from meeting its funding needs.

Some organisations already have a lot of funding, and may not have the capacity to use additional funds effectively. For instance, GiveWell has tried to find a good vaccination charity to fund, but funders like the Gates Foundation take most of the promising opportunities. You can assess an organisation’s room for more funding by looking at where they intend to spend additional donations, either by reading their plans or talking to them.

    This consideration is a bit less important than others: if you support a great organisation working on a neglected problem, then they’ll probably figure out a good way to use the money, even if they get a lot.

    Learn more about how to find effective charities

    • When can small donors make donations that are even more effective than large donors? This article lists situations when small donors have an advantage over large donors — ideally you’d choose one of these situations to focus on. It also includes more thoughts on whether to delegate your decision or do your own research.

    • Tips on how to evaluate charities from GiveWell. Bear in mind that the process for evaluating a large organisation is different from evaluating a startup. With large, stable organisations, you can extrapolate forward from their recent performance. With new and rapidly growing organisations, what matters is the long-term potential upside (and their chances of getting there), more than what they’ve accomplished in the past.

    We are not experts in charity evaluation — but there are people who are! Not every cause area has charity evaluators, but in global health and animal welfare the recommendations are more developed.

Good places to start are the annually updated recommendation lists from evaluators like GiveWell and Animal Charity Evaluators.

    Donating to expert-led funds rather than directly to charities

    The best charity to give to is both hard to determine and constantly changing. So, we think a reasonable option for people who don’t have much time for their own research is to give to expert-managed funds that are aligned with your principles. (Our principles are broadly in line with effective altruism, which is why we highlight effective altruism funds below.)

    When donating to a fund, you choose how to split your giving across different focus areas — global health, animal welfare, community infrastructure, and the long-term future — and an expert committee in each area makes grants, with the aim of selecting the most effective charities. This is a great way to delegate your decision to people who might have a better view of the options, provided you feel reasonably aligned with the committees.

EA Funds options:

• Global Health and Development Fund
• Animal Welfare Fund
• Effective Altruism Infrastructure Fund
• Long-Term Future Fund

    Founders Pledge also has an expert-led fund for climate change.

The Giving What We Can donation platform lists more recommended effective altruism funds.

(Note that EA Funds is a project of the Effective Ventures Foundation, our parent charity, and due to our similar views on how to do the most good, we have received grants from some of these funds in the past.)

    You can also see some notes from our president, Benjamin Todd, on how he would decide where to donate.

    Topping up grants from other donors you broadly agree with

If you prefer to have more control over where your money is going, you could also directly ‘top up’ a particular past grant made by one of the funds you think is effective, or another large donor, such as Open Philanthropy — read more about this option below.

We think the leading foundation that takes an effective altruism approach to giving is Open Philanthropy. (Disclosure: it is our largest funder.) You can learn more about Open Philanthropy’s mindset and research in our interviews with current and former research staff.

    Open Philanthropy has far more research capacity than any individual donor, but you can roughly match the cost effectiveness of its grants without needing to invest much effort at all. One way to do this is by co-funding the same projects, or giving based on what its analysts have learned.

    Open Philanthropy often doesn’t want to provide 100% of an organisation’s funding, so that organisations don’t become too dependent on it alone. This creates a need for smaller donors to ‘top up’ its funding.

Usefully, Open Philanthropy maintains a database of all its grants, which you can filter by year and focus area.

    Also, some grantmakers at Open Philanthropy offer annual giving suggestions for individual donors that you can follow.

    For instance, if you’re interested in giving to support pandemic preparedness, you can get a list of all its grants in that area, read through some recent ones, and donate to an organisation you find attractive and which still has room to absorb more funding.


    Reading the research conducted by other informed donors

    Here are some other resources you could draw on:

• Technical AI safety research: A contributor at the Effective Altruism Forum publishes a review of organisations most years — here’s their December 2021 update.
• Global health and development: GiveWell identifies and recommends charities that are evidence-based, thoroughly vetted, and underfunded. Many of the staff at GiveWell also write about where they are giving personally, and make suggestions for the public. Here’s their post from 2022.
    • Farmed animal welfare: Animal Charity Evaluators uses four criteria to recommend charities they believe most effectively help animals.
    • ‘S-risks’: The German Effective Altruism Foundation has launched its own expert-advised fund focused on the possibility that future technologies could lead to large amounts of suffering.
    • See all posts about where to donate on 80,000 Hours and on the EA Forum.

    Should you give now or later?

    It might be more effective to invest your money, grow it, and donate a larger sum later. We have an article on this, or you can read this more recent and technical exploration of the considerations. Here are all our resources on the ‘now vs later’ question.

    How should you handle taxes and giving?

    If you’re in the US, here’s an introductory guide to giving, taxes, and personal finance, and a more advanced one. You may also be interested in this guide to choosing a donor-advised fund.

    If you’re in the UK, here’s a guide to income tax and donations.

    You can also see Giving What We Can’s article on tax deductibility of donations by country.

    Next steps

What 80,000 Hours learned by anonymously interviewing people we respect https://80000hours.org/2020/06/lessons-from-anonymous-interviews/ Thu, 18 Jun 2020 14:48:27 +0000

    We recently released the fifteenth and final installment in our series of posts with anonymous answers.

    These are from interviews with people whose work we respect and whose answers we offered to publish without attribution.

It features answers to 23 different questions, including ‘How have you seen talented people fail in their work?’ and ‘What’s one way to be successful you don’t think people talk about enough?’

    We thought a lot of the responses were really interesting; some were provocative, others just surprising. And as intended, they spanned a wide range of opinions.

    For example, one person had seen talented people fail by being too jumpy:

    “It seems particularly common in effective altruism for people to be happy to jump ship onto some new project that seems higher impact at the time. And I think that this tendency systematically underestimates the costs of switching, and systematically overestimates the benefits — so you get kind of a ‘grass is greener’ effect.

    In general, I think, if you’re taking a job, you should be imagining that you’re going to do that job for several years. If you’re in a job, and you’re not hating it, it’s going pretty well — and some new opportunity presents itself, I think you should be extremely reticent to jump ship.

    I think there are also a lot of gains from focusing on one activity or a particular set of activities; you get increasing returns for quite a while. And if you’re switching between things often, you lose that benefit.”

    But another thought that you should actually be pretty open to leaving a job after ~6 months:

    “Critically, once you do take a new job — immediately start thinking “is there something else that’s a better fit?” There’s still a taboo around people changing jobs quickly. I think you should maybe stay 6 months in a role just so they’re not totally wasting their time in training you — but the expectation should be that if someone finds out a year in that they’re not enjoying the work, or they’re not particularly suited to it, it’s better for everyone involved if they move on. Everyone should be actively helping them to find something else.

    Doing something you don’t enjoy or aren’t particularly good at for 1 or 2 years isn’t a tragedy — but doing it for 20 or 30 years is.”

    More broadly, the project emphasised the need for us to be careful when giving advice as 80,000 Hours.

    In the words of one guest:

    “trying to give any sort of general career advice — it’s a fucking nightmare. All of this stuff, you just kind of need to figure it out for yourself. Is this actually applying to me? Am I the sort of person who’s too eager to change jobs, or too hesitant? Am I the sort of person who works themselves too hard, or doesn’t work hard enough?”

    This theme was echoed in a bunch of responses (1, 2, 3, 4, 5, 6).

    And this wasn’t the only recurring theme — here are another 12:

    You can find the complete collection here.

    We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

    These quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own.

    All entries in this series

    1. What’s good career advice you wouldn’t want to have your name on?
    2. How have you seen talented people fail in their work?
    3. What’s the thing people most overrate in their career?
    4. If you were at the start of your career again, what would you do differently this time?
    5. If you’re a talented young person how risk averse should you be?
    6. Among people trying to improve the world, what are the bad habits you see most often?
    7. What mistakes do people most often make when deciding what work to do?
    8. What’s one way to be successful you don’t think people talk about enough?
    9. How honest & candid should high-profile people really be?
    10. What’s some underrated general life advice?
    11. Should the effective altruism community grow faster or slower? And should it be broader, or narrower?
    12. What are the biggest flaws of 80,000 Hours?
    13. What are the biggest flaws of the effective altruism community?
    14. How should the effective altruism community think about diversity?
    15. Are there any myths that you feel obligated to support publicly? And five other questions.

Policy and research ideas to reduce existential risk https://80000hours.org/2020/04/longtermist-policy-ideas/ Mon, 27 Apr 2020 22:46:38 +0000

    In his book The Precipice: Existential Risk and the Future of Humanity, 80,000 Hours trustee Dr Toby Ord suggests a range of research and practical projects that governments could fund to reduce the risk of a global catastrophe that could permanently limit humanity’s prospects.

    He compiles over 50 of these in an appendix, which we’ve reproduced below. You may not be convinced by all of these ideas, but they help to give a sense of the breadth of plausible longtermist projects available in policy, science, universities and business.

    There are many existential risks and they can be tackled in different ways, which makes it likely that great opportunities are out there waiting to be identified.

Many of these proposals are discussed in the body of The Precipice. We’ve got a three-hour interview with Toby you could listen to, or you can get a copy of the book mailed to you for free by joining our newsletter.

    Policy and research recommendations

    Engineered Pandemics

    • Bring the Biological Weapons Convention into line with the Chemical Weapons Convention: taking its budget from $1.4 million up to $80 million, increasing its staff commensurately, and granting the power to investigate suspected breaches.
    • Strengthen the WHO’s ability to respond to emerging pandemics through rapid disease surveillance, diagnosis and control. This involves increasing its funding and powers, as well as R&D on the requisite technologies.
• Ensure that all DNA synthesis is screened for dangerous pathogens. If full coverage can’t be achieved through self-regulation by synthesis companies, then some form of international regulation will be needed.
    • Increase transparency around accidents in BSL-3 and BSL-4 laboratories.
    • Develop standards for dealing with information hazards, and incorporate these into existing review processes.
    • Run scenario-planning exercises for severe engineered pandemics.

    Unaligned Artificial Intelligence

    • Foster international collaboration on safety and risk management.
    • Explore options for the governance of advanced AI.
    • Perform technical research on aligning advanced artificial intelligence with human values.
    • Perform technical research on other aspects of AGI safety, such as secure containment and tripwires.

    Asteroids & Comets

• Research the deflection of 1 km+ asteroids and comets, perhaps restricted to methods that couldn’t be weaponised, such as those that don’t lead to accurate changes in trajectory.
    • Bring short-period comets into the same risk framework as near-Earth asteroids.
    • Improve our understanding of the risks from long-period comets.
    • Improve our modelling of impact winter scenarios, especially for 1–10 km asteroids. Work with experts in climate modelling and nuclear winter modelling to see what modern models say.

    Supervolcanic Eruptions

    • Find all the places where supervolcanic eruptions have occurred in the past.
    • Improve the very rough estimates on how frequent these eruptions are, especially for the largest eruptions.
    • Improve our modelling of volcanic winter scenarios to see what sizes of eruption could pose a plausible threat to humanity.
    • Liaise with leading figures in the asteroid community to learn lessons from them in their modelling and management.

    Stellar Explosions

• Build a better model for the threat including known distributions of parameters instead of relying on representative examples. Then perform sensitivity analysis on that model — are there any plausible parameters that could make this as great a threat as asteroids?
• Employ blue-sky thinking about any ways current estimates could be underestimating the risk by a factor of a hundred or more.

    Nuclear Weapons

    • Restart the Intermediate-Range Nuclear Forces Treaty (INF).
    • Renew the New START arms control treaty, due to expire in February 2026.
    • Take US ICBMs off hair-trigger alert (officially called Launch on Warning).
    • Increase the capacity of the International Atomic Energy Agency (IAEA) to verify nations are complying with safeguards agreements.
    • Work on resolving the key uncertainties in nuclear winter modelling.
• Characterise the remaining uncertainties, then use Monte Carlo techniques to show the distribution of outcome possibilities, with a special focus on the worst-case possibilities compatible with our current understanding (a toy sketch of this approach follows this list).
    • Investigate which parts of the world appear most robust to the effects of nuclear winter and how likely civilisation is to continue there.
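
To illustrate what the Monte Carlo step above might look like in practice, here is the deliberately toy sketch referenced in the list. Every distribution and parameter in it is a placeholder invented for illustration, not a published estimate; real work would plug in the characterised uncertainties.

```python
import random
import statistics

def sample_cooling():
    """One draw from a toy nuclear-winter model.

    Both the soot distribution and the climate sensitivity below are
    illustrative placeholders, not real published estimates.
    """
    soot_tg = random.lognormvariate(3.0, 0.8)    # teragrams of soot lofted
    cooling_per_tg = random.uniform(0.05, 0.15)  # °C of cooling per Tg
    return soot_tg * cooling_per_tg              # global mean cooling (°C)

draws = sorted(sample_cooling() for _ in range(100_000))
print(f"median cooling:    {statistics.median(draws):.1f} °C")
print(f"95th percentile:   {draws[int(0.95 * len(draws))]:.1f} °C")
print(f"99.9th percentile: {draws[int(0.999 * len(draws))]:.1f} °C (worst-case tail)")
```

The output of interest is the shape of the right-hand tail rather than the central estimate: with real parameter distributions, the same structure would show which worst cases remain compatible with current understanding.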

    Climate

    • Fund research and development of innovative approaches to clean energy.
    • Fund research into safe geoengineering technologies and geoengineering governance.
    • The US should re-join the Paris Agreement.
    • Perform more research on the possibilities of a runaway greenhouse effect or moist greenhouse effect. Are there any ways these could be more likely than is currently believed? Are there any ways we could decisively rule them out?
    • Improve our understanding of the permafrost and methane clathrate feedbacks.
    • Improve our understanding of cloud feedbacks.
• Better characterise our uncertainty about the climate sensitivity: what can and can’t we say about the right-hand tail of the distribution?
    • Improve our understanding of extreme warming (e.g. 5–20 °C), including searching for concrete mechanisms through which it could pose a plausible threat of human extinction or the global collapse of civilisation.

    Environmental Damage

    • Improve our understanding of whether any kind of resource depletion currently poses an existential risk.
    • Improve our understanding of current biodiversity loss (both regional and global) and how it compares to that of past extinction events.
    • Create a database of existing biological diversity to preserve the genetic material of threatened species.

    General

    • Explore options for new international institutions aimed at reducing existential risk, both incremental and revolutionary.
    • Investigate possibilities for making the deliberate or reckless imposition of human extinction risk an international crime.
    • Investigate possibilities for bringing the representation of future generations into national and international democratic institutions.
    • Each major world power should have an appointed senior government position responsible for registering and responding to existential risks that can be realistically foreseen in the next 20 years.
    • Find the major existential risk factors and security factors — both in terms of absolute size and in the cost-effectiveness of marginal changes.
      • (Editor’s note: existential risk factors are problems, like a shortage of natural resources, that don’t directly risk extinction, but could nonetheless indirectly raise the risk of a disaster. Security factors are the reverse, and might include better mechanisms for resolving disputes between major military powers.)
    • Target efforts at reducing the likelihood of military conflicts between the US, Russia and China.
    • Improve horizon-scanning for unforeseen and emerging risks.
    • Investigate food substitutes in case of extreme and lasting reduction in the world’s ability to supply food.
    • Develop better theoretical and practical tools for assessing risks with extremely high stakes that are either unprecedented or thought to have extremely low probability.
    • Improve our understanding of the chance civilisation will recover after a global collapse, what might prevent this, and how to improve the odds.
    • Develop our thinking about grand strategy for humanity.
    • Develop our understanding of the ethics of existential risk and valuing the long-term future.

    Learn more
