Unconventional advice (Topic archive) - 80,000 Hours
https://80000hours.org/topic/career-advice-strategy/unconventional-advice/

Emily Oster on what the evidence actually says about pregnancy and parenting
https://80000hours.org/podcast/episodes/emily-oster-pregnancy-parenting-careers/
Thu, 01 Feb 2024

Bryan Caplan on why you should stop reading the news
https://80000hours.org/podcast/episodes/bryan-caplan-stop-reading-the-news/
Fri, 17 Nov 2023

Why many people underrate investigating the problem they work on
https://80000hours.org/2023/07/why-i-think-many-people-underrate-investigating-the-problem-they-work-on/
Mon, 31 Jul 2023

The idea this week: thinking about which world problem is most pressing may matter more than you realise.

I’m an advisor for 80,000 Hours, which means I talk to a lot of thoughtful people who genuinely want to have a positive impact with their careers. One piece of advice I consistently find myself giving is to consider working on pressing world problems you might not have explored yet.

Should you work on climate change or AI risk? Mitigating antibiotic resistance or preventing bioterrorism? Preventing disease in low-income countries or reducing the harms of factory farming?

Your choice of problem area can matter a lot. But I think many people under-invest in forming their own view of which problems are most pressing.

I think there are three main reasons for this:

1. They think they can’t get a job working on a certain problem, so the argument that it’s important doesn’t seem relevant.

I see this most frequently with AI. People think that they don’t have aptitude or interest in machine learning, so they wouldn’t be able to contribute to mitigating catastrophic risks from AI.

But I don’t think this is true. There are potentially really impactful roles for reducing AI risk across many fields beyond machine learning research.

Many of these roles don’t require a background in ML or other technical expertise. In general, I think there are lots of ways to contribute to most problems.

Once you’ve determined which problem you’d like to solve, it’s much easier to identify which paths might suit you best.

2. People often fail to explore different issues because they become focused on one problem, such as climate change or AI. They believe, in my view correctly, that these are among the world’s most crucial issues, but then stop looking for alternatives.

This approach can be limiting. This is particularly true for those skilled in operations, fundraising, or policy-making since these skills are applicable across many issues. Others may have strengths that are especially well suited to a particularly pressing problem. If you’re a little flexible with your cause selection, you’ll increase your chances of finding very impactful work.

For example, Gregory Lewis, who has written articles for 80,000 Hours on biorisk, thinks AI risk is probably the most pressing problem in the world. But his reasoning didn’t stop there. In part because he has a background as a doctor, he concluded that he’s best suited to working on preventing catastrophic pandemics.

3. Some people defer too much to others, including organisations like 80,000 Hours that work on cause prioritisation full time.

This came up recently on our podcast. Lennart Heim, who researches compute governance to reduce risks from AI, initially underestimated the value of his expertise in hardware because he assumed that if it were significant, someone else would already be working on the topic. He later realised that important issues can go unnoticed, and he took the initiative to work on it himself.

There are other cause areas — like US-China relations and improving information security — that 80,000 Hours now prioritises much more than we once did. People with skills in these areas might’ve undervalued their ability to contribute if they had simply deferred to us before we recognised their importance.

Another way this can go wrong is that some people work on problems they haven’t investigated, leading to low motivation and burnout. I’ve advised people who got jobs working in an area just because they heard it’s important, but once they were there, they found it hard to buy into the organisation’s approach to having an impact.


So, how much time should you invest in your cause prioritisation investigation? That’s a tricky question, but we have a blog post that offers some guidance.

If you’re grappling with some of these questions, we recommend that you apply for advising! We’re here to give personalised advice to help our advisees increase their positive impact.

This blog post was first released to our newsletter subscribers.

Join over 350,000 newsletter subscribers who get content like this in their inboxes weekly — and we’ll also send you a free ebook.

Learn more:

Luisa and Keiran on free will, and the consequences of never feeling enduring guilt or shame
https://80000hours.org/after-hours-podcast/episodes/luisa-keiran-free-will-guilt-shame/
Sat, 22 Apr 2023

Spencer Greenberg on stopping valueless papers from getting into top journals
https://80000hours.org/podcast/episodes/spencer-greenberg-stopping-valueless-papers/
Fri, 24 Mar 2023

John McWhorter on key lessons from linguistics, the virtue of creoles, and language extinction
https://80000hours.org/podcast/episodes/john-mcwhorter-language-extinction/
Tue, 20 Dec 2022

Why being open to changing our minds is especially important right now
https://80000hours.org/2022/11/why-being-open-to-changing-our-minds-is-especially-important-right-now/
Fri, 25 Nov 2022

If something surprises you, your view of the world should change in some way.

We’ve argued that you should approach your career like a scientist doing experiments: be willing to test out many different paths and gather evidence about where you can have the most impact.

More generally, this approach of open truth-seeking — being constantly, curiously on the lookout for new evidence and arguments, and always being ready to change our minds — is a virtue we think is absolutely crucial to doing good.

This blog post was first released to our newsletter subscribers.

Join over 350,000 newsletter subscribers who get content like this in their inboxes weekly — and we’ll also mail you a free book!

One of our first-ever podcast episodes was an interview with Julia Galef, author of The Scout Mindset (before she wrote the book!).

Julia argues — in our view, correctly — that it’s easy to end up viewing the world like a soldier, when really you should be more like a scout.

Soldiers have set views and beliefs, and defend those beliefs. When we act like soldiers, we display motivated reasoning: for example, confirmation bias, where we seek out information that supports our existing beliefs and explain away information that counts against them.

Scouts, on the other hand, need to form correct beliefs. So they have to change their minds as they view more of the landscape.

Acting like a scout isn’t always easy:

  • There’s lots of psychological evidence suggesting that we all have cognitive biases that cloud our thinking.
  • It can sometimes be really painful to admit you were wrong or to come to think something unpleasant, even if that’s what the evidence suggests.
  • Even if you know you should change your beliefs, it’s difficult to know how much they should change in response to new evidence — the subject of our interview with Spencer Greenberg.
  • Having good judgement is actually just a difficult skill that needs to be practised and developed over time.

But if we want to form correct beliefs about the world and how it works, we have to try.

And if we want to do good, forming correct beliefs about how our actions will impact others seems pretty crucial.

Why are we talking about this now?

Up until a few weeks ago, we’d held up Sam Bankman-Fried as a positive example of someone pursuing a high-impact career, and had written about how we encouraged him to use a strategy of earning to give.

Sam had pledged to donate 99% of his earnings to charity — and a year ago his net worth was estimated to be more than $20 billion. We were excited about what he might achieve with his philanthropy.

On November 11, Sam’s company, FTX, declared bankruptcy, and its collapse is likely to cause a tremendous amount of harm.

Sam appears to have made decisions which were, to say the least, seriously harmful.

If newspaper reports are accurate, customer deposits that were meant to be safely held by FTX were being used to make risky investments — investments which left FTX owing billions of dollars more than it had.

These reported actions are appalling.

We failed to see this coming.

So this week’s thoughts on the scout mindset are as much a reminder for us at 80,000 Hours as anyone else.

In the coming weeks and months, we want to thoughtfully examine what we believed and why — and in particular, where we were wrong — so we can, where needed, change our views.

There are so many questions for us to consider in order to shape our future actions, and there aren’t yet clear answers to them.

As we learn more about what happened and the wider effects of the collapse of FTX, we’re going to do our best to act like scouts, not soldiers: to defend beliefs only if they’re worthy of defence, and to be prepared and ready to change our minds.

We’ve released a statement regarding the collapse of FTX, and hope to write more on the topic soon.

We’re optimistic that the work of identifying, prioritising, and pursuing solutions to some of the world’s most pressing problems will continue. We hope you’ll be with us in that project.

Learn more

The importance of considering speculative ideas
https://80000hours.org/2022/10/the-importance-of-considering-speculative-ideas/
Sun, 30 Oct 2022

Let’s admit it: some of the things we think about at 80,000 Hours are considered weird by a lot of other people.

Our list of the most pressing problems has some pretty widely accepted concerns, to be sure: we care about mitigating climate change, preventing nuclear war, and ensuring good governance.

But one of our highest priorities is preventing an AI-related catastrophe, which sounds like science fiction to a lot of people. And, though we know less about them, we’re also interested in speculative issues — such as atomically precise manufacturing, artificial sentience, and wild animal suffering. These aren’t typically the kind of issues activists distribute flyers about.

Should it make us nervous that some of our ideas are out of the mainstream? It’s probably a good idea in these cases to take a step back, reexamine our premises, and consult others we trust about our conclusions. But we shouldn’t be too shocked if some of our beliefs end up at odds with common sense — indeed, I think everyone has good reason to be open to believing weird ideas.

This blog post was first released to our newsletter subscribers.

Join over 350,000 newsletter subscribers who get content like this in their inboxes weekly — and we’ll also mail you a free book!

One of the best reasons for this view relates to another of 80,000 Hours’ top priorities: preventing catastrophic pandemics. I’d guess few people think it’s strange to be concerned about pandemics now, as COVID-19 has killed more than 6 million people worldwide and thrown the global economy into chaos.

But 80,000 Hours has been worried about pandemics for a while — we had a podcast episode about the threat in 2017 (with our now-CEO Howie Lempel). It might have seemed odd at that time to be worrying about such an extreme scenario when there are so many important problems to be addressed in the world every day. In fact, 80,000 Hours cofounder Will MacAskill reported in his new book, What We Owe the Future, that he was met with laughter when he pitched pandemic preparedness as a top policy priority to the first minister of Scotland in 2017.

Now, though, it’s clear that failing to heed these warnings was a big mistake; we wish preparing for catastrophic outbreaks of infectious disease had just been normal.

And if your goal is to have a big impact on the world, you’d be well-advised to consider at least some weird ways of going about it. Many of the most obvious ways of doing good already have lots of people working on them.

If you can find a problem or a solution to work on that seems off the beaten path, you may have a better chance at making a difference. One way to do that is to focus on a population whose moral status is often discounted, such as factory farmed animals (maybe even shrimp) — or future people.

This won’t always work out, but ideas that seem weird are more likely to be unduly neglected.

Looking back, there was a time when the threats from nuclear weapons and climate change — which I listed earlier as “widely accepted concerns” — were completely novel and unconventional. So our sense of what might be weird is highly contextual and, in a sense, parochial. We should be reluctant to give too much weight to ‘weirdness’ concerns when thinking about serious matters.

However, there’s certainly something to be said for keeping yourself grounded. When you follow enough arguments to their apparently logical endpoints, you can end up reaching some very bizarre and counterintuitive conclusions. So if you come to conclusions that seem totally disconnected from what you think is valuable, it may be worth operating under the assumption that something in your line of reasoning has gone wrong, even if you can’t pinpoint it.

It would be most surprising, though, if we’ve just now come to the end of moral discovery. Gender equality, racial justice, religious liberty, LGBTQ+ rights, and democracy all took a long time to gain wider acceptance. New moral ideas always seem strange at first. And many people will resist them.

But if you’re comfortable with being a little weird, you might find yourself on an important moral frontier. And if you’re trying to be ambitious about making the world a better place and improving the prospects for future generations, that’s where you should want to be.

Learn more

Know what you’re optimising for
https://80000hours.org/2022/06/know-what-youre-optimising-for/
Wed, 15 Jun 2022


There is (sometimes) such a thing as a free lunch

You live in a world where most people, most of the time, think of things as categorical, rather than continuous. People either agree with you or they don’t. Food is healthy or unhealthy. Your career is ‘good for the world,’ or it’s neutral, or maybe even it’s bad — but it’s only the category that matters, not the size of the benefit or harm. Ideas are wrong, or they are right. Predictions end up confirmed or falsified.

In my view, one of the central ideas of effective altruism is the realisation that ‘doing good’ is not such a binary. That as well as it mattering that we help others at all, it matters how much we help. That helping more is better than helping less, and helping a lot more is a lot better.

For me, this is also a useful framing for thinking rationally. Here, rather than ‘goodness,’ the continuous quantity is truth. The central realisation is that ideas are not simply true or false; they are all flawed attempts to model reality, and just how flawed is up for grabs. If we’re wrong, our response should not be to give up, but to try to be less wrong.

When you realise something is continuous that most people are treating as binary, this is a good indication that you’re in a situation where it’s unusually easy to achieve something you care about. Because if most people don’t see huge differences between options that you do, you can concentrate on the very best options and face little competition from others.

Sometimes the converse is also true: people may treat something as continuous, and work hard at it, despite the returns to working harder actually being very small.

An example that sticks in my mind from my time teaching maths is about how neatly work is presented. Lots of people care about neat work or good presentation, and sometimes there’s a very good reason for this. If work is messy enough that it’s difficult to read, or that the student is making mistakes caused by misreading their own writing, this is important to fix!

The problem is, the returns on neatness suddenly drop off a cliff when the work is clear enough to be easily readable, and yet some students will put huge amounts of effort into making their work look not just clear, but unnecessarily neat.

Worse still, some teachers will praise this additional effort, implying it’s a good thing that someone takes three times as long as they need to on every piece of work just to make it look nice. But it’s usually not — that extra time could be used for learning, or just hanging out with friends!

I remember speaking to some students who were struggling with their workload, only to discover that they were doing each piece of work twice: once to get the maths, and another to copy everything out beautifully to hand in. It broke my heart.

Even when it’s fairly normal to try really hard at something, it’s worth checking that more effort is reliably leading to more of what you care about. That is to say, there are some things you should half-ass with everything you’ve got.

Thinking about these ideas as I tried to help my students — and now as I try to help the people I advise — I’ve noticed two ideas that frequently appear in the advice I give.

  1. Try optimising for something.
  2. Know what you’re optimising for.

In the rest of this article, I describe how I think about applying these two ideas, and the sort of mistakes that I hope they can prevent. I include lots of examples, and most of these are linked to career decisions inspired by real conversations I’ve had, though none were written with a specific person in mind, and all of the names are made up.

I also try to include some more abstract mathematical intuition, made (hopefully) clearer with the addition of some pretty graphs.

At the end of the article, I try to think of ways in which the advice might not apply or be misleading, though you may well generate others as you read, and trying to do so seems like a useful exercise.

Idea #1: Consider optimising for something

You are allowed to try really hard to achieve a thing you care about, even when it’s a thing not that many people try hard to achieve — in some ways, especially in those cases. You don’t have to stop at ‘enough,’ or even at ‘lots’ — you can keep going. You can add More Dakka.

The thought of trying really hard at something feels very natural to some people, including many who I expect might find useful ideas in the rest of the article. But to many others, it feels gross, or unnatural, or in some way ‘not allowed’ — ‘tryhard’ is a term some people even use to insult others! It’s for this last reason that I framed this idea in terms of permission — I don’t think you need it, but if you found the idea off-putting, now you have permission to do it anyway.

Idea #2: Know what you’re optimising for

This idea is about being deliberate in what you’re trying hard to achieve. It’s about trying to ensure that the subject of the majority of your effort is in fact the most important thing. In some sense, like optimising at all, it’s about permission: knowing that you are allowed to realise that one thing is much more important for you to get than all of the others, and trying to get it (even if it’s not the typical thing people want).

Know what you’re optimising for is also, I suspect, often about picking only one thing at a time, even if multiple things are important. Even in cases where picking one thing doesn’t seem best, asking the question “Which one thing should I optimise for?” seems like it might produce useful insights.

People often optimise for the wrong thing

I first saw people repeatedly optimising for the wrong thing when I was teaching. Students care about many things, from status among their peers to getting good enough grades for university. Many of these things are directly rewarded by people that students interact with: parents will praise good grades; other students will let you know what they think of you; and some teachers will be fairly transparent about who they think the smart kids are (even if they try to hide it).

Importantly, though several of these things are correlated with learning, none of them are perfect indicators of actually learning. Even though most people agree to some extent that one of the major purposes of school is learning, learning has a really weak reward signal, and it’s easy to drift through school without really trying to learn.

There’s a difference between doing things that are somewhat correlated with things you want (or even doing things that you expect to lead to things you want), and trying really unusually hard to actually get what you want. Sometimes working out what you actually want can be really hard — for many, working out what one ultimately values can be a lifetime’s work. However, I’ve been frequently surprised, during my time as an advisor, by how often it’s been sufficient to just ask:

It looks like you’re trying to achieve X here. Is X really the thing you want?

The mistake of optimising for not quite the thing you want can be particularly easy to miss if the thing is useful in general, but in this instance is not useful for you. For one thing, it’s hard to internally notice without specifically looking for it. But you’re also less likely to have others point out this mistake, because things that are useful in general seem more ‘normal’ to have as a goal. For instance, appearing high status seems pretty useful, and it’s a goal that many people have to some extent, so who’s going to stop and ask you whether you really endorse playing as many status games as you are?

Perhaps a more relevant example is that I often see (usually young) effective altruists optimising for impact per unit of time, rather than for the total impact they expect to have over their career. They ask themselves what the most impactful thing they can do right now is, and then do that. This often works well, and there are many worse heuristics to use. Unfortunately, it’s not always the case that trying to do the very best thing right now puts you in the best position to do the most good overall.

People seem to accept this when it comes to going to university. Choosing to do an undergraduate degree is to some extent like choosing to take a negative salary job — which usually doesn’t produce any useful output to others — purely to learn a lot and set yourself up well to achieve things later. For many people, this is a great idea! But then something strange happens when people graduate. For an altruist, taking a role in a for-profit company where you’ll gain a whole bunch of useful skills can look very unattractive, as you won’t be having any direct impact. Taking a salary hit for an opportunity to learn a ton also doesn’t look good (that is, unless the opportunity is called ‘grad school,’ in which case it looks fine again). Neither of these strategies are necessarily best, but they are at least worth considering! The lost impact or salary at the outset might be made up for many times over if you’re able to access more impactful opportunities later.

The law of equal and opposite advice applies in many places, and this is one of them. Just as you might make the mistake of under-investing in yourself, you can also stay in the ‘building up to have a big impact later’ phase for too long. Someone I advised not too long ago referred to themself as “an option value addict,” which I thought was a great way to frame this idea. While the idea of option value — that it can be useful to preserve your ability to choose something later — is a really valuable one, it’s only valuable to keep options that you actually have some chance of choosing. The smaller the chance that you ever take a particular option, the less valuable it is to preserve it — so thinking about how likely you personally are to use it ends up being important.

For example, it might be worthwhile for some people to keep an extremely low profile on all forms of social media in case a spicy social media presence prevents them from later working for an intelligence agency or running for office. But if you have absolutely no intention of ever working in government, this reason doesn’t apply to you! (There are, of course, other reasons one might want to limit social media exposure.)

Trying to optimise for too many things can lead to optimising for nothing in particular

As well as optimising for the wrong things, I often speak to people who are shooting for too many things at once. This typically plays out in one of two ways:

  • People try to optimise for so many things that they don’t end up making progress on any.
  • People just don’t optimise at all — because when so many things seem important, where do you even start?

In both cases, this often ends up with people trying to find an option that looks at least kind-of good according to multiple different criteria. Doing well on many uncorrelated criteria is pretty hard. This often leads to only one option being considered… and that option not looking great.
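To put rough numbers on why doing well on many uncorrelated criteria is so hard, here’s a minimal sketch. The 25% threshold and the criterion counts are illustrative assumptions, not figures from this post:

```python
# Illustrative only: if an option independently lands in the top 25%
# on each criterion, the chance it clears the bar on all of them at
# once shrinks geometrically with the number of criteria.

def p_all_criteria(p_single: float, n_criteria: int) -> float:
    """Probability an option clears the bar on every one of
    n independent, uncorrelated criteria."""
    return p_single ** n_criteria

for n in [1, 2, 3, 5]:
    print(f"{n} criteria: {p_all_criteria(0.25, n):.4f}")
```

With five independent criteria, fewer than one option in a thousand qualifies, which is one way to see why ‘looks kind-of good on everything’ shortlists tend to collapse to a single mediocre option.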

What might this look like?

The examples below have been inspired by conversations I’ve had. Each involves a hypothetical person describing an option which seems pretty good. It might even be the best option they have. But all of these pretty good options follow the pattern of ‘this thing looks good for many different reasons’ — and ‘looks good for many reasons’ misses the importance of scale: that doing much, much better in one way is often better than doing a little better in several ways at the same time. The people in the examples would benefit from considering what their decision would look like if they picked one source of value, and tried to get as much of that as possible.

Alex

If I join this cleantech startup, I will be contributing to the fight against climate change. It’s also a startup, so there’s some chance it will go really well — so this is also an earning-to-give strategy, and I might learn some things by being there.

  • If I’m hoping to pursue a ‘hits-based’ earning-to-give strategy as a startup founder or early-stage employee, almost all the expected value is going to come from the outcomes where the project really takes off. If I look around the startup space for other options, how likely does it seem that this is the one that will take off? Can I find a much better opportunity if I drop the requirement that it has to be in cleantech?
  • When I really reflect on which causes seem important, I realise that I’m quite likely to make my donations to reducing global catastrophic biological risks, rather than climate charities. There’s a lot of need for founders in the biosecurity space, and my skills and earnings won’t be that useful in the next few years, so maybe the learning value from being part of an early-stage startup is the most important consideration here. Does the cleantech startup look best on that basis, or is there somewhere else I might be able to learn much more, even if the primary motive of the founders is profit rather than climate change?

Luisa

If I do this data science in healthcare internship, I’ll learn some useful machine learning skills, and I might be able to directly contribute to reducing harm from heart disease.

  • Developing my machine learning skills seems like the most important thing for me to focus on, given what I want to work on after graduating. It’s not clear that this internship is going to be particularly helpful — I’m probably just going to end up cleaning data. I don’t learn well without structure though. Could I find someone to supervise or mentor me through a machine learning project?
  • I’m pretty sure I’ll learn loads during summer; I’ve done really well at teaching myself programming so far and would probably learn even more if I didn’t do the internship. But I don’t want to have to move back into my parents’ house in the middle of nowhere where I’ll be miserable, and the pay from the internship will mean I can afford to stay in a city, see my friends, and keep motivated. If the main thing I’m getting from the internship is money, can I apply for a grant? Or can I find something shorter which will still pay me enough, or something where I’ll be writing more code even if it’s not in healthcare?

Benjamin

This role isn’t directly relevant to the cause I think is most important, but it’s still helping somewhat, and it’s fairly well paid so I can also contribute with my donations.

  • If I just took the highest-salary job I could, how much more would I be able to donate? Would that do more good than my direct work in my current role? I think my donations are directly saving a lot of lives, so I should at least run the numbers.
  • I’m giving away a decent fraction of my salary anyway, so I’m happy to live on less than this job is giving me. Did I restrict my options too much by looking for such a high salary? I should look at whether there are any jobs I could take where I’d be able to do much more good directly than the total of my current work and donations are doing now.

When facing a situation with multiple potential sources of value, you might be able to get outsized gains by just pushing really hard on one of them. In particular, it’s possible to get gains so big that they more than outweigh losses elsewhere.

It’s not always the case that you can completely trade off different good things against each other — many people, for example, want to have at least some interest in their work. But it is sometimes the case, and it’s worth noticing when you’re in one of those situations. In particular, if the different good things you’re achieving are all roughly described as ‘positive effects on the world,’ you can estimate the size of the effects and see how they trade off against each other. What matters is that you’re doing good, not how you’re doing it. Of course, be careful not to take that last part too far.

The ‘alternative framings’ in the examples above all replace optimising for nothing in particular with just optimising for one thing. The other things either got dropped entirely, or were only satisficed,3 rather than optimised for. This isn’t an accident. Picking one thing forces you to be deliberate about which thing you shoot for, and it makes it seem possible to actually optimise. I think those benefits alone are enough to at least consider just picking one thing.

But I actually suspect that something even stronger is true: often just having a single goal is best.

The intuition here is that when you value things differently to the population average, your best options are likely to be skewed towards the things you care relatively more about. Markets are fairly efficient for average preferences, but when your preferences are different to the average, you might find big inefficiencies. For example, if you’re househunting and you absolutely love cooking but never work from home, it’s worth looking for places that have unusually big kitchens compared to the size of the other rooms. Most people are willing to pay more for bigger rooms, or a home office — if you don’t need those things, don’t pay for them!

Let’s sketch some graphs to try to see what’s going on here. Consider the case where you care about two things — let’s say salary and interestingness. (Often you’ll care about more than two things, but 2D plots are easier to sketch, and I suspect that the effect I sketch below is even stronger in higher dimensions.) You might expect the job market to look something like Figure 1:

Initial distribution of jobs
Figure 1. Initial distribution of jobs

Let’s assume that the average person cares equally about salary and interestingness, and rates them by just adding up the two scores. When this is the case, we should expect that higher-salaried jobs that are more interesting will be harder to get.

In Figure 2, I’ve colour coded jobs that are easy to get as black/purple and jobs that are harder to get as orange/yellow. But what if I care much more about my job being interesting than about it paying well? In that case, the best jobs for me won’t be quite the same as the hardest to get. I’ve shown this preference in Figure 3 by colour coding a different plot from bright yellow (perfect for me) to dark purple (terrible for me). I assumed that I still care about salary, but that interest is three times as important, so to rank the jobs I multiplied the interest score by three before adding salary.

Jobs colour coded by competitiveness
Figure 2. Jobs colour coded by competitiveness
Jobs colour coded by personal preference
Figure 3. Jobs colour coded by personal preference

I want to look for jobs that are easier for me to get (darker on Figure 2), and that I’ll actually want (lighter on Figure 3). The easiest jobs for me to get are in the bottom left, which doesn’t help much, as I don’t want these. The jobs I want most are in the top right, which also doesn’t help much as these are hardest to get. If my theory is correct, I should get the best tradeoffs between these two things by focusing hard on the thing I care more about than average (interest), while not worrying as much about the thing I care less about than average (salary). This would tell me to look first in the bottom right of the graph.

It’s a little hard to tell from just these two figures exactly how well the theory is doing, so let’s make things a bit easier to see in Figure 4 below. First, I removed the top 10% most popular jobs among the general public, to represent some jobs being competitive enough to not even be worth trying. I then also removed the bottom 50% of the jobs according to my preferences, to represent wanting to look for something better than average. Both of these cutoffs are arbitrary, but the conclusion doesn’t change when you pick different ones.

Jobs I'll be able to get that I also want
Figure 4. Jobs I’ll be able to get that I also want

As expected, the best options I’ll actually be able to get are very interesting, low-salary jobs.
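The toy model behind Figures 1–4 can be sketched in a few lines of code. The 3× interest weighting and the 10%/50% cutoffs come from the text above; the uniform 0–10 scores and the number of jobs are my own assumptions, since the article doesn’t specify the distributions.

```python
# A sketch of the toy job-market model from Figures 1-4.
# Assumed details: uniform 0-10 scores and 1,000 jobs are guesses;
# the 3x interest weight and the 10%/50% cutoffs come from the article.
import random

random.seed(0)

# Figure 1: a cloud of jobs, each scored on salary and interestingness.
jobs = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(1000)]

def competitiveness(job):
    # Figure 2: the average person rates jobs by simply adding the two
    # scores, so higher-sum jobs are harder to get.
    salary, interest = job
    return salary + interest

def my_score(job):
    # Figure 3: my preference weights interest three times as heavily.
    salary, interest = job
    return salary + 3 * interest

# Figure 4: drop the top 10% most competitive jobs (not worth trying for)
# and the bottom 50% by my own preference (not worth taking).
comp_cutoff = sorted(competitiveness(j) for j in jobs)[int(0.9 * len(jobs))]
pref_cutoff = sorted(my_score(j) for j in jobs)[len(jobs) // 2]

candidates = [j for j in jobs
              if competitiveness(j) < comp_cutoff and my_score(j) >= pref_cutoff]

# The surviving jobs should skew towards high interest and modest salary.
avg_salary = sum(s for s, _ in candidates) / len(candidates)
avg_interest = sum(i for _, i in candidates) / len(candidates)
print(f"{len(candidates)} candidates; "
      f"mean salary {avg_salary:.1f}, mean interest {avg_interest:.1f}")
```

Running this, the mean interest score among the surviving jobs comes out well above the mean salary score, which is the bottom-right skew the figures describe.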

In practice, all of the tradeoffs above will be much less clean. For example, preferences over the different factors probably shouldn’t be linear, certainly not in the case of salary. Even so, the conclusion stands: if your preferences differ in some way from the average, some of the best places to look to exploit the difference are the extremes.

When do I expect this not to apply?

Multiplicative factors

In the sorts of situations I describe above, the total value tends to come from the values from each different consideration being added up: my job being interesting makes me a bit happier, and so does being paid more; donating money to effective charities saves lives, and so does working for one of those charities. In these cases, less of one thing pretty directly gets traded for more of another. Even in these cases, it can still be worth getting to some minimum level,3 if you get most of the gains from getting to that level and/or it’s easy.

Sometimes though, success looks more like a bunch of factors multiplied together than a bunch of things added together. When this is the case, it becomes really important that none of those factors end up getting set too low, which can be catastrophic.
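A small numerical example makes the contrast concrete. The factor scores below are purely illustrative, not from the article: the point is only that pushing one factor very low barely dents an additive total, but craters a multiplicative one.

```python
# Illustrative comparison of additive vs multiplicative value models.
# The specific numbers are made up for the example.

def additive(factors):
    # Value is the sum of the factors: a low factor trades off smoothly.
    return sum(factors)

def multiplicative(factors):
    # Value is the product: any factor near zero drags everything down.
    product = 1
    for f in factors:
        product *= f
    return product

balanced = [6, 6, 6]
lopsided = [10, 10, 1]   # two factors maxed out, one (say, health) neglected

print(additive(balanced), additive(lopsided))              # 18 vs 21
print(multiplicative(balanced), multiplicative(lopsided))  # 216 vs 100
```

Additively, the lopsided plan looks slightly better (21 vs 18); multiplicatively, the neglected factor makes it much worse (100 vs 216), which is why a factor that multiplies everything else must never be allowed to get too low.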

In my view, the most important example of something that can be a multiplier on everything else you’re doing is personal health and wellbeing, especially when it is in danger of dropping below a certain level. Burnout is already a big risk when you’re optimising for doing as much as possible to help, especially among people who really care about others. In fact, one of my biggest concerns in writing this piece is that it might make this risk higher.

In some sense, we can frame this problem as a mistake of optimising for the wrong thing: impact right now instead of impact over the long run. But on this topic, the thing I care most about is not what it says about optimisation. I care most that you take care of yourself as your number one priority, and there are good resources offering perspectives on this risk, as well as ideas for how to reduce it.

Very good might be good enough

You’ll often find that as you keep trying to push the envelope further, it gets harder and harder to make progress. At some point then, even after you’ve seen substantial gains from deciding to optimise at all, you may reach a point where effort on the most important thing is going to pay off less than effort on something else.

This might happen because there are fewer and fewer people you can learn from. It could be that you’re now making far fewer mistakes, and the fewer mistakes you make, the harder it is to catch and eliminate the ones that remain. Maybe it’s just that you’re starting to enter the domain of people who are really trying, and competition is heating up. Whatever the reason, there’s a chance that this is the time to pick a second thing, and push on that too. In particular, when it comes to personal skill development, not only can it be easier to get extremely good at two things than truly world-class at one, but the combined skill set might also make you quite special.

Next steps

People who know what they are optimising for might ask themselves things like:

  • Is what I’m trying to achieve in this situation the right thing?
  • Am I trying to achieve multiple things at once? Is that the best strategy?
  • Does the thing I’m trying to achieve actually lead to something I want?
  • What would it look like if I focused on the most important thing and dropped the others?

It might be worth picking some aspect of your life and asking yourself those questions now. Did one work particularly well, or can you think of an alternative question that works better for you?

After reading this article, you may well think that this kind of mindset isn’t well-suited to the way you think. If that’s the case, that’s fine! Hopefully you now at least have a different perspective you can look at some decisions with. Even if it seems unlikely you’ll use it often, it might shed some light on decisions made by people like me.

The post Know what you’re optimising for appeared first on 80,000 Hours.

Nova DasSarma on why information security may be critical to the safe development of AI systems https://80000hours.org/podcast/episodes/nova-dassarma-information-security-and-ai-systems/ Tue, 14 Jun 2022 21:46:23 +0000 https://80000hours.org/?post_type=podcast&p=78027 The post Nova DasSarma on why information security may be critical to the safe development of AI systems appeared first on 80,000 Hours.