Computer Science PhD (Topic archive) - 80,000 Hours
https://80000hours.org/topic/careers/other-careers/computer-science-phd/

Software and tech skills
https://80000hours.org/skills/software-tech/

In a nutshell:

You can start building software and tech skills by trying out learning to code, and then doing some programming projects before applying for jobs. You can apply (as well as continue to develop) your software and tech skills by specialising in a related area, such as technical AI safety research, software engineering, or information security. You can also earn to give, and this in-demand skill set has great backup options.

Key facts on fit

There’s no single profile for being great at software and tech skills. It’s particularly cheap and easy to try out programming (which is a core part of this skill set) via classes online or in school, so we’d suggest doing that. And if you enjoy thinking systematically, like building things, or have good quantitative skills, those are all good signs.

Why are software and tech skills valuable?

By “software and tech” skills we basically mean what your grandma would call “being good at computers.”

When investigating the world’s most pressing problems, we’ve found that in many cases there are software-related bottlenecks.

For example, machine learning (ML) engineering is a core skill needed to contribute to AI safety technical research. Experts in information security are crucial to reducing the risks of engineered pandemics, as well as other risks. And software engineers are often needed by nonprofits, whether they’re working on reducing poverty or mitigating the risks of climate change.

Also, having skills in this area means you’ll likely be highly paid, offering excellent options to earn to give.

Moreover, basic programming skills can be extremely useful whatever you end up doing. You’ll find ways to automate tasks or analyse data throughout your career.

What does a career using software and tech skills involve?

A career using these skills typically involves three steps:

  1. Learn to code with a university course or self-study and then find positions where you can get great mentorship. (Read more about how to get started.)
  2. Optionally, specialise in a particular area, for example, by building skills in machine learning or information security.
  3. Apply your skills to helping solve a pressing global problem. (Read more about how to have an impact with software and tech.)

There’s no general answer about when to switch from a focus on learning to a focus on impact. Once you have some basic programming skills, you should look for positions that both further improve your skills and have an impact, and then decide based on which specific opportunities seem best at the time.

Software and tech skills can also be helpful in other, less directly related career paths, like being an expert in AI hardware (for which you’ll also need a specialist knowledge skill set) or founding a tech startup (for which you’ll also need an organisation-building skill set). Being good with computers is also often among the skills required for quantitative trading.

Programming also tends to come in handy in a wide variety of situations and jobs; there will be other great career paths that will use these skills that we haven’t written about.

How to evaluate your fit

How to predict your fit in advance

Some indications you’ll be a great fit include:

  • The ability to break down problems into logical parts and generate and test hypotheses
  • Willingness to try out many different solutions
  • High attention to detail
  • Broadly good quantitative skills

The best way to gauge your fit is just to try out programming.

It seems likely that the very best software engineers are significantly more productive than the average — and we’d guess this is also true for other technical roles using software. In particular, these very best software engineers are often people who spend huge amounts of time practising. This means that if you enjoy coding enough to want to do it both as a job and in your spare time, you are likely to be a good fit.

How to tell if you’re on track

If you’re at university or in a bootcamp, it’s especially easy to tell if you’re on track. Good signs are that you’re succeeding at your assigned projects or getting good marks. An especially good sign is that you’re progressing faster than many of your peers.

In general, a great indicator of your success is that the people you work with most closely are enthusiastic about you and your work, especially if those people are themselves impressive!

If you’re building these skills at an organisation, signs you’re on track might include:

  • You get job offers at organisations you’d like to work for.
  • You’re promoted within your first two years.
  • You receive excellent performance reviews.
  • You’re asked to take on progressively more responsibility over time.
  • After some time, you’re becoming someone on your team who people look to for solving their problems, and who people ask to teach them how to do things.
  • You’re building things that others are able to use successfully without your input.
  • Your manager / colleagues suggest you might take on more senior roles in the future.
  • You ask your superiors for their honest assessment of your fit and they are positive (e.g. they tell you you’re in the top 10% of people they can imagine doing your role).

How to get started building software and tech skills

Independently learning to code

As a complete beginner, you can write a Python program in less than 20 minutes that reminds you to take a break every two hours.
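
For instance, here’s a minimal sketch of what such a program could look like, using only Python’s standard library (a real version might pop up a desktop notification rather than printing to the terminal):

```python
# break_reminder.py — a minimal sketch of a two-hourly break reminder.
import time

BREAK_INTERVAL_SECONDS = 2 * 60 * 60  # two hours

while True:
    time.sleep(BREAK_INTERVAL_SECONDS)  # wait two hours...
    print("Time to take a break! Stand up and stretch.")  # ...then remind
```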

A great way to learn the very basics is by working through a free beginner course like Automate the Boring Stuff with Python by Al Sweigart.

Once you know the fundamentals, you could try taking an intro to computer science or intro to programming course. If you’re not at university, there are plenty of courses online, such as:

Don’t be discouraged if your code doesn’t work the first time — that’s what normally happens when people code!

A great next step is to try out doing a project with other people. This lets you test out writing programs in a team and working with larger codebases. It’s easy to come up with programming projects to do with friends — you can see some examples here.

Once you have some more experience, contributing to open-source projects in particular lets you work with very large existing codebases.

Attending a coding bootcamp

We’ve advised many people who managed to get junior software engineer jobs in less than a year by going to a bootcamp.

Coding bootcamps are focused on taking people with little knowledge of programming to as highly paid a job as possible within a couple of months. This is a great entry route if you don’t already have much background, though some claim the long-term prospects are not as good as if you had studied at university, or independently in a particularly thorough way, because you’ll lack a deep understanding of computer science. Course Report is a great guide to choosing a bootcamp. Be careful to avoid low-quality bootcamps. To find out more, read our interview with an App Academy instructor.

Studying at university

Studying computer science at university (or another subject involving lots of programming) is a great option because it allows you to learn to code in an especially structured way, at a time when the opportunity cost of your time is relatively low.

It will also give you a better theoretical understanding of computing than a bootcamp (which can be useful for getting the most highly-paid and intellectually interesting jobs), a good network, some prestige, and a better understanding of lower-level languages like C. Having a computer science degree also makes it easier to get a US work visa if you’re not from the US.

Doing internships

If you can find internships, ideally at the sorts of organisations you might want to work for to build your skills (like big tech companies or startups), you’ll gain practical experience and the key skills you wouldn’t otherwise pick up from academic degrees (e.g. using version control systems and powerful text editors). Take a look at our list of companies with software and machine learning internships.

AI-assisted coding

As you’re getting started, it’s probably worth thinking about how developments in AI are going to affect programming in the future — and getting used to AI-assisted coding.

We’d recommend trying out GitHub Copilot, which writes code for you based on your comments. Cursor is a popular AI-assisted code editor based on VS Code.

You can also just ask AI chat assistants for help. ChatGPT is particularly helpful (although only if you use the paid version).

We think it’s reasonably likely that many software and tech jobs in the future will be heavily based on using tools like these.

Building a specialty

Depending on how you’re going to use software and tech skills, it may be useful to build up your skills in a particular area. Here’s how to get started in a few relevant areas:

If you’re currently at university, it’s worth checking if you can take an ML course (even if you’re not majoring in computer science).

But if that’s not possible, here are some suggestions of places you might start if you want to self-study the basics:

PyTorch is a very common package used for implementing neural networks, and probably worth learning! When I was first learning about ML, my first neural network was a 3-layer convolutional neural network with L2 regularisation classifying characters from the MNIST database. This is a pretty common first challenge and a good way to learn PyTorch.
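
To make that concrete, here’s a rough sketch (not taken from any particular course) of the kind of model described above — two convolutional layers plus a linear classifier, with L2 regularisation applied via the optimiser’s weight_decay argument. The layer sizes are illustrative, and a dummy batch stands in for the real MNIST data you’d normally load via torchvision:

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A 3-layer network for 28x28 greyscale digits (illustrative sizes)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, 10)      # 10 digit classes

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
# weight_decay applies the L2 regularisation mentioned above.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch shaped like MNIST:
images, labels = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```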

You may also need to learn some maths.

The maths of deep learning relies heavily on calculus and linear algebra, and statistics can be useful too — although generally learning the maths is much less important than programming and basic, practical ML.
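
As a small illustration of how those areas show up, the gradient descent step at the heart of most deep learning training uses both at once: the weights form a vector (linear algebra), and the update direction comes from differentiating the loss (calculus):

```latex
% One step of gradient descent on loss L with learning rate \eta:
w_{t+1} = w_t - \eta \nabla_w L(w_t)
```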

Again, if you’re still at university we’d generally recommend studying a quantitative degree (like maths, computer science, or engineering), most of which will cover all three areas pretty well.

If you want to actually get good at maths, you have to be solving problems. So, generally, the most useful thing that textbooks and online courses provide isn’t their explanations — it’s a set of exercises to try to solve in order, with some help if you get stuck.

If you want to self-study (especially if you don’t have a quantitative degree) here are some possible resources:

You might be able to find resources that cover all these areas, like Imperial College’s Mathematics for Machine Learning.

Most people get started in information security by studying computer science (or similar) at a university, and taking some cybersecurity courses — although this is by no means necessary to be successful.

You can get an introduction through the Google Foundations of Cybersecurity course. The full Google Cybersecurity Professional Certificate series is also worth working through to learn more about relevant technical topics.

For more, take a look at how to try out and get started in information security.

Data science combines programming with statistics.

One way to get started is by doing a bootcamp. Data science bootcamps are a similar deal to programming bootcamps, although they tend to mainly recruit science PhDs. If you’ve just done a science PhD and don’t want to continue with academia, this is a good option to consider (although you should probably consider other ways of using your software and tech skills first). Similarly, you can learn data analysis, statistics, and modelling by taking the right graduate programme.

Data scientists are well paid — offering the potential to earn to give — and have high job satisfaction.

To learn more, see our full career review of data science.

Depending on how you’re aiming to have an impact with these skills (see the next section), you may also need to develop other skills. We’ve written about some other relevant skill sets:

For more, see our full list of impactful skills.

Once you have these skills, how can you best apply them to have an impact?

The problem you work on is probably the biggest driver of your impact. The first step is to make an initial assessment of which problems you think are most pressing (even if you change your mind over time, you’ll need to decide where to start working).

Once you’ve done that, the next step is to identify the highest-potential ways to use software and tech skills to help solve your top problems.

There are five broad categories here:

While some of these options (like protecting dangerous information) will require building up some more specialised skills, being a great programmer will let you move around most of these categories relatively easily, and the earning to give options mean you’ll always have a pretty good backup plan.

Find jobs that use software and tech skills

See our curated list of job opportunities for this path.

    View all opportunities

    Career paths we’ve reviewed that use these skills

    Read next:  Explore other useful skills

    Want to learn more about the most useful skills for solving global problems, according to our research? See our list.


Lennart Heim on the compute governance era and what has to come after
https://80000hours.org/podcast/episodes/lennart-heim-compute-governance/

AI safety technical research
https://80000hours.org/career-reviews/ai-safety-researcher/
    Progress in AI — while it could be hugely beneficial — comes with significant risks. Risks that we’ve argued could be existential.

    But these risks can be tackled.

    With further progress in AI safety, we have an opportunity to develop AI for good: systems that are safe, ethical, and beneficial for everyone.

    This article explains how you can help.

    In a nutshell: Artificial intelligence will have transformative effects on society over the coming decades, and could bring huge benefits — but we also think there’s a substantial risk. One promising way to reduce the chances of an AI-related catastrophe is to find technical solutions that could allow us to prevent AI systems from carrying out dangerous behaviour.

    Pros

    • Opportunity to make a significant contribution to a hugely important area of research
    • Intellectually challenging and interesting work
    • The area has a strong need for skilled researchers and engineers, and is highly neglected overall

    Cons

    • Due to a shortage of managers, it can be difficult to get jobs, and it might take you some time to build the required career capital and expertise
    • You need a strong quantitative background
    • It might be very difficult to find solutions
    • There’s a real risk of doing harm

    Key facts on fit

    You’ll need a quantitative background and should probably enjoy programming. If you’ve never tried programming, you may be a good fit if you can break problems down into logical parts, generate and test hypotheses, possess a willingness to try out many different solutions, and have high attention to detail.

    If you already:

    • Are a strong software engineer, you could apply for empirical research contributor roles right now (even if you don’t have a machine learning background, although that helps)
    • Could get into a top 10 machine learning PhD, that would put you on track to become a research lead
    • Have a very strong maths or theoretical computer science background, you’ll probably be a good fit for theoretical alignment research

    Recommended

    If you are well suited to this career, it may be the best way for you to have a social impact.

    Review status

    Based on a medium-depth investigation 

    Thanks to Adam Gleave, Jacob Hilton and Rohin Shah for reviewing this article. And thanks to Charlie Rogers-Smith for his help, and his article on the topic — How to pursue a career in technical AI alignment.

    Why AI safety technical research is high impact

    As we’ve argued, in the next few decades, we might see the development of hugely powerful machine learning systems with the potential to transform society. This transformation could bring huge benefits — but only if we avoid the risks.

    We think that the worst-case risks from AI systems arise in large part because AI systems could be misaligned — that is, they aim to do things that we don’t want them to do. In particular, we think they could be misaligned in such a way that they develop (and execute) plans that pose risks to humanity’s ability to influence the world, even when we don’t want that influence to be lost.

    We think this means that these future systems pose an existential threat to civilisation.

    Even if we find a way to avoid this power-seeking behaviour, there are still substantial risks — such as misuse by governments or other actors — which could be existential threats in themselves.

    Want to learn more about risks from AI? Read the problem profile.

    We think that technical AI safety could be the highest-impact career path we’ve identified to date. That’s because it seems like a promising way of reducing risks from AI. We’ve written an entire article about what those risks are and why they’re so important.

    Read more about preventing an AI-related catastrophe

    There are many ways in which we could go about reducing the risks that these systems might pose. But one of the most promising may be researching technical solutions that prevent unwanted behaviour — including misaligned behaviour — from AI systems. (Finding a technical way to prevent misalignment in particular is known as the alignment problem.)

    In the past few years, we’ve seen more organisations start to take these risks more seriously. Many of the leading industry labs developing AI — including Google DeepMind and OpenAI — have teams dedicated to finding these solutions, alongside academic research groups including at MIT, Oxford, Cambridge, Carnegie Mellon University, and UC Berkeley.

    That said, the field is still very new. We think there are only around 300 people working on technical approaches to reducing existential risks from AI systems,1 which makes this a highly neglected field.

    Finding technical ways to reduce this risk could be quite challenging. Any practically helpful solution must retain the usefulness of the systems (remaining economically competitive with less safe systems), and continue to work as systems improve over time (that is, it needs to be ‘scalable’). As we argued in our problem profile, it seems like it might be difficult to find viable solutions, particularly for modern ML (machine learning) systems.

    (If you don’t know anything about ML, we’ve written a very very short introduction to ML, and we’ll go into more detail on how to learn about ML later in this article. Alternatively, if you do have ML experience, talk to our team — they can give you personalised career advice, make introductions to others working on these issues, and possibly even help you find jobs or funding opportunities.)

    Although it seems hard, there are lots of avenues for more research — and the field really is very young, so there are new promising research directions cropping up all the time. So we think it’s moderately tractable, though we’re highly uncertain.

    In fact, we’re uncertain about all of this and have written extensively about reasons we might be wrong about AI risk.

    But, overall, we think that — if it’s a good fit for you — going into AI safety technical research may just be the highest-impact thing you can do with your career.

    What does this path involve?

    AI safety technical research generally involves working as a scientist or engineer at major AI labs, in academia, or in independent nonprofits.

    These roles can be very hard to get. You’ll likely need to build up career capital before you end up in a high-impact role (more on this later, in the section on how to enter). That said, you may not need to spend a long time building this career capital — we’ve seen exceptionally talented people move into AI safety from other quantitative fields, sometimes in less than a year.

    Most AI safety technical research falls on a spectrum between empirical research (experimenting with current systems as a way of learning more about what will work), and theoretical research (conceptual and mathematical research looking at ways of ensuring that future AI systems are safe).

    No matter where on this spectrum you end up working, your career path might look a bit different depending on whether you want to aim at becoming a research lead — proposing projects, managing a team and setting direction — or a contributor — focusing on carrying out the research.

    Finally, there are two slightly different roles you might aim for:

    • In academia, research is often led by professors — the key distinguishing feature of being a professor is that you’ll also teach classes and mentor grad students (and you’ll definitely need a PhD).
    • Many (but not all) contributor roles in empirical research are also engineers, often software engineers. Here, we’re focusing on software roles that directly contribute to AI safety research (and which often require some ML background) — we’ve written about software engineering more generally in a separate career review.

    [Figure: 4 kinds of AI safety role — empirical lead, empirical contributor, theoretical lead, and theoretical contributor]

    We think that research lead roles are probably higher-impact in general. But overall, the impact you could have in any of these roles is likely primarily determined by your personal fit for the role — see the section on how to predict your fit in advance.

    Next, we’ll take a look at what working in each path might involve. Later, we’ll go into how you might enter each path.

    What does work in the empirical AI safety path involve?

    Empirical AI safety tends to involve teams working directly with ML models to identify any risks and develop ways in which they might be mitigated.

    That means the work is focused on current ML techniques and techniques that might be applied in the very near future.

    Practically, working on empirical AI safety involves lots of programming and ML engineering. You might, for example, come up with ways you could test the safety of existing systems, and then carry out these empirical tests.

    You can find roles in empirical AI safety in industry and academia, as well as some in AI safety-focused nonprofits.

    Particularly in academia, lots of relevant work isn’t explicitly labelled as being focused on existential risk — but it can still be highly valuable. For example, work in interpretability, adversarial examples, diagnostics and backdoor learning, among other areas, could be highly relevant to reducing the chance of an AI-related catastrophe.

    We’re also excited by experimental work to develop safety standards that AI companies might adhere to in the future — for example, the work being carried out by METR.

    To learn more about the sorts of research taking place at labs focused on empirical AI safety, take a look at:

    While programming is central to all empirical work, research lead roles will generally be less focused on programming; instead, they require stronger research taste and theoretical understanding. In comparison, research contributors need to be very good at programming and software engineering.

    What does work in the theoretical AI safety path involve?

    Theoretical AI safety is much more heavily conceptual and mathematical. Often it involves careful reasoning about the hypothetical behaviour of future systems.

    Generally, the aim is to come up with properties that it would be useful for safe ML algorithms to have. Once you have some useful properties, you can try to develop algorithms with these properties (bearing in mind that to be practically useful these algorithms will have to end up being adopted by industry). Alternatively, you could develop ways of checking whether systems have these properties. These checks could, for example, help hold future AI products to high safety standards.

    Many people working in theoretical AI safety will spend much of their time proving theorems or developing new mathematical frameworks. More conceptual approaches also exist, although they still tend to make heavy use of formal frameworks.

    Some examples of research in theoretical AI safety include:

    There are generally fewer roles available in theoretical AI safety work, especially as research contributors. Theoretical research contributor roles exist at nonprofits (primarily the Alignment Research Center), as well as at some labs (for example, Anthropic’s work on conditioning predictive models and the Causal Incentives Working Group at Google DeepMind). Most contributor roles in theoretical AI safety probably exist in academia (for example, PhD students in teams working on projects relevant to theoretical AI safety).

    Some exciting approaches to AI safety

    There are lots of technical approaches to AI safety currently being pursued. Here are just a few of them:

    It’s worth noting that there are many approaches to AI safety, and people in the field strongly disagree on what will or won’t work.

    This means that, once you’re working in the field, it can be worth being charitable and careful not to assume that others’ work is unhelpful just because it seemed so on a quick skim. You should probably be uncertain about your own research agenda as well.

    What’s more, as we mentioned earlier, lots of relevant work across all these areas isn’t explicitly labelled ‘safety.’

    So it’s important to think carefully about how or whether any particular research helps reduce the risks that AI systems might pose.

    What are the downsides of this career path?

    AI safety technical research is not the only way to make progress on reducing the risks that future AI systems might pose. Also, there are many other pressing problems in the world that aren’t the possibility of an AI-related catastrophe, and lots of careers that can help with them. If you’d be a better fit working on something else, you should probably do that.

    Beyond personal fit, there are a few other downsides to the career path:

    • It can be very competitive to enter (although once you’re in, the jobs are well paid, and there are lots of backup options).
    • You need quantitative skills — and probably programming skills.
    • The work is geographically concentrated in just a few places (mainly the California Bay Area and London, but there are also opportunities in places with top universities such as Oxford, New York, Pittsburgh, and Boston). That said, remote work is increasingly possible at many research labs.
    • It might not be very tractable to find good technical ways of reducing the risk. Although assessments of its difficulty vary, and while making progress is almost certainly possible, it may be quite hard to do so. This reduces the impact that you could have working in the field. That said, if you start out in technical work, you might be able to transition to governance work, since that often benefits from technical training and experience with the industry, which most people do not have.
    • Relatedly, there’s lots of disagreement in the field about what could work; you’ll probably be able to find at least some people who think what you’re working on is useless, whatever you end up doing.
    • Most importantly, there’s some risk of doing harm. While gaining career capital, and while working on the research itself, you’ll have to make difficult decisions and judgement calls about whether you’re working on something beneficial (see our anonymous advice about working in roles that advance AI capabilities). There’s huge disagreement on which technical approaches to AI safety might work — and sometimes this disagreement takes the form of thinking that a strategy will actively increase existential risks from AI.

    Finally, we’ve written more about the best arguments against AI being pressing in our problem profile on preventing an AI-related catastrophe. If those are right, maybe you could have more impact working on a different issue.

    How much do AI safety technical researchers earn?

    Many technical researchers work at companies or small startups that pay wages competitive with the Bay Area and Silicon Valley tech industry, and even smaller organisations and nonprofits will pay competitive wages to attract top talent. The median compensation for a software engineer in the San Francisco Bay area was $222,000 per year in 2020.3 (Read more about software engineering salaries).

    This $222,000 median may be an underestimate, as AI roles, especially in top AI labs that are rapidly scaling up their work in AI, often pay better than other tech jobs, and the same applies to safety researchers — even those in nonprofits.

    However, academia has lower salaries than industry in general, and we’d guess that AI safety research roles in academia pay less than commercial labs and nonprofits.

    Examples of people pursuing this path

    How to predict your fit in advance

    You’ll generally need a quantitative background (although not necessarily a background in computer science or machine learning) to enter this career path.

    There are two main approaches you can take to predict your fit, and it’s helpful to do both:

    • Try it out: try out the first few steps in the section below on learning the basics. If you haven’t yet, try learning some Python, as well as taking courses in linear algebra, calculus, and probability. And if you’ve done that, try learning a bit about deep learning and AI safety. Finally, for many people, the best way to try this out would be to actually get a job as a (non-safety) ML engineer (see more in the section on how to enter).
    • Talk to people about whether it would be a good fit for you: If you want to become a technical researcher, our team probably wants to talk to you. We can give you 1-1 advice, for free. If you know anyone working in the area (or something similar), discuss this career path with them and ask for their honest opinion. You may be able to meet people through our community. Our advisors can also help make connections.

    It can take some time to build expertise, and enjoyment can follow expertise — so be prepared to take some time to learn and practice before you decide to switch to something else entirely.

    If you’re not sure what roles you might aim for longer term, here are a few rough ways you could make a guess about what to aim for, and whether you might be a good fit for various roles on this path:

    • Testing your fit as an empirical research contributor: In a blog post about hiring for safety researchers, the Google DeepMind team said “as a rough test for the Research Engineer role, if you can reproduce a typical ML paper in a few hundred hours and your interests align with ours, we’re probably interested in interviewing you.”
      • Looking specifically at software engineering, one hiring manager at Anthropic said that if you could, with a few weeks’ work, write a complex new feature or fix a very serious bug in a major ML library, they’d want to interview you straight away. (Read more.)
    • Testing your fit for theoretical research: If you could have got into a top 10 maths or theoretical computer science PhD programme if you’d optimised your undergrad to do so, that’s a decent indication of your fit (and many researchers in fact have these PhDs). The Alignment Research Center (one of the few organisations that hires for theoretical research contributors, as of 2023) said that they were open to hiring people without any research background. They gave four tests of fit: creativity (e.g. you may have ideas for solving open problems in the field, like Eliciting Latent Knowledge); experience designing algorithms, proving theorems, or formalising concepts; broad knowledge of maths and computer science; and having thought a lot about the AI alignment problem in particular.
    • Testing your fit as a research lead (or for a PhD): The vast majority of research leads have a PhD. Also, many (but definitely not all) AI safety technical research roles will require a PhD — and if they don’t, having a PhD (or being the sort of person that could get one) would definitely help show that you’re a good fit for the work. To get into a top 20 machine learning PhD programme, you’d probably need to publish something like a first author workshop paper, as well as a third author conference paper at a major ML conference (like NeurIPS or ICML). (Read more about whether you should do a PhD).

    Read our article on personal fit to learn more about how to assess your fit for the career paths you want to pursue.

    How to enter

    You might be able to apply for roles right away — especially if you meet, or are near meeting, the tests we just looked at — but it also might take you some time, possibly several years, to skill up first.

    So, in this section, we’ll give you a guide to entering technical AI safety research. We’ll go through four key questions:

    1. How to learn the basics
    2. Whether you should do a PhD
    3. How to get a job in empirical research
    4. How to get a job in theoretical research

    Hopefully, by the end of the section, you’ll have everything you need to get going.

    Learning the basics

    To get anywhere in the world of AI safety technical research, you’ll likely need background knowledge of coding, maths, and deep learning.

    You might also want to practice enough to become a decent ML engineer (although this is generally more useful for empirical research), and learn a bit about safety techniques in particular (although this is generally more useful for empirical research leads and theoretical researchers).

    We’ll go through each of these in turn.

    Learning to program

    You’ll probably want to learn to code in Python, because it’s the most widely used language in ML engineering.

    The first step is probably just trying it out. As a complete beginner, you can write a Python program in less than 20 minutes that reminds you to take a break every two hours. Don’t be discouraged if your code doesn’t work the first time — that’s what normally happens when people code!

    Once you’ve done that, you have a few options:

    You can read more about learning to program — and how to get your first job in software engineering (if that’s the route you want to take) — in our career review on software engineering.

    Learning the maths

    The maths of deep learning relies heavily on calculus and linear algebra, and statistics can be useful too — although generally learning the maths is much less important than programming and basic, practical ML.

    We’d generally recommend studying a quantitative degree (like maths, computer science or engineering), most of which will cover all three areas pretty well.

    If you want to actually get good at maths, you have to be solving problems. So, generally, the most useful thing that textbooks and online courses provide isn’t their explanations — it’s a set of exercises to try to solve, in order, with some help if you get stuck.

    If you want to self-study (especially if you don’t have a quantitative degree) here are some possible resources:

    You might be able to find resources that cover all these areas, like Imperial College’s Mathematics for Machine Learning.

    Learning basic machine learning

    You’ll likely need to have a decent understanding of how AI systems are currently being developed. This will involve learning about machine learning and neural networks, before diving into any specific subfields of deep learning.

    Again, there’s the option of covering this at university. If you’re currently at college, it’s worth checking if you can take an ML course even if you’re not majoring in computer science.

    There’s one important caveat here: you’ll learn a huge amount on the job, and the amount you’ll need to know in advance for any role or course will vary hugely! Not even top academics know everything about their fields. It’s worth trying to find out how much you’ll need to know for the role you want to do before you invest hundreds of hours into learning about ML.

    With that caveat in mind, here are some suggestions of places you might start if you want to self-study the basics:

    PyTorch is a very common package used for implementing neural networks, and probably worth learning! When I was first learning about ML, my first neural network was a 3-layer convolutional neural network with L2 regularisation classifying characters from the MNIST database. This is a pretty common first challenge, and a good way to learn PyTorch.
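
To illustrate, here’s a rough sketch of what a first training script might look like (the hyperparameters are made up for illustration, and it assumes torch and torchvision are installed); the weight_decay argument is one common way to apply the L2 regularisation mentioned above:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Download MNIST and wrap it in a batching loader.
train_data = datasets.MNIST(
    root="data", train=True, download=True, transform=transforms.ToTensor()
)
loader = DataLoader(train_data, batch_size=64, shuffle=True)

# A deliberately simple model; a convolutional network would do better.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
# weight_decay adds L2 regularisation to the update.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```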

    Learning about AI safety

    If you’re going to work as an AI safety researcher, it usually helps to know about AI safety.

    This isn’t always true — some engineering roles won’t require much knowledge of AI safety. But even then, knowing the basics will probably help land you a position, and can also help with things like making difficult judgement calls and avoiding doing harm. And if you want to be able to identify and do useful work, you’ll need to learn about the field eventually.

    Because the field is still so new, there probably aren’t (yet) university courses you can take. So you’ll need to do some self-study. Here are some places you might start:

    For more suggestions — especially when it comes to reading about the nature of the risks we might face from AI systems — take a look at the top resources to learn more from our problem profile.

    Should you do a PhD?

    Some technical research roles will require a PhD — but many won’t, and PhDs aren’t the best option for everyone.

    The main benefit of doing a PhD is probably practising setting and carrying out your own research agenda. As a result, getting a PhD is practically the default if you want to be a research lead.

    That said, you can also become a research lead without a PhD — in particular, by transitioning from a role as a research contributor. At some large labs, the boundary between being a contributor and a lead is increasingly blurry.

    Many people find PhDs very difficult. They can be isolating and frustrating, and take a very long time (4–6 years). What’s more, both your quality of life and the amount you’ll learn will depend on your supervisor — and it can be really difficult to figure out in advance whether you’re making a good choice.

    So, if you’re considering doing a PhD, here are some things to consider:

    • Your long-term vision: If you’re aiming to be a research lead, that suggests you might want to do a PhD — the vast majority of research leads have PhDs. If you mainly want to be a contributor (e.g. an ML or software engineer), that suggests you might not. If you’re unsure, you should try doing something to test your fit for each, like trying a project or internship. You might try a pre-doctoral research assistant role — if the research you do is relevant to your future career, these can be good career capital, whether or not you do a PhD.
    • The topic of your research: It’s easy to let yourself become tied down to a PhD topic you’re not confident in. If the PhD you’re considering would let you work on something that seems useful for AI safety, it’s probably — all else equal — better for your career, and the research itself might have a positive impact as well.
    • Mentorship: What are the supervisors or managers like at the opportunities open to you? You might be able to find ML engineering or research roles in industry where you could learn much more than you would in a PhD — or vice versa. When picking a supervisor, try reaching out to the current or former students of a prospective supervisor and asking them some frank questions. (Also, see this article on how to choose a PhD supervisor.)
    • Your fit for the work environment: Doing a PhD means working on your own with very little supervision or feedback for long periods of time. Some people thrive in these conditions! But some really don’t and find PhDs extremely difficult.

    Read more in our more detailed (but less up-to-date) review of machine learning PhDs.

    It’s worth remembering that most jobs don’t need a PhD. And for some jobs, especially empirical research contributor roles, even if a PhD would be helpful, there are often better ways of getting the career capital you’d need (for example, working as a software or ML engineer). We’ve interviewed two ML engineers who have had hugely successful careers without doing a PhD.

    Whether you should do a PhD doesn’t depend (much) on timelines

    We think it’s plausible that we will develop AI that could be hugely transformative for society by the end of the 2030s.

    All else equal, that possibility could argue for trying to have an impact right away, rather than spending five (or more) years doing a PhD.

    Ultimately, though, how well you, in particular, are suited to a particular PhD is probably a much more important factor than when AI will be developed.

    That is to say, we think the increase in impact caused by choosing a path that’s a good fit for you is probably larger than any decrease in impact caused by delaying your work. This is in part because the spread in impact caused by the specific roles available to you, as well as your personal fit for them, is usually very large. Some roles (especially research lead roles) will just require having a PhD, and others (especially more engineering-heavy roles) won’t — and people’s fit for these paths varies quite a bit.

    We’re also highly uncertain about estimates about when we might develop transformative AI. This uncertainty reduces the expected cost of any delay.

    Most importantly, we think PhDs shouldn’t be thought of as a pure delay to your impact. You can do useful work in a PhD, and generally, the first couple of years in any career path will involve a lot of learning the basics and getting up to speed. So if you have a good mentor, work environment, and choice of topic, your PhD work could be as good as, or possibly better than, the work you’d do if you went to work elsewhere early in your career. And if you suddenly receive evidence that we have less time than you thought, it’s relatively easy to drop out.

    There are lots of other considerations here — for a rough overview, and some discussion, see this post by 80,000 Hours advisor Alex Lawsen, as well as the comments.

    Overall, we’d suggest that instead of worrying about a delay to your impact, think instead about which longer-term path you want to pursue, and how the specific opportunities in front of you will get you there.

    How to get into a PhD

    ML PhDs can be very competitive. To get in, you’ll probably need a few publications — as we said above, something like a first author workshop paper, as well as a third author conference paper at a major ML conference (like NeurIPS or ICML) — and references, probably from ML academics. (Although publications also look good whatever path you end up going down!)

    To end up at that stage, you’ll need a fair bit of luck, and you’ll also need to find ways to get some research experience.

    One option is to do a master’s degree in ML, although make sure it’s a research master’s — most ML master’s degrees primarily focus on preparation for industry.

    Even better, try getting an internship in an ML research group. Opportunities include RISS at Carnegie Mellon University, UROP at Imperial College London, the Aalto Science Institute international summer research programme, the Data Science Summer Institute, the Toyota Technological Institute intern programme and MILA. You can also try doing an internship specifically in AI safety, for example at CHAI. However, there are sometimes disadvantages to doing internships specifically in AI safety directly — in general, it may be harder to publish and mentorship might be more limited.

    Another way of getting research experience is by asking whether you can work with researchers. If you’re already at a top university, it can be easiest to reach out to people working at the university you’re studying at.

    PhD students or post-docs can be more responsive than professors, but eventually, you’ll want a few professors you’ve worked with to provide references, so you’ll need to get in touch. Professors tend to get lots of cold emails, so try to get their attention! You can try:

    • Getting an introduction, for example from a professor who’s taught you
    • Mentioning things you’ve done (your grades, relevant courses you’ve taken, your GitHub, any ML research papers you’ve attempted to replicate as practice)
    • Reading some of their papers and the main papers in the field, and mentioning them in the email
    • Applying for funding that’s available to students who want to work in AI safety, and letting people know you’ve got funding to work with them

    Ideally, you’ll find someone who supervises you well and has time to work with you (that doesn’t necessarily mean the most famous professor — although it helps a lot if they’re regularly publishing at top conferences). That way, they’ll get to know you, you can impress them, and they’ll provide an amazing reference when you apply for PhDs.

    It’s very possible that, to get the publications and references you’ll need to get into a PhD, you’ll need to spend a year or two working as a research assistant, although these positions can also be quite competitive.

    This guide by Adam Gleave also goes into more detail on how to get a PhD, including where to apply and tips on the application process itself. We discuss ML PhDs in more detail in our career review on ML PhDs (though it’s outdated compared to this career review).

    Getting a job in empirical AI safety research

    Ultimately, the best way of learning to do empirical research — especially in contributor and engineering-focused roles — is to work somewhere that does both high-quality engineering and cutting-edge research.

    The top three labs are probably Google DeepMind (who offer internships to students), OpenAI (who have a 6-month residency programme) and Anthropic. (Working at a leading AI lab carries with it some risk of doing harm, so it’s important to think carefully about your options. We’ve written a separate article going through the major relevant considerations.)

    To end up working in an empirical research role, you’ll probably need to build some career capital.

    Whether you want to be a research lead or a contributor, it’s going to help to become a really good software engineer. The best ways of doing this usually involve getting a job as a software engineer at a big tech company or at a promising startup. (We’ve written an entire article about becoming a software engineer.)

    Many roles will require you to be a good ML engineer, which means going further than just the basics we looked at above. The best way to become a good ML engineer is to get a job doing ML engineering — and the best places for that are probably leading AI labs.

    For roles as a research lead, you’ll need relatively more research experience. You’ll either want to become a research contributor first, or enter through academia (for example by doing a PhD).

    All that said, it’s important to remember that you don’t need to know everything to start applying, as you’ll inevitably learn loads on the job — so do try to find out what you’ll need to learn to land the specific roles you’re considering.

    How much experience do you need to get a job? It’s worth reiterating the tests we looked at above for contributor roles:

    • In a blog post about hiring for safety researchers, the DeepMind team said “as a rough test for the Research Engineer role, if you can reproduce a typical ML paper in a few hundred hours and your interests align with ours, we’re probably interested in interviewing you.”
    • Looking specifically at software engineering, one hiring manager at Anthropic said that if you could, with a few weeks’ work, write a new feature or fix a serious bug in a major ML library, they’d want to interview you straight away. (Read more.)

    In the process of getting this experience, you might end up working in roles that advance AI capabilities. There are a variety of views on whether this might be harmful — so we’d suggest reading our article about working at leading AI labs and our article containing anonymous advice from experts about working in roles that advance capabilities. It’s also worth talking to our team about any specific opportunities you have.

    If you’re doing another job, or a degree, or think you need to learn some more before trying to change careers, there are a few good ways of getting more experience doing ML engineering that go beyond the basics we’ve already covered:

    • Getting some experience in software / ML engineering. For example, if you’re doing a degree, you might try an internship as a software engineer during the summer. DeepMind offer internships for students with at least two years of study in a technical subject.
    • Replicating papers. One great way of getting experience doing ML engineering is to replicate some papers in whatever sub-field you might want to work in. Richard Ngo, an AI governance researcher at OpenAI, has written some advice on replicating papers. But bear in mind that replicating papers can be quite hard — take a look at Amid Fish’s blog on what he learned replicating a deep RL paper. Finally, Rogers-Smith has some suggestions on papers to replicate. If you do spend some time replicating papers, remember that when you get to applying for roles, it will be really useful to be able to prove you’ve done the work. So try uploading your work to GitHub, or writing a blog on your progress. And if you’re thinking about spending a long time on this (say, over 100 hours), try to get some feedback on the papers you might replicate before you start — you could even reach out to a lab you want to work for.
    • Taking or following a more in-depth course in empirical AI safety research. Redwood Research ran the MLAB bootcamp, and you can apply for access to their curriculum here. You could also take a look at this Deep Learning Curriculum by Jacob Hilton, a researcher at the Alignment Research Center — although it’s probably very challenging without mentorship.4 The Alignment Research Engineer Accelerator is a program that uses this curriculum. Some mentors on the SERI ML Alignment Theory Scholars Program focus on empirical research.
    • Learning about a sub-field of deep learning. In particular, we’d suggest natural language processing (in particular transformers — see this lecture as a starting point) and reinforcement learning (take a look at Pong from Pixels by Andrej Karpathy, and OpenAI’s Spinning up in Deep RL). Try to get to the point where you know about the most important recent advances.

    Finally, Athena is an AI alignment mentorship program for women with a technical background looking to get jobs in the alignment field.

    Getting a job in theoretical AI safety research

    There are fewer jobs available in theoretical AI safety research, so it’s harder to give concrete advice. Having a maths or theoretical computer science PhD isn’t always necessary, but is fairly common among researchers in industry, and is pretty much required to be an academic.

    If you do a PhD, ideally it’d be in an area at least somewhat related to theoretical AI safety research. For example, it could be in probability theory as applied to AI, or in theoretical CS (look for researchers who publish in COLT or FOCS).

    Alternatively, one path is to become an empirical research lead before moving into theoretical research.

    Compared to empirical research, you’ll need to know relatively less about engineering, and relatively more about AI safety as a field.

    Once you’ve done the basics, one possible next step you could try is reading papers from a particular researcher, or on a particular topic, and summarising what you’ve found.

    You could also try spending some time (maybe 10–100 hours) reading about a topic and then some more time (maybe another 10–100 hours) trying to come up with some new ideas on that topic. For example, you could try coming up with proposals to solve the problem of eliciting latent knowledge. Alternatively, if you wanted to focus on the more mathematical side, you could try having a go at the assignment at the end of this lecture by Michael Cohen, a grad student at the University of Oxford.

    If you want to enter academia, reading a ton of papers seems particularly important. Maybe try writing a survey paper on a certain topic in your spare time. It’s a great way to master a topic, spark new ideas, spot gaps, and come up with research ideas. When applying to grad school or jobs, your paper is a fantastic way to show you love research so much you do it for fun.

    There are some research programmes aimed at people new to the field, such as the SERI ML Alignment Theory Scholars Program, to which you could apply.

    Other ways to get more concrete experience include doing research internships, working as a research assistant, or doing a PhD, all of which we’ve written about above, in the section on whether and how you can get into a PhD programme.

    One note is that a lot of people we talk to try to learn independently. This can be a great idea for some people, but is fairly tough for many, because there’s substantially less structure and mentorship.

    AI labs in industry that have empirical technical safety teams, or are focused entirely on safety:

    • Anthropic is an AI safety company working on building interpretable and safe AI systems. They focus on empirical AI safety research. Anthropic cofounders Daniela and Dario Amodei gave an interview about the lab on the Future of Life Institute podcast. On our podcast, we spoke to Chris Olah, who leads Anthropic’s research into interpretability, and Nova DasSarma, who works on systems infrastructure at Anthropic.
    • METR works on assessing whether cutting-edge AI systems could pose catastrophic risks to civilization, including early-stage, experimental work to develop techniques, and evaluating systems produced by Anthropic and OpenAI.
    • The Center for AI Safety is a nonprofit that does technical research and promotion of safety in the wider machine learning community.
    • FAR AI is a research nonprofit that incubates and accelerates research agendas that are too resource-intensive for academia but not yet ready for commercialisation by industry, including research in adversarial robustness, interpretability and preference learning.
    • Google DeepMind is probably the largest and most well-known research group developing artificial general intelligence, and is famous for its work creating AlphaGo, AlphaZero, and AlphaFold. It is not principally focused on safety, but has two teams focused on AI safety: the Scalable Alignment Team, which focuses on aligning existing state-of-the-art systems, and the Alignment Team, which focuses on research bets for aligning future systems.
    • OpenAI, founded in 2015, is a lab that is trying to build artificial general intelligence that is safe and benefits all of humanity. OpenAI is well known for its language models like GPT-4. Like DeepMind, it is not principally focused on safety, but has a safety team and a governance team. Jan Leike (co-lead of the superalignment team) has some blog posts on how he thinks about AI alignment, and has spoken on our podcast about the sorts of people he’d like to hire for his team.
    • Ought is a machine learning lab building Elicit, an AI research assistant. Their aim is to align open-ended reasoning by learning human reasoning steps, and to direct AI progress towards helping with evaluating evidence and arguments.
    • Redwood Research is an AI safety research organisation, whose first big project attempted to make sure language models (like GPT-3) produce output following certain rules with very high probability, in order to address failure modes too rare to show up in standard training.

    Theoretical / conceptual AI safety labs:

    • The Alignment Research Center (ARC) is attempting to produce alignment strategies that could be adopted in industry today while also being able to scale to future systems. They focus on conceptual work, developing strategies that could work for alignment and which may be promising directions for empirical work, rather than doing empirical AI work themselves. Their first project was releasing a report on Eliciting Latent Knowledge, the problem of getting advanced AI systems to honestly tell you what they believe (or ‘believe’) about the world. On our podcast, we interviewed ARC founder Paul Christiano about his research (before he founded ARC).
    • The Center on Long-Term Risk works to address worst-case risks from advanced AI. They focus on conflict between AI systems.
    • The Machine Intelligence Research Institute was one of the first groups to become concerned about the risks from machine intelligence in the early 2000s, and its team has published a number of papers on safety issues and how to resolve them.
    • Some teams in commercial labs also do some more theoretical and conceptual work on alignment, such as Anthropic’s work on conditioning predictive models and the Causal Incentives Working Group at Google DeepMind.

    AI safety in academia: the number of academics explicitly and publicly focused on AI safety is small, but it's possible to do relevant work at a much wider set of institutions.

    Want one-on-one advice on pursuing this path?

    We think that the risks posed by the development of AI may be the most pressing problem the world currently faces. If you think you might be a good fit for any of the above career paths that contribute to solving this problem, we’d be especially excited to advise you on next steps, one-on-one.

    We can help you consider your options, make connections with others working on reducing risks from AI, and possibly even help you find jobs or funding opportunities — all for free.

    APPLY TO SPEAK WITH OUR TEAM

    Find a job in this path

    If you think you might be a good fit for this path and you’re ready to start looking at job opportunities that are currently accepting applications, see our curated list of opportunities for this path:

      View all opportunities



      Information security in high-impact areas
      https://80000hours.org/career-reviews/information-security/
      As the 2016 US presidential campaign was entering a fractious round of primaries, Hillary Clinton's campaign chair, John Podesta, opened a disturbing email.[1] The March 19 message warned that his Gmail password had been compromised and that he urgently needed to change it.

      The email was a lie. It wasn’t trying to help him protect his account — it was a phishing attack trying to gain illicit access.

      Podesta was suspicious, but the campaign's IT team mistakenly told him the email was "legitimate" and that he should change his password. The IT team provided a safe link for Podesta to use, but it seems he or one of his staffers clicked the link in the forged email instead. That link gave Russian intelligence hackers known as "Fancy Bear" access to the account, and they used it to leak private campaign emails in the final weeks of the 2016 race, embarrassing the Clinton team.
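      One classic tell in an email like this is that the link's visible text doesn't match where the link actually points. As a toy illustration of the idea (not a real anti-phishing tool; the markup and domains below are made up), a few lines of Python can surface that mismatch:

```python
# Toy illustration: flag links whose visible text looks like a URL but
# doesn't match the real destination, a classic phishing tell.
from html.parser import HTMLParser

class LinkMismatchChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current_href = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.current_href = dict(attrs).get("href")

    def handle_data(self, data):
        text = data.strip()
        # If the visible anchor text is itself a URL, it should match the href.
        if self.current_href and text.startswith("http") and text != self.current_href:
            print(f"Suspicious: text shows {text!r} but links to {self.current_href!r}")

    def handle_endtag(self, tag):
        if tag == "a":
            self.current_href = None

# Made-up markup in the style of a credential-phishing email
LinkMismatchChecker().feed(
    '<a href="http://evil.example/reset">https://myaccount.google.com/security</a>'
)
```

      Real mail filters do far more than this, but the underlying principle (check that what the user sees matches what the system will actually do) recurs throughout security work.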

      While there are plausibly many critical factors in any close election, it's possible that the controversy around the leaked emails played a non-trivial role in Clinton's subsequent loss to Donald Trump. This would mean the failure of the campaign's security team to prevent the hack — which might have come down to a mere typo[2] — was extraordinarily consequential.

      These events vividly illustrate how careers in infosecurity at key organisations have the potential for outsized impact. Ideally, security professionals can develop robust practices that reduce the likelihood that a single slip-up will result in a significant breach. But this key component for the continued and unimpaired functioning of important organisations is often neglected.

      And the need for such protection stretches far beyond hackers trying to cause chaos in an election season. Information security is vital for safeguarding all kinds of critical organisations, such as those storing extremely sensitive data about biological threats, nuclear weapons, or advanced artificial intelligence, which might be targeted by criminal hackers or aggressive nation states. Such attacks, if successful, could contribute to dangerous competitive dynamics (such as arms races) or directly lead to catastrophe.

      Some infosecurity roles involve managing and coordinating organisational policy, working on technical aspects of security, or a combination of both. We believe many such roles have thus far been underrated among those interested in effective altruism and reducing global catastrophic risks, and we’d be excited to see more altruistically motivated candidates move into this field.

      In a nutshell: Organisations with influence, financial power, and advanced technology are targeted by actors seeking to steal or abuse these assets. A career in information security is a promising avenue to support high-impact organisations by protecting against these attacks, which have the potential to disrupt an organisation’s mission or even increase existential risk.

      Recommended

      If you are well suited to this career, it may be the best way for you to have a social impact.

      Review status

      Based on a medium-depth investigation 

      Jeffrey Ladish contributed to this career review. We also thank Wim van der Schoot for his helpful comments.

      Why might information security be a high-impact career?

      Information security protects against events that hamper an organisation’s ability to fulfil its mission, such as attackers gaining access to confidential information. Information security specialists play a vital role in supporting the mission of organisations, similar to roles in operations.

      So if you want an impactful career, expertise in information security could enable you to make a significant positive difference in the world by helping important organisations and institutions be secure and successful.

      Compared to other roles in technology, an information security career can be a safe option, because there's less risk of having a negative impact. In general, preventing attacks makes the world a safer place, even when it's unclear whether the potential victim organisations are themselves having a net positive impact. When a company is hacked, the harm can disproportionately fall on others — such as people who trusted the company with their private information.

      On the other hand, information security roles can sometimes have limited impact even when supporting high-impact areas, if the organisation does not genuinely value security. Many organisations maintain security functions primarily to comply with regulations and industry standards. Those standards serve an important role, but when they're applied without care for real security outcomes, the result is often security theatre. It's not uncommon for security professionals to realise they're having minimal effect on the security posture of their organisation.

      Protecting organisations working on the world’s most pressing problems

      Organisations working on pressing problems need cybersecurity expertise to protect their computer systems, financial resources, and confidential information from attack. In some ways, these challenges are similar to those faced by any other organisation; however, organisations working on major global problems are sometimes special targets for attacks.

      These organisations — such as those trying to monitor dangerous pathogens or coordinate to reduce global tensions — often work with international institutions, local political authorities, and governments. They may be targeted by state-sponsored attacks from countries with relevant geopolitical interests, either to steal information or to gain access to other high-value targets.

      Some high-impact organisations have confidential, sensitive discussions as part of their work, where a leak of information through a security compromise would damage trust and their ability to fulfil their mission. This is especially relevant when operating in countries with information control and censorship regimes.

      In addition to threats from state-sponsored attackers, cybercrime groups also raise serious risks.

      They seek financial gain through extortion and fraud — for example, by changing payment information, ransoming data, or threatening to leak confidential correspondence. Any organisation is vulnerable to these attacks, but organisations that handle particularly sensitive information or high-value financial transactions, such as philanthropic grantmaking funds, are especially likely targets.

      In extreme cases, some organisations need help protecting information that could be harmful for the world if it were more widely known, such as harmful genetic sequences or powerful AI technology.

      The security of advanced AI systems

      While we think information security work can be valuable at many high-impact organisations, securing the most advanced AI systems may be among the highest-impact work you could do.

      We currently rank risks from artificial intelligence as the most pressing world problem because of the potential for future systems to cause catastrophes on a global scale. And to reduce the risk of an AI-related catastrophe, we’ve recommended some people work in the field of AI safety.

      But even if companies developing AI models use them responsibly and in accordance with high standards of safety, these efforts could be undermined if an outside actor steals the technology and then deploys it irresponsibly. And because advanced AI models are expected to be powerful and extremely economically valuable, there are actors with both an interest in stealing them and a history of launching successful cyberattacks to steal technology.

      Because information security is a highly sought-after skill, some AI-related organisations have found it difficult to hire for these crucial roles. There could also be special demand for people who understand the particular information security challenges related to AI; working on this topic could have a high impact and make you a desirable job candidate.

      What does working in high-impact information security roles actually look like?


      “Defensive” cybersecurity roles — where the main job is to defend against attacks by outsiders — are most commonly in demand, especially in smaller nonprofit organisations and altruistically minded startups that don’t have the resources to hire more than a single security specialist.

      In some of these roles, you’ll find yourself doing a mix of hands-on technical work and communicating security risk. For example:

      • You will apply an understanding of how hackers work and how to stop them.
      • You will set up security systems, review IT configurations, and provide advice to the team about how to do their work securely.
      • You will test for bugs and vulnerabilities and design systems and policies that are robust to a range of possible attacks.

      Security knowledge across a wide range of organisational IT topics, such as laptop security, cloud administration, application security, and IT accounts (often called "identity and access management"), will help you be most useful.
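      To give a flavour of the hands-on technical side, here is a minimal sketch, using only Python's standard library, of the kind of quick check this work can involve: seeing which common ports are open on a machine you administer. (Scanning systems you don't have permission to test can be illegal, so only run this against your own hosts.)

```python
# Minimal sketch: check which common ports are open on a host you administer.
import socket

COMMON_PORTS = {22: "SSH", 80: "HTTP", 443: "HTTPS", 3389: "RDP"}

def scan(host: str) -> None:
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)  # don't hang on filtered ports
            is_open = s.connect_ex((host, port)) == 0  # 0 means the TCP connection succeeded
            print(f"{service:>5} (port {port}): {'open' if is_open else 'closed'}")

scan("127.0.0.1")
```

      In practice you'd reach for mature tooling rather than writing your own scanner, but being able to knock together small checks like this is part of the day-to-day skill set.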

      You can have an outsized impact relative to another potential hire by working for a high-impact organisation where you understand their cause area. This is because information security can be challenging for organisations that are focussed on social impact, as industry standard cybersecurity advice is built to support profit motives and regulatory frameworks. Tailoring cybersecurity to how an organisation is trying to achieve its mission — and to prevent the harmful events the organisation cares most about — could greatly increase your effectiveness.

      If you’re interested in reducing existential risks, we think you should consider joining an organisation working in relevant areas such artificial intelligence, as discussed above, or biorisk.

      An important part of this is bringing your team along for the journey. To do security well, you will regularly be asking people to change the way they work (likely adding hurdles!), so being an effective communicator can be as important as understanding the technical details. Helping everyone understand why certain security measures matter and how you’re balancing the costs and benefits is required for the team to accept additional effort or seemingly unnecessary steps.

      Ethical hacking roles, in which you’re tasked with breaking the defences of your clients or employers in order to ultimately improve them, are also important for cybersecurity — but only very large organisations have positions for these sorts of “offensive” (or “red teaming”) roles. More often, such roles are at cybersecurity services companies, which are paid to do short-term penetration testing exercises for clients.

      If you take such a role, it would be hard to focus on the security of impactful organisations in order to maximise your impact, because you often have little choice about which clients you’re supporting. But you could potentially build career capital in these kinds of positions before moving on to more impactful jobs.

      What kind of salaries do cybersecurity professionals earn?

      Information security professionals earn high salaries. The US Bureau of Labor Statistics reported a median salary of over $100,000 a year for information security analysts in 2021. In some key roles, such as those at top AI labs or major companies, the right candidates can make $500,000 a year or more.

      While you’ll likely have a bigger impact supporting an organisation directly if the organisation is doing particularly important work, earning to give can still be a high-impact option, especially when you focus on donating to the most effective projects that could use the extra funds.

      How to assess your fit in advance?

      A great way to gauge your fit for information security is to try it out. There are many free online resources that can teach you the basics or give you hands-on experience with technical aspects of security.

      You can get an introduction through the Google Foundations of Cybersecurity course, which you can view for free if you select the ‘audit’ option on the bottom left of the enrollment pop-up. The full Google Cybersecurity Professional Certificate series is also worth watching to learn more on relevant technical topics.


      A knack for figuring out how computer systems work, and enjoyment of deploying a security mindset, are both predictors that you might be a good fit — but neither is required to get started in information security.

      How to enter infosecurity

      Entering with a degree

      The traditional way to enter this field is to study an IT discipline — such as computer science, software engineering, computer engineering, or a related field — at a university with a good range of cybersecurity courses. However, you shouldn't think of this as a prerequisite: there are many successful security practitioners without a formal degree. That said, a degree often makes it easier to get entry-level jobs, because many organisations still require one.

      Aside from cybersecurity-labelled courses, a good grasp of the fundamentals of computer systems is useful. This includes computer networks, operating systems, and the basics of how computer hardware works. We also suggest you consider at least one course in machine learning — while it's difficult to predict technology changes, it's plausible that AI technologies will dramatically change the security landscape.

      Consider finding a part-time job in an IT area while studying (see the next section), or doing an internship. This doesn’t need to be in an information security capacity; it can just be a role where you get to see first-hand how IT works. What you learn in university and what happens in practice are different, and understanding how IT is applied in the real world is vital.

      In the final year of your degree, look for entry-level cybersecurity positions — or other IT positions, if you need to.

      We think that jobs in cybersecurity defensive roles are ideal for gaining the broad range of skills that are most likely to be relevant to high-impact organisations. These have role titles such as Security Analyst, Security Operations, IT Security Officer, Security Engineer, or even Application Security Engineer. “Offensive” roles such as penetration testing can also provide valuable experience, but you may not get as broad an overview across all of the fronts relevant to enterprise security, or experience the challenges with implementation first-hand.

      Entering with (just) IT experience

      It is also possible to enter this field without a degree.

      If you have a good working knowledge of IT or coding skills, a common path is to start in a junior role in internal IT support (or a similar service desk or help desk position) or in a junior software role. Many people working in cybersecurity today transitioned from other roles in IT. This can work well if you are especially interested in computers and are motivated to tinker with computer systems in your own time.

      A lot of what that you’ll learn in an organisational IT role will be useful for cybersecurity roles. Solid IT management requires day-to-day security, and understanding how the systems work and the challenges caused by security features is important if you’re going to be effective in cybersecurity.

      Do you need certifications?

      There are many cybersecurity certifications you can get. They aren’t mandatory, but having one may help you get into an entry-level job, especially if you don’t have a degree. The usefulness varies depending on how reputable the provider is, and the training and exams may be expensive.

      Some well-regarded certifications are CompTIA Security+, GIAC Security Essentials, OSCP Penetration Testing, and Certified Ethical Hacker. Vendor and technology certifications (e.g. Microsoft or AWS) generally aren’t valuable unless they’re specific to a job you’re pursuing.

      What sorts of places should you work?

      For your first few years, we recommend prioritising finding a role that will grow your knowledge and capability quickly. Some high-impact organisations are quite small, so they may not be well-placed to train you up early in your career, because they’ll likely have less capacity for mentorship in a range of technical areas.

      Find a job where you can learn good IT or cybersecurity management from others.

      The best places to work will already have relatively good security management practices and organisational maturity, so you can see what things are supposed to look like. You may also get a sense of the barriers that prevent organisations from having ideal security practices. Being able to ask questions of seasoned professionals and figure out what is actually feasible helps you learn more quickly than running up against all of the roadblocks yourself.

      Tech companies and financial organisations have comparatively strong reputations for cybersecurity. Security specialist organisations — such as consultancies, managed security providers, and security software companies — can also be great places to learn. Government organisations specialising in cybersecurity can provide valuable experience that is hard to get outside of specific roles.

      Once you’re skilled up, the main thing to look for is a place that is doing important work. This might be a government agency, a nonprofit, or even a for-profit. We list some high-impact organisations here. Information security is a support function needed by all organisations to different degrees. How positive your impact is will depend a lot on whether you’re protecting an organisation that does important and pressing work. Below we discuss specific areas where we think additional people could do the most impactful work.

      Safeguarding information hazards

      Protecting information that could be damaging for the world if it was stolen may be especially impactful and could help decrease existential risk.

      Some information would increase the risk of human extinction if it leaked. Organisations focussed on reducing this risk may need to create or use such information as part of their work, so working on their security means you can have a directly positive impact. Examples include:

      • AI research labs, as discussed above, which may discover technologies that could harm humanity in the wrong hands
      • Biorisk researchers who work on sensitive materials, such as harmful genetic sequences that could be used to engineer pandemics
      • Research and grantmaking foundations that have access to sensitive information on the strategies and results of existential risk reduction organisations

      Contributing to safe AI

      Security skills are relevant to preventing an AI-related catastrophe. Security professionals bring a security mindset and technical skills that can help mitigate the risk of advanced AI leading to disaster.

      If advanced AI ends up radically transforming the global economy, as some believe it might, the security landscape and nature of threats discussed in this article could change in unexpected ways. Understanding the cutting-edge uses of AI by both malicious hackers and infosecurity professionals could allow you to have a large impact by helping ensure the world is protected from major catastrophic threats.

      Working in governments

      Governments also hold information that could negatively impact geopolitical stability if stolen, such as weapons technology and diplomatic secrets. But it may be more difficult to have a positive impact through this path working in government, as established bureaucracies are often resistant to change, and this resistance may prevent you from having impact.

      That said, the scale of government also means that if you are able to make a positive change in impactful areas, it has the potential for far-reaching effects.

      People working in this area should regularly reassess whether their work is, or is on a good path to, making a meaningful difference. There may be better opportunities inside or outside government.

      You may have a positive impact by working in cybersecurity for your country’s national security agencies, either as a direct employee or as a government contractor. In addition, these roles may give you the experience and professional contacts needed to work effectively in national cybersecurity policy.

      If you have the opportunity, working to set and enforce sensible cybersecurity policy could be highly impactful.

      Want one-on-one advice on pursuing this path?

      If you think this path might be a great option for you, but you need help deciding or thinking about what to do next, our team might be able to help.

      We can help you compare options, make connections, and possibly even help you find jobs or funding opportunities.

      APPLY TO SPEAK WITH OUR TEAM

      Learn more


      Software engineering
      https://80000hours.org/career-reviews/software-engineering/
      On December 31, 2021, the most valuable company on Earth was Apple, worth around $3 trillion. After that came Microsoft, at $2.5 trillion, then Google (officially Alphabet) at $1.9 trillion, then Amazon at $1.5 trillion.

      On December 31, 2020, the four most valuable companies were: Apple, Microsoft, Amazon, and Google.

      On December 31, 2019, the four most valuable companies were: Apple, Microsoft, Google, and Amazon.

      And on December 31, 2018, the four most valuable companies were: Microsoft, Apple, Amazon, and Google.

      If you’re anything like me, you’re starting to spot a pattern here.

      Revenue in software has grown from $400 billion in 2016 to $500 billion in 2021, and is projected to reach $800 billion by 2026.

      Software has an increasing and overwhelming importance in our economy — and everything else in our society. High demand and low supply make software engineering well paid, and often enjoyable.

      But we also think that, if you’re trying to make the world a better place, software engineering could be a particularly good way to help.

      In a nutshell:

      Software engineering could be a great option for having a direct impact on the world’s most pressing problems. If you have good analytical skills (even if you have a humanities background), you might consider testing it. Basic programming skills can be easy to learn and extremely useful even if you decide not to go into software engineering, which means trying this out could be particularly low cost.

      Pros

      • Gain a flexible skill set.
      • Make a significant direct impact, either by working on AI safety, or in otherwise particularly effective organisations.
      • Have excellent working conditions, high pay, and good job security.

      Cons

      • Late-stage earnings are often lower than in many other professional jobs (especially high-paying roles such as quantitative trading), unless you help found a successful startup.
      • Likely only a small proportion of exceptional programmers will have a highly significant impact.
      • Initially, it could be relatively challenging to gain skills quickly compared to some other jobs, as you need a particular concrete skill set.

      Key facts on fit

      Willingness to teach yourself, ability to break problems down into logical parts and generate and test hypotheses, willingness to try out many different solutions, high attention to detail, quantitative degree useful but not required.

      Sometimes recommended — personal fit dependent

      This career will be some people's highest-impact option if their personal fit is especially good.

      Review status

      Based on an in-depth investigation 

      This review owes a lot to helpful discussions with (and comments from) Andy Jones, Ozzie Gooen, Jeff Kaufman, Sasha Cooper, Ben Kuhn, Nova DasSarma, Kamal Ndousse, Ethan Alley, Ben West, Ben Mann, Tom Conerly, Zac Hatfield-Dodds, and George McGowan. Special thanks go to Roman Duda for our previous review of software engineering, on which this was based.

      Why might software engineering be high impact?

      Software engineers are in a position to meaningfully contribute directly to solving a wide variety of the world’s most pressing problems.

      In particular, there is a shortage of software engineers at the cutting edge of research into AI safety.

      We’ve also found that software engineers can contribute greatly to work aiming at preventing pandemics and other global catastrophic biological risks.

      Aside from direct work on these crucial problems, while working for startups or larger tech companies you can gain excellent career capital (especially technical skills), and, if you choose, earn and donate substantial amounts to the world’s best charities.

      How to do good as a software engineer

      Even for skilled engineers who could command high salaries, we think that working directly on a problem will probably be more impactful than earning to give.


      Most organisations, even ones that don't focus on developing large software products, need software engineers to manage computer systems, apps, and websites.

      Many people we’ve spoken to at these and other organisations have said that they have real difficulty hiring extremely talented software engineers. Many nonprofits want to hire people who believe in their missions (just as they do with operations staff), which indicates that talented, altruistic-minded software engineers are sorely needed and could do huge amounts of good.

      Smaller organisations that don’t focus on engineering often only have one or two software engineers. And because things at small organisations can change rapidly, they need unusually adaptable and flexible people who are able to maintain software with very little help from the wider team.1

      It seems likely that, as the community of people working on helping future generations grows, there will be more opportunities for practical software development efforts to help. This means that even if you don’t currently have any experience with programming, it could be valuable to begin developing expertise in software engineering now.

      Software engineers can help with AI safety

      We’ve argued before that artificial intelligence could have a deeply transformative impact on our society. There are huge opportunities associated with this ongoing transformation, but also extreme risks — potentially even threatening humanity’s survival.

      With the rise of machine learning, and the huge success of deep learning models like GPT-3, many experts now think it’s reasonably likely that our current machine learning methods could be used to create transformative artificial intelligence.

      This has led to an explosion in empirical AI safety research, where teams work directly with deep neural networks to identify risks and develop frameworks for mitigating them. Examples of organisations working in empirical AI safety research include Redwood Research, DeepMind, OpenAI, and Anthropic.

      These organisations are doing research directly with extremely large neural networks, which means each experiment can cost millions of dollars to run. This means that even small improvements to the efficiency of each experiment can be hugely beneficial.
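      As a small illustration of where efficiency work often starts: before optimising anything, engineers typically profile first to see where time actually goes. The toy "training step" below is just a stand-in (real experiments would also be profiled with ML-specific tooling), but the workflow is the same:

```python
# Sketch: profile a toy "training step" to find where the time goes.
import cProfile
import pstats

def toy_training_step():
    data = [i * 0.5 for i in range(1_000_000)]  # stand-in for data loading
    return sum(x * x for x in data)             # stand-in for compute

cProfile.run("toy_training_step()", "step.prof")
pstats.Stats("step.prof").sort_stats("cumulative").print_stats(5)
```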

      There’s also often overlap between experimental results that will help further AI safety and results that could accelerate the development of unsafe AI, so it’s also important that the results of these experiments are kept secure.

      As a result, it’s likely to remain incredibly valuable to have talented engineers working on ensuring that these experiments are as efficient and safe as possible. Experts we spoke to expect this to remain a key bottleneck in AI safety research for many years.

      However, there is a serious risk associated with this route: it seems possible for engineers to accidentally increase risks from AI by generally accelerating the technical development of the field. We’re not sure of the more precise contours of this risk (e.g. exactly what kinds of projects you should avoid), but think it’s important to watch out for. That said, there are many more junior non-safety roles out there than roles focused specifically on safety, and experts we’ve spoken to expect that most non-safety projects aren’t likely to be causing harm. If you’re uncertain about taking a job for this reason, our team may be able to help you decide.

      Software engineer salaries mean you can earn to give

      In general, if you can find a job you can do well, you’ll have a bigger impact working on a problem directly than you would by earning money and donating. However, earning to give can still be a high-impact option, especially if you focus on donating to the most effective projects that could use the extra funds.

      If you’re skilled enough to work at top companies, software engineering is a well-paid career. In the US, entry-level software engineer salaries start at around $110,000. Engineers at Microsoft start at $150,000, and engineers at Google start at around $180,000 (including stock and bonuses). If you’re successful, after a few years on the job you could be earning over $500,000 a year.

      Pay is generally much lower in other countries. Median salaries in Australia are around 20% lower than salaries in the US (approximately US$80,000), and around 40% lower in the UK, Germany, Canada, and Japan (approximately US$60,000). While much of your earnings as a software engineer come from bonuses and equity, rather than just your salary, these are also lower outside the US.

      If you do want to make a positive difference through donating part of your income as a software engineer, you may be able to increase your impact by using donation-matching programmes, which are common at large tech companies (although these are often capped at around US$10,000 per year).

      You can read more about salaries at large tech companies below.

      It’s important to note that many nonprofit organisations, including those focusing on AI safety, will offer salaries and benefits that compete with those at for-profit firms.

      If you work at or found a startup, your earnings will be highly variable. However, the expected value of your earnings — especially as a cofounder — could be extremely high. For this reason, if you’re a particularly good fit, founding a tech startup and donating your earnings could be hugely impactful, as you could earn and donate extraordinary amounts.

      What does a software engineering career involve?

      Ultimately, the best ways to have an impact with software engineering are probably things like working at an AI lab or a particularly effective nonprofit.

      To get there, there are two broad paths that you could follow to build software engineering skills (and, given the high salaries in software engineering, you can earn to give along the way):

      1. Working for a large, stable company (e.g. Microsoft, Google, Amazon)
      2. Working for a small, fast-growing startup

      In general, you will gain broadly transferable skills through either of these options. To gain experience as quickly and effectively as possible, look for roles that offer good management and mentorship opportunities. You should also make sure you gain a really deep understanding of the basics of software development.

      Working at a top-tier tech company also holds comparable prestige to working in finance or consulting, and gives you the opportunity to make connections with wealthy and influential people, many of whom are impact-minded and interested in doing good.

      You’ll need different skills, and work at different jobs, depending on whether you want to be a front-end, back-end (including machine learning), or full-stack developer.

      Working for a large software company

      The best way to develop software skills is to practise writing code and building software through years of experience. Direct one-on-one mentorship is extremely valuable when developing skills, and this is often provided through software engineering jobs at large tech companies.

      Top firms (e.g. Microsoft, Google, Amazon) are particularly good at providing training to develop particular skill sets, such as management and information security. After talking with people who have experience in training at both tech giants and elsewhere, we think that this internal training is likely the best way to develop knowledge in software engineering (other than on-the-job practice), and will be better than training provided outside of these big tech companies.

      However, it’s important to ensure that your role provides you with a variety of experiences: five years of software development experience is not the same as having the same year of experience five times over.

      For example, it can be harder to gain full-stack or transferable front-end development experience at a large company. Many large mature products have a large front-end team making many small tweaks and analysing their performance in experiments. This provides good training in experiment design and analysis, but often isn’t very transferable to the sorts of front-end work you’d do at smaller companies or nonprofits, where you’ll often be working in a much smaller team with a focus on developing the experience as a whole rather than running experiments on small changes.

      It generally takes around two years for new starters at big tech companies to have the experience they need to independently work on software, and another two years to reach a position where they are able to give advice and support to others in the company and manage projects.

      Key career stages at large tech companies

      First you’ll need some basic experience. You can get this from a relevant degree; working on a job at a smaller, less prestigious company; or from a bootcamp (see how to enter below for more).

      New graduates, and other people with a couple of years of relevant experience, will start out as junior engineers. As a junior engineer, you’d complete small, clearly specified tasks and gain a preliminary understanding of the software development lifecycle. You’ll generally be given lots of guidance and support from more experienced engineers. You usually stay in this role for around three years, gradually expanding your scope. In the US, you’d be paid an entry-level compensation of $100,000 to $200,000 (as of early 2022).

      Once you’ve successfully demonstrated that you can work on projects without needing much support, you’ll be given more responsibility. For a couple of years, you’ll work on more complex projects (often in one or two languages in which you’ve specialised), and with less support from others.

      After five to eight years,[2] you'll generally progress to a senior engineer position. As a senior engineer, you write complex applications and have a deep understanding of the entire software lifecycle. You may lead small teams or projects, and you'll be expected to provide mentorship and guidance to junior engineers. You can stay in this role for much of your career, though it becomes harder to compete with younger talent as you get older. Compensation in 2022 at this level is around $300,000 to $400,000 in the US.

      At this point you may have the skills to leave and become a technical founder or CTO of a startup. This is a highly variable option (since most startups fail), but given the chance of wild success, it could be one of the highest expected value ways to earn to give.

      Progressing beyond senior engineer, you're typically responsible for defining your job as well as doing it. You may move into management positions, or become a staff engineer. Staff engineers, while still building software, also set technical direction, provide mentorship, bring an engineering perspective to organisational decisions, and do exploratory work. At this level, at top firms in the US, you can earn upwards of $500,000, and sometimes more than $1,000,000, a year.

      Software engineering is unusual in that you can have a senior position without having to do management, and many see this as a unique benefit of the career. (To learn more about post-senior roles, we recommend The Staff Engineer’s Path by Tanya Reilly and the StaffEng website.)

      Working for a startup as a software engineer

      Working for a startup can give you a much broader range of experience, including problem-solving, project management, and other ‘soft’ skills — because unlike in large companies, there is no one else at the organisation to do these things for you. You can gain a strong understanding of the entire development process as well as general software engineering principles.

      Startups often have a culture that encourages creative thinking and resourcefulness. This can be particularly good experience for working in small software-focused nonprofits later in your career.

      However, the experience of working in small organisations varies wildly. You’ll be less likely to have many very senior experienced engineers around to give you the feedback you need to improve. At very small startups, the technical cofounder may be the only experienced engineer, and they are unlikely to provide the level of mentorship provided at big tech companies (in part because there’s so much else they will need to be doing). That said, we’ve spoken to some people who have had great mentorship at small startups.

      You also gain responsibility much faster at a fast-growing startup, as there is a desperate need for employees to take on new projects and gain the skills required. This can make startups a very fertile learning ground, if you can teach yourself what you need to know.

      Pay at startups is very variable, as you will likely be paid (in large part) in equity, and so your earnings will be heavily tied to the success of the organisation. However, the expected value of your earnings may be comparable to, and in some cases higher than, earnings at large companies.

      Many startups exit by selling to large tech companies. If this happens, you may end up working for a large company anyway.

      Take a look at our list of places to find startup roles.

      Moving to a direct impact software engineering role

      Working in AI safety

      If you are looking to work in an engineering role in an AI safety or other research organisation, you will probably want to focus on back-end software development (although there are also front-end roles, particularly those focusing on gathering data from humans on which models can be trained and tested). There are recurring opportunities for software engineers with a range of technical skills (to see examples, take a look at our job board).

      If you have the opportunity to choose areas in which you could gain expertise, the experienced engineers we spoke to suggested focusing on:

      • Distributed systems
      • Numerical systems
      • Security

      In general, it helps to have expertise in any specific, hard-to-find skill sets.
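      To illustrate why numerical systems make the list, here's a classic floating-point pitfall of the kind that bites ML code: naively accumulating many small float32 values drifts badly, which is one reason accumulation is often done in higher precision.

```python
# Classic numerical pitfall: sequentially accumulating a million float32
# values of 0.1 drifts visibly from the true sum of 100,000.
import numpy as np

xs = np.full(1_000_000, 0.1, dtype=np.float32)

acc = np.float32(0.0)
for x in xs:  # naive sequential accumulation in float32
    acc += x
print(acc)  # noticeably off from 100000 due to accumulated rounding error

print(xs.sum(dtype=np.float64))  # accumulating in float64 stays accurate
```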

      This work uses a range of programming languages, including Python, Rust, C++ and JavaScript. Functional languages such as Haskell are also common.

      We’ve previously written about how to move into a machine learning career for AI safety. We now think it is easier than we previously thought to move into an AI-safety-related software engineering role without explicit machine learning experience.

      The Effective Altruism Long-Term Future Fund and the Survival and Flourishing Fund may provide funding for promising individuals to learn skills relevant to helping future generations, including new technologies such as machine learning. If you already have software engineering experience, but would benefit from explicit machine learning or AI safety experience, this could be a good option for you.

      If you think you could, with a few weeks’ work, write a new feature or fix a bug in a major machine learning library, then you could probably apply directly for engineering roles at top AI safety labs (such as Redwood Research, DeepMind, OpenAI, and Anthropic), without needing to spend more time building experience in software engineering. These top labs offer pay that is comparable to pay at large tech firms. (Read more about whether you should take a job at a top AI lab.)

      If you are considering joining an AI safety lab in the near future, our team may be able to help.

      Working on reducing global catastrophic biological risks

      Reducing global catastrophic biological risks — for example, research into screening for novel pathogens to prevent future pandemics — is likely to be one of the most important ways to help solve the world’s most pressing problems.

      Through organisations like Telis Bioscience and SecureDNA (and other projects that might be founded in the future), there are significant opportunities for software engineers to contribute to reducing these risks.

      Anyone with a good understanding of how to build software can be useful in these small organisations, even if they don’t have much experience. However, if you want to work in this space, you’ll need to be comfortable getting your hands dirty and doing whatever needs to be done, even when the work isn’t the most intellectually challenging. For this reason, it could be particularly useful to have experience working in a software-based startup.

      Much of the work in biosecurity is related to handling and processing large amounts of data, so knowledge of how to work with distributed systems is in demand. Expertise in adjacent fields such as data science could also be helpful.

      There is also a big focus on security, particularly at organisations like SecureDNA.

      Most code in biosecurity is written in Python.
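      As a toy illustration of the data-processing flavour of this work, the sketch below flags a DNA order that shares a long exact substring (a k-mer) with a hazard list. Real screening systems, such as SecureDNA's, are vastly more sophisticated; this only shows the shape of the problem, and every sequence here is made up.

```python
# Toy sketch of DNA synthesis screening: flag an order if it shares a
# long exact substring (k-mer) with any sequence on a hazard list.
K = 20  # match window length (toy value; real systems differ)

def kmers(seq: str, k: int = K) -> set:
    """All length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Hypothetical hazard list (toy sequences, not real pathogen data)
hazard_index = set()
for hazard in ["ACGT" * 10, "TTGACA" * 7]:
    hazard_index |= kmers(hazard)

def is_flagged(order: str) -> bool:
    """True if the order shares any k-mer with the hazard index."""
    return not kmers(order).isdisjoint(hazard_index)

print(is_flagged("ACGT" * 10))    # True: matches a hazard sequence
print(is_flagged("GATTACA" * 6))  # False: no 20-character overlap
```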

      If you’re interested in working on biosecurity and pandemic preparedness as a software engineer, you can find open positions on our job board.

      Other important direct work

      Nonprofit organisations and altruistic-minded startups often have very few team members. And no matter what an organisation does, they almost always have some need for engineers (for example, 80,000 Hours is not a software organisation, but we employ two developers). So if you find an organisation you think is doing something really useful, working as a software engineer for them might be an excellent way to support that work.

      Engineering for a small organisation likely means doing work across the development process, since there are few other engineers.

      Often these organisations are focused on front-end development, with jobs ranging from application development and web development to data science and project management roles. There are often also opportunities for full-stack developers with a broad range of experience.

      Founding an organisation yourself is more challenging, but can be even more impactful. And if you’ve worked in a small organisation or a startup before, you might have the broad skills and entrepreneurialism that’s required to succeed. See our profile on founding new high-impact projects for more.

      Reasons not to go into software engineering

      We think that most people with good general intelligence will be able to do well at software engineering. And because it’s very easy to test out (see the section on how to predict your fit in advance), you’ll be able to tell early on whether you’re likely to be a good fit.

      However, there are lots of other paths that seem like particularly promising ways to help solve the world’s most pressing problems, and it’s worth looking into them. If you find programming difficult, or unenjoyable, your personal fit for other career paths may be higher. And even if you enjoy it and you’re good at it, we think that will be true for lots of people, so that’s not a good reason to think you won’t be even better at something else!

      As a result, it’s important to test your fit for a variety of options. Try taking a look at our other career reviews to find out more.

      How much do software engineers earn?

      It’s difficult to make claims about software engineer earnings in general.

      For a start, almost all of the official (especially government) data on this is on salaries rather than total compensation. By the time you’re a senior engineer, less than half of what you earn will be from your salary — the rest will be from bonuses, stock, and other benefits.

      Most government data also reports median salaries, but as we saw when looking at progression in big tech firms, very senior software engineers can earn seven-figure compensations. So we should expect the distribution of total compensation to be positively skewed, or possibly even bimodal.

      As a result, you should think of the figures below as representing salaries for early- to mid-career software developers.

      Even given all these caveats, the figures we present here are instructive for understanding the relative salary levels (e.g. between locations), even if the absolute values given aren’t perfect.

      More data is available at Levels.fyi, which collects data from people self-reporting their total compensation, and also has data on the distribution of what people earn, rather than just averages.

      Software engineering salaries in the US

      Here are US salary figures for software developers, from the US Bureau of Labor Statistics:

      US salaries for software engineers in 2020 (excluding bonuses)[3]

      • Computer programmers: mean $95,640; median $89,190
      • Software developers and software quality assurance analysts and testers: mean $114,270; median $110,140
      • Web developers and digital interface designers: mean $85,490; median $77,200

      Here are the median salaries at different levels of progression, both in the US as a whole and in Mountain View and Palo Alto (i.e. Silicon Valley).[4] In general, salaries rise quite rapidly in the early stages of the career, but then level off and grow by only a few percent per year after around a decade. However, this is probably offset by increases in other forms of compensation.

      Median US salaries (plus bonus) for software engineers in 2020 at different levels of progression

      • Software engineer I (entry level), 0–2 years' experience: $75,000 (US); $94,000 (Mountain View and Palo Alto, CA)
      • Software engineer II, 2–4 years: $95,000 (US); $120,000 (Mountain View and Palo Alto)
      • Software engineer III, 4–6 years: $120,000 (US); $150,000 (Mountain View and Palo Alto)
      • Software engineer IV, 6–8 years: $147,000 (US); $185,000 (Mountain View and Palo Alto)
      • Software engineer V, 8–10 years: $168,000 (US); $211,000 (Mountain View and Palo Alto)
      • Software engineering manager, 10+ years: $155,000 (US); $195,000 (Mountain View and Palo Alto)
      • Software engineer director, 10+ years: $226,000 (US); $284,000 (Mountain View and Palo Alto)
      • Software engineer director, 15+ years: $303,000 (US); $380,000 (Mountain View and Palo Alto)

      For figures on total compensation, especially at top companies, we can again look at Levels.fyi. These figures are far higher. Entry-level compensation is around $150,000, rising to $300,000 to $400,000 for senior engineers, and above $500,000 for late-career engineers. The top compensation levels reported are over $1,000,000.

      Salaries also vary by location within the US; they are generally significantly higher in California (although web developers are best paid in Seattle).

      Mean salary by US region in 2020[5]

      • Computer programmers: $95,640 nationally; top-paying state $107,300 (CA); top-paying metro area $125,420 (San Francisco)
      • Software developers and software quality assurance analysts and testers: $114,270 nationally; $137,620 (CA); $157,480 (Silicon Valley)
      • Web developers and digital interface designers: $85,490 nationally; $94,960 (WA); $138,070 (Seattle)

      These data are supported by Levels.fyi data on various locations in the US (e.g. Atlanta, New York City, Seattle, and the Bay Area).

      Notably, the differences between locations are much larger for salaries at the 90th percentile than for median salaries.

      Compensation by US region in 2020[6]

      • Atlanta: median $131,000; 90th percentile $216,000
      • New York City: median $182,000; 90th percentile $365,000
      • Seattle: median $218,000; 90th percentile $430,000
      • San Francisco Bay Area: median $222,000; 90th percentile $426,000

      It’s worth noting, however, that the cost of living in Silicon Valley is higher than in other parts of the US (Silicon Valley’s cost of living is 1.5 times the US national average7), reducing disposable income. (In general, data on average cost of living is particularly representative of the costs you’d expect to pay if you have a family or want to own a house.)

      If you want to estimate your own disposable income under different scenarios, you can try online take-home pay and cost-of-living calculators.
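      As a very rough sketch of the kind of comparison such calculators make, using the median compensation figures above, with a made-up flat tax rate and illustrative living costs:

```python
# Very rough sketch: compare disposable income across locations.
# The 30% flat tax rate and the living-cost figures are illustrative
# assumptions, not real data; the compensation medians are from above.
TAX_RATE = 0.30

cities = {
    # city: (median total compensation, assumed annual cost of living)
    "Atlanta": (131_000, 50_000),
    "San Francisco Bay Area": (222_000, 75_000),  # ~1.5x a typical US city
}

for city, (comp, cost_of_living) in cities.items():
    disposable = comp * (1 - TAX_RATE) - cost_of_living
    print(f"{city}: ${disposable:,.0f} left after tax and living costs")
```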

      Software engineering pay in other countries

      Software engineers are paid significantly less outside the US. The UK Office for National Statistics found that the mean salary for "programmers and software development professionals" in 2020 was £46,000 (US$59,000 in 2020).[8] Even when looking at full compensation, we see similar trends across the world.

      Software engineer compensation outside the US[6]

      • Australia: median A$166,000 (US$123,000); 90th percentile A$270,000 (US$200,000)
      • Canada: median C$143,000 (US$115,000); 90th percentile C$270,000 (US$218,000)
      • Germany: median €86,000 (US$98,000); 90th percentile €145,000 (US$165,000)
      • India: median ₹3,123,000 (US$42,000); 90th percentile ₹7,435,000 (US$100,000)
      • Ireland: median €101,000 (US$115,000); 90th percentile €188,000 (US$214,000)
      • Israel: median ₪533,000 (US$165,000); 90th percentile ₪866,000 (US$268,000)
      • Netherlands: median €108,000 (US$123,000); 90th percentile €174,000 (US$198,000)
      • Russia: median ₽2,991,000 (US$42,000); 90th percentile ₽6,410,000 (US$90,000)
      • Singapore: median S$143,000 (US$106,000); 90th percentile S$263,000 (US$195,000)
      • Switzerland: median CHF 177,000 (US$190,000); 90th percentile CHF 355,000 (US$382,000)
      • Taiwan: median NT$1,819,000 (US$65,000); 90th percentile NT$3,387,000 (US$121,000)
      • United Kingdom: median £90,000 (US$123,000); 90th percentile £166,000 (US$228,000)

      The only countries with earnings as high as the US are Israel and Switzerland, and no countries have earnings as high as Seattle or the San Francisco Bay Area. The cost of living in major cities in Israel and Switzerland is around 20% higher than in Silicon Valley.[9]

      Compensation across the world is often higher if you work from a major city.

      Software engineer compensation in major cities outside the US[6]

      • Bangalore, India: median ₹3,569,000 (US$48,000); 90th percentile ₹7,583,000 (US$102,000)
      • Dublin, Ireland: median €106,000 (US$120,000); 90th percentile €189,000 (US$215,000)
      • London, UK: median £95,000 (US$130,000); 90th percentile £170,000 (US$233,000)
      • Toronto, Canada: median C$149,000 (US$120,000); 90th percentile C$273,000 (US$220,000)
      • Vancouver, Canada: median C$156,000 (US$126,000); 90th percentile C$306,000 (US$247,000)

      It can be difficult to get a visa to work in the US. For example, US immigration law mandates that a maximum of 65,000 H-1B visas (one of the most common types for software engineers) are issued a year. Also, because of the cost of flying you out for an interview, there will often be a higher bar for international applicants passing phone interviews.

      There are some things that can make it easier to get a visa:

      • Having a degree in computer science or other field related to your job
      • Applying to companies with enough capital and flexibility to bear the time and financial costs of the visa process
      • Having a specific unusual skill set that may be hard to find in the US

      Take a look at this blog to find out more.

      Despite all of this, remote work in software development is becoming far more common. A growing number of companies hire globally for remote roles and pay US-market compensation. If you land one of those roles, you can earn a lot from anywhere.

      Software engineering job outlook

      The future demand for software engineers looks promising. The US Bureau of Labor Statistics projects 22% growth in US employment of software engineers from 2020 to 2030, much higher than the projected growth rate across all occupations (8%). The main reason given for this growth is a large projected increase in demand for software for mobile technology, the healthcare industry, and computer security.

      Software engineering job outlook according to the US Bureau of Labor Statistics

      The number of web development jobs is projected to grow by 13% from 2020 to 2030, mainly due to the expected growth of e-commerce and the increasing number of mobile devices that access the web.

      What does this mean for future salaries? Strong growth in demand provides the potential for salary growth, but it also depends on how easily the supply of engineers can keep up with demand.

      Web development job outlook according to the US Bureau of Labor Statistics

      Software engineering job satisfaction

      The same high demand for software engineers that leads to high pay also leads to high bargaining power. As a result, job satisfaction among software engineers is high.

      Many software engineers we have spoken to say the work is engaging, often citing the puzzles and problems involved with programming, and being able to enter a state of flow (which is one of the biggest predictors of job satisfaction). On the other hand, working with large existing codebases and fixing bugs are often less pleasant. Read our five interviews with software engineers for more details.

      Work-life balance in software engineering is generally better than in jobs with higher or comparable pay. According to one survey, software engineers work 8.6 hours per day (though hours are likely to be longer in higher-paid roles and at startups).

      Tech companies tend to have progressive workplaces, often offering flexible hours, convenient perks, remote working, and results-driven cultures. The best companies are widely regarded as among the best places to work in the world.

      Examples of people pursuing this path

      How to predict your fit in advance

      The best way to gauge your fit is to try it out. You don’t need a computer science degree to do this. We recommend that you:

      1. Try out writing code — as a complete beginner, you can write a Python program in less than 20 minutes that reminds you to take a break every two hours (see the sketch just after this list). Once you know the fundamentals, try taking an intro to computer science and programming class, or work through free resources. If you’re in college, you could try taking CS 101 (or an equivalent course outside the US).
      2. Do a project with other people — this lets you test out writing programs in a team and working with larger codebases. It’s easy to come up with programming projects to do with friends — you can see some examples here. Contributing to open-source projects in particular lets you work with very large existing codebases.
      3. Take an internship or do a coding bootcamp.
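
      To see how little code that first program needs, here is a minimal sketch in Python; the two-hour interval and the wording of the message are arbitrary choices, and a fancier version might show a desktop notification instead of printing a line.

        # Minimal break reminder: sleep for two hours, print a reminder, repeat forever.
        import time

        BREAK_INTERVAL_SECONDS = 2 * 60 * 60  # two hours

        while True:
            time.sleep(BREAK_INTERVAL_SECONDS)
            print("Time to take a break! Stand up and stretch for a few minutes.")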

      It seems likely that the very best software engineers are significantly more productive than the average. These top engineers are often people who spend huge amounts of time practising. This means that if you enjoy coding enough to want to do it both as a job and in your spare time, you are likely to be a good fit.

      How to enter this field

      While a degree in computer science or a quantitative subject is often helpful, many entry-level jobs don’t require one, meaning that software engineering is open to people with backgrounds in humanities and social sciences.

      To enter, you need some basic programming skills and to be able to demonstrate a strong interest in software engineering. We’ve seen many people with humanities and social science degrees get junior software engineer jobs with high salaries, just through learning on their own or through coding bootcamps.

      Learning to program

      Basic computer programming skills can be extremely useful whatever you end up doing. You’ll find ways to automate tasks or analyse data throughout your career. This means that spending a little time learning to code is a very robustly useful option.

      • Learning on your own. There are many great introductory computer science and programming courses online, including: Udacity’s Intro to Computer Science, MIT’s Introduction to Computer Science and Programming, and Stanford’s Programming Methodology. Don’t be discouraged if your code doesn’t work the first time — that’s what normally happens when people code!
      • Attending a coding bootcamp. We’ve advised many people who managed to get junior software engineer jobs in less than a year through going to a bootcamp. Coding bootcamps are focused on taking people with little knowledge of programming to as highly paid a job as possible within a couple of months. This is a great entry route if you don’t already have much background, though some claim the long-term prospects are not as good because you lack a deep understanding of computer science. Course Report is a great guide to choosing a bootcamp. Be careful to avoid low-quality bootcamps. To find out more, read our interview with an App Academy instructor.
      • Studying computer science at university (or another subject involving lots of programming). If you’re in university, this is a great option because it allows you to learn programming while the opportunity cost of your time is lower. It will also give you a better theoretical understanding of computing than a bootcamp will (which can be useful for getting the most highly paid and intellectually interesting jobs), a good network, some prestige, and a better understanding of lower-level languages like C. Having a CS degree also makes it easier to get a US work visa if you’re not from the US.
      • Doing internships. If you can find internships, ideally at your target employers (whether big tech companies or nonprofits), you’ll gain practical experience and the key skills you otherwise wouldn’t pick up from academic degrees (e.g. using version control systems and powerful text editors). Take a look at our list of software engineering (and machine learning) internships at top companies.

      Getting your first job in software engineering

      Larger companies will broadly advertise entry-level roles. For smaller companies, you may have to reach out directly and through your network. You can find startup positions on job boards such as AngelList, and many top venture capital firms have job boards for their portfolio companies.

      Large software firms can have long and in-depth interview processes. Early rounds tend to test general software knowledge, while later rounds typically pose coding and algorithm problems, which you’ll be expected to solve in collaboration with the interviewer.

      It’s worth practising software engineering interview questions in advance. Often this means applying first to companies you’re less keen to work for, and using those applications to get used to the process. This can be stressful (partly because you might face some early rejections, and partly because it’s tricky to navigate applying for a job you don’t really want), so it’s important to take care of your mental health throughout.

      It will also probably help to study the most popular interview guide, Cracking the Coding Interview. You can also practise by doing TopCoder problems.
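
      For a flavour of these practice problems, here is a sketch of a classic warm-up question (the function and test values are our own illustration, not taken from either resource): given a list of numbers and a target, return the indices of two entries that sum to the target.

        # Classic interview warm-up: find indices of two numbers summing to a target.
        # A dict of previously seen values gives a single O(n) pass instead of nested loops.
        def two_sum(nums, target):
            seen = {}  # value -> index where it appeared
            for i, x in enumerate(nums):
                if target - x in seen:
                    return seen[target - x], i
                seen[x] = i
            return None  # no valid pair

        assert two_sum([2, 7, 11, 15], 9) == (0, 1)

      In an interview, you’d be expected to narrate the trade-off out loud: the dictionary costs extra memory but cuts the running time from quadratic to linear.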

      We think that this guide to getting a software engineering job is particularly helpful. There are six rough steps:

      1. Send a company your resume. Make it as specific as possible to the job you’re applying for, and proofread it carefully. If you can get a referral from a friend, that will significantly increase your chances of success.
      2. Speak to a recruiter. Read up about the company in advance, and make sure you have questions. Be nice — it’s going to help if the recruiter is on your side.
      3. Have a technical phone interview. You’ll solve some problems together. Make sure you ask questions to clarify the problem, and strategise about the best possible approach before you start writing code. Finish by checking for bugs and make sure you’re handling errors correctly. When you’re done, ask the interviewer some questions!
      4. Have a three- to six-hour on-site interview. It’s key to talk out loud as you work through a problem. And again, ask your interviewer some questions about them and the company.
      5. Get an offer from the recruiter. Make sure they think you are seriously considering the company, or you may not get an offer at all. If you don’t get one, ask for feedback (though it’s not always possible for companies to give detailed feedback). If you need more time to think (or to apply elsewhere), tell them in advance; they may hold off on finalising details until you’re ready to seriously consider an offer.
      6. Accept the offer!

      Want one-on-one advice on pursuing this path?

      If you think software engineering might be a great option for you, but you need help deciding or thinking about what to do next, our team might be able to help.

      We can help you compare options, make connections, and possibly even help you find jobs or funding opportunities.

      APPLY TO SPEAK WITH OUR TEAM

      Learn more

      Top recommendations

      Further recommendations

      Find a job in this path

      If you think you might be a good fit for this path and you’re ready to start looking for jobs, see our curated list of opportunities:

        View all opportunities

        Read next:  Learn about other high-impact careers

        Want to consider more paths? See our list of the highest-impact career paths according to our research.

        Plus, join our newsletter and we’ll mail you a free book

        Join our newsletter and we’ll send you a free copy of The Precipice — a book by philosopher Toby Ord about how to tackle the greatest threats facing humanity. T&Cs here.

        Audrey Tang on what we can learn from Taiwan’s experiments with how to do democracy https://80000hours.org/podcast/episodes/audrey-tang-what-we-can-learn-from-taiwan/ Wed, 02 Feb 2022 22:43:27 +0000

        Expert in AI hardware https://80000hours.org/career-reviews/become-an-expert-in-ai-hardware/ Mon, 11 Sep 2023 00:00:44 +0000

        In 1965, Gordon Moore observed that the number of transistors you can fit onto a chip seemed to double every year. He boldly predicted, “Integrated circuits will lead to such wonders as home computers[,] automatic controls for automobiles, and personal portable communications equipment.”1

        Moore later revised his estimate to every two years, but the doubling trend held, eventually becoming known as Moore’s Law.

        This technological progress in computer hardware led to consistent doublings of performance, memory capacity, and energy efficiency. This was achieved only through astonishing increases in the complexity of design and production. While Moore was looking at chips with fewer than a hundred transistors, modern chips have transistor counts in the tens of billions and can only be fabricated by some of the most complex machinery humans have invented.2

        Besides personal computers and mobile phones, these enormous gains in computational resources — “compute” — have also been key to today’s rapid advances in artificial intelligence. Training a frontier model like OpenAI’s GPT-4 requires thousands of specialised AI chips with tens of billions of transistors, which can cost tens of thousands of dollars each.3
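
        For a sense of scale, here is a quick back-of-envelope in Python; the chip count and unit price are illustrative assumptions within the ranges just mentioned, not reported figures.

          # Back-of-envelope cost of the chips used to train a frontier model.
          # Both numbers are illustrative assumptions, not reported figures.
          num_chips = 5_000        # "thousands of specialised AI chips"
          price_per_chip = 30_000  # "tens of thousands of dollars each"
          print(f"Chips alone: ${num_chips * price_per_chip:,}")  # Chips alone: $150,000,000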

        As we have outlined in our AI risk problem profile, we think dangers from advanced AI are among the most pressing problems in the world. As they progress this century, AI systems — created with and running on AI hardware — may develop advanced capabilities and features that carry profound risks for humanity’s future.

        Navigating those risks will require crucial work in forecasting AI progress, researching and implementing governance mechanisms, and assisting policy makers, among other things. Expertise in AI hardware can be of use in all these activities.

        We are very enthusiastic about altruistically motivated people who already have AI hardware expertise moving into the AI governance and policy space in the short term. And we’re also enthusiastic about people with the background skills and strong personal fit to succeed in this field gaining AI hardware expertise and experience that could be useful later on.

        Using AI hardware expertise to reduce catastrophic risks is a relatively new field, and there is a lot of work needed right now to develop it.

        It’s hard to predict how the field will evolve, but we’d guess that there will continue to be useful ways to contribute for years to come. At some point, there may be less need to conceive of new governance regimes and more need to work out the implementation details of specific policies. So we’re also pretty comfortable recommending that people start now on gaining hardware-related skills and experience that could be useful later on. Hardware skills and experience are highly valuable in general, so this path is likely to have good exit options anyway.

        You can also read our career review of AI governance and coordination, which discusses how valuable this kind of expertise can be for policy.

        In a nutshell: Reducing risks from AI is one of the most pressing problems in the world, and we expect people with expertise in AI hardware and related topics will be in particularly high demand in policy and research in this area. For the right person, gaining and applying AI hardware skills to risk-reducing AI governance work could be their most impactful option.

        But becoming an expert in this field is not easy and will not be a good fit for most people, and it may be challenging to chart a clear path through the complex and evolving world of AI governance agendas.

        Pros

        • Opportunity to make a significant contribution to the growing field of AI governance
        • Intellectually challenging work that offers strong career capital for a range of paths
        • Working in a cutting-edge and fast-moving area

        Cons

        • You need strong quantitative and technical skills
        • There’s a lot of uncertainty about what needs to be done in this space
        • There’s a real possibility of causing harm in this field
        • Some — but not all — of the relevant roles may involve stressful work and long hours

        Key takeaways on fit

        For anyone with expertise in AI hardware, using these skills to contribute to risk-reducing governance approaches should be a top contender for your career. If you don’t yet have this experience, it might be worth developing these skills if you’re particularly excited about studying computer science and engineering, electrical engineering, or other relevant fields. These fields require strong maths and science skills.

        We suggest anyone interested in this path should also familiarise themselves with AI governance and coordination.

        Recommended

        If you are well suited to this career, it may be the best way for you to have a social impact.

        Review status

        Based on a medium-depth investigation 

        Why might becoming an expert in AI hardware be high impact?

        The basic argument for why being an expert in AI hardware could be impactful is:

        1. Increasingly advanced AI seems likely to be very consequential this century and may carry existential risks.
        2. There are various ways that expertise in AI hardware can help with (a) forecasting AI progress and (b) ensuring AI is developed responsibly.

        The main reason why expertise in AI hardware can help reduce risk is that, alongside data and ideas, compute is a key input into overall AI progress.4

        Researchers have identified scaling laws showing that, as you train AI systems using more compute and data, those systems predictably improve on many performance metrics. As a result, you now need thousands of expensive AI chips running for months to train a frontier AI model, amounting to tens of millions of dollars in compute costs alone.5
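
        These laws are typically fit as power laws in model size and training data. As a stylised sketch (the functional form follows the published scaling-law literature, but the symbols and constants here are illustrative, not taken from this article):

          L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

        where L is the model’s loss, N the number of parameters, D the amount of training data, and E, A, B, \alpha, and \beta are fitted constants. More compute lets you raise N and D, which predictably lowers L.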



        Credit: Epoch AI (2023)


        AI chips are specialised to perform the specific calculations needed for training and running AI models. In practice, you cannot train frontier models with general-purpose chips — you need specialised chips.6 And to keep up, you need cutting-edge chips.7 AI labs that are stuck with older generations of chips pay more money and spend more time training models.

        It’s possible that compute will become a less important input into AI progress in the future, or that much AI training will be done using hardware other than AI chips.8 But it seems very likely that access to cost-effective compute will remain vitally important for at least the next five years, and probably beyond that.9

        Some ways hardware experts could help positively shape the development of AI include:

        • Providing strategic clarity on AI capabilities and progress, in particular the current and future pace and drivers of those things, in order to inform research and decisions relevant to AI governance
        • Researching hardware-related governance mechanisms and policies, which seem promising since AI hardware is necessary, quantifiable, physical, and has a concentrated (though global) supply chain.10 (This field is sometimes called “compute governance.”)
          • Designing monitoring regimes to make compute usage more transparent, for example by researching a compute monitoring scheme for large AI models
          • Determining the feasibility and usefulness of hardware-enabled mechanisms as a tool for AI governance
          • Researching ways to limit access to compute to responsible and regulable actors
          • Developing prototypes of novel hardware security features
          • Understanding how compute governance fits into the broader geopolitical landscape
        • Working in government and policy roles on all the above
          • This may be some of the most important work to be done with these skills, but there’s less clarity as of this writing about what these roles will look like.
        • Doing impactful and safety-oriented work — including liaising with policymakers — from within industry
        • Advising policymakers and answering researchers’ questions as an expert, while working on something else, e.g. in industry
        • Though this is more speculative, you might work for third-party auditing organisations as part of a future compute governance programme.

        As far as we know, there are currently few AI hardware experts working in these areas who are motivated by reducing existential risk. As of mid-2023, there seem to be about 3–12 existential-risk-focused full-time equivalents (FTEs) forecasting AI progress with a focus on hardware, and about 10–20 such FTEs working on other projects related to compute governance.11

        It’s hard to estimate how many additional AI hardware experts are needed, and the answer could change rapidly — we recommend that you do some of your own research on this and talk with people in the field.

        There are ways in which this work could end up being net negative. For example, restricting certain actors’ access to compute could increase geopolitical tensions or lead to a concentration of power. Or work on AI hardware could lead to cheaper or more effective chips, accelerating AI progress. Or governance tools like compute monitoring regimes could be exploited by bad actors.

        Some compute governance proposals involve monitoring how AI chips are used by companies, which can raise privacy concerns. Finding ways to implement governance while still protecting personal privacy could be valuable work.

        If you do take this path, we encourage you to think carefully through the implications of your plans, ideally in collaboration with strategy and policy experts also focused on creating safe and beneficial AI. (See our article about accidental harm and tips on how to avoid it.)

        Another potential downside of gaining expertise in AI hardware to reduce catastrophic risks is that roles in industry, where your impact could be ambiguous or even negative, may end up being more appealing in some ways than higher-impact but lower-paid roles in policy or research.

        If you believe it might be difficult for you to switch out of high-paying industry positions and into roles with much more potential to help others and reduce AI risk, you should carefully consider how to mitigate these challenges. You might aim to save more than usual while earning a high salary in case you later end up making less money; you could donate any earnings above a certain threshold; and you can make sure you’re part of a community that helps you live up to your values.

        Want one-on-one advice on pursuing this path?

        If you think this path might be a great option for you, but you need help deciding or thinking about what to do next, our team might be able to help.

        We can help you compare options, make connections, and possibly even help you find jobs or funding opportunities.

        APPLY TO SPEAK WITH OUR TEAM

        What does working in high-impact AI hardware expert roles actually look like?

        Compute-focused AI governance is an exciting, burgeoning field with lots of activity and many open questions to tackle. There are also relevant policy windows open or likely opening soon as public awareness of risks from AI has increased.

        The most common kind of role for AI hardware experts is research, though other potentially impactful roles include working as a policymaker or staffer, as a policy analyst,12 or communicating research to policymakers or the public.

        Researcher roles are likely to involve things like:

        • Investigating hardware-related governance mechanisms or forecasting AI progress, and communicating results from those investigations to decision makers or other researchers
        • Interviewing experts on specific topics related to this
        • Writing policy briefs
        • Advising researchers working on AI governance
        • Managing or mentoring others who work on this

        Careers in government, especially in the US, may be highly impactful too. However, it’s not yet clear whether AI policy development has advanced to the point where governments will aim to hire AI hardware experts directly. It may at some point become clear to policymakers that AI hardware knowledge is extremely valuable for implementing AI policy, at which point these experts will be in high demand. We have a separate article about opportunities for getting involved in US AI policy.

        Some people working in this field today do so for Washington, DC-based think tanks such as:

        You can also consider research organisations with relatively less of a policy focus, such as:

        Careers in industry (including working for chip designers like Nvidia, semiconductor manufacturers, or cloud providers) and in academia could be valuable too, though mainly for developing career capital in the form of skills, connections, and credentials. Be aware, though, that such work could unintentionally speed up AI progress.

        Knowledge in AI hardware could also be used to do grantmaking, field-building, and research or policy work on AI governance topics that aren’t centrally about AI hardware. However, for these paths, the returns to greater AI hardware expertise will likely diminish more steeply.

        How to enter AI hardware expert careers

        Though it’s possible to pick up some amount of hardware knowledge while working as, say, an AI governance researcher focused on other topics, the kind of expertise that’s most needed is the sort you only get after some years of studying or working with hardware.

        • If you already have expertise in AI hardware, you can consider applying to research or policy fellowships or for entry-level roles like research assistant. In some cases, it’s possible to transition from a career in hardware or semiconductors directly into a more senior research or policy role, especially if you have some prior experience with AI governance.
          • Though it’s not a career, you can also usefully offer to advise people who are already working on governance and policy questions dealing with AI hardware.
          • Some of the highest-impact jobs may be at major AI labs like OpenAI, DeepMind, and Anthropic.
          • See the US policy fellowship database for a list of policy fellowships.
          • We also have AI safety and policy fellowships on our job board.
        • If you have a degree (or have otherwise gained skills) related to AI hardware, but have no professional experience, you can consider building career capital by taking roles in industry or maybe academia. You’re likely to get the most useful experience working on AI hardware directly for companies like Nvidia, but other chip and semiconductor companies seem promising too.
        • If you don’t yet have experience or skills related to this, it’s unclear whether AI hardware is the best thing for you to focus on. Perhaps it is if you are especially excited about it or feel that you may be an especially good fit for it.
          • Studying computer engineering at the undergraduate level is typically required to work in industry. The coursework requires strong ability in science and maths. You may also want to obtain a master’s degree in the field.
          • The state of the art in this field is constantly evolving, so you should expect to continue learning after your formal education ends. Working at the most cutting-edge companies will likely give you the best understanding of technological developments.
          • An educational background in computer and AI hardware may itself offer significant advantages when starting a career in AI governance, particularly in Washington, DC, though you’ll likely want to supplement this technical expertise with some policy-related career capital, such as a prestigious fellowship.

        Specific types of knowledge and experience that seem promising include knowledge and experience in AI chip architectures and design, hardware security, cryptography, cybersecurity, semiconductor manufacturing and supply chains, cloud computing, machine learning, and distributed computing. It seems especially valuable to have people who also have knowledge useful for AI governance or policy-making more broadly, though this is not necessary.

        Learn more

        Top recommendations

        Further recommendations

        Chris Olah on what the hell is going on inside neural networks https://80000hours.org/podcast/episodes/chris-olah-interpretability-research/ Wed, 04 Aug 2021 19:49:21 +0000

        Brian Christian on the alignment problem https://80000hours.org/podcast/episodes/brian-christian-the-alignment-problem/ Fri, 05 Mar 2021 20:55:49 +0000

        Stuart Russell on the flaws that make today’s AI architecture unsafe, and a new approach that could fix them https://80000hours.org/podcast/episodes/stuart-russell-human-compatible-ai/ Mon, 22 Jun 2020 23:44:13 +0000
