Non-technical roles in leading AI labs
In a nutshell: We think it’s probably very valuable for talented people focused on safety and social impact to work at leading AI labs — even if they aren’t in technical or policy roles. Non-technical roles offer opportunities to do things like:
- Shift the culture around AI toward safety and positive social impact.
- Recruit safety-minded researchers.
- Help on some safety-relevant projects.
If you can find a non-technical role that’s an especially good fit for you, we think this might be your highest-impact option.
Sometimes recommended — personal fit dependent
This career will be some people's highest-impact option if their personal fit is especially good.
Review status
Based on a shallow investigation
Why might non-technical roles in leading AI labs be high impact?
Although we think technical AI safety research and AI policy are particularly impactful, having very talented people focused on safety and social impact at top AI labs may also be very valuable, even when they aren’t in technical or policy roles.
For example, you might be able to:
- Shift the culture around AI toward safety and positive social impact by talking publicly about what your organisation is doing to build safe and beneficial AI (like DeepMind has done).
- Recruit safety-minded researchers.
- Design internal processes to consider social impact issues more systematically in research.
- Help different teams coordinate around safety-relevant projects.
We’re not sure which roles are best, but in general, roles in strategy, ethics, or communications seem promising. You could also pursue a role that makes an AI lab’s safety team more effective, such as in operations or project management.
If you can find a position at an organisation specifically focused on AI safety (like Redwood Research), then any role that helps them do their work better can make a contribution.
That said, it seems possible that some of these roles could have a veneer of contributing to AI safety, without doing much to head off bad outcomes. For this reason, it seems particularly important to continue to think critically and creatively about what kinds of work in this area are useful. You can read more in our article about whether it’s good to work at a leading AI lab (whether in technical or non-technical roles).
Some roles in this space may also provide strong career capital for working in AI policy by putting you in a position to learn about the work these labs are doing, as well as the strategic landscape in AI.
Want one-on-one advice on pursuing this path?
If you think this path might be a great option for you, but you need help deciding or thinking about what to do next, our team might be able to help.
We can help you compare options, make connections, and possibly even help you find jobs or funding opportunities.
Learn more
- Our problem profile on AI risk
- The 80,000 Hours Podcast on Artificial Intelligence (a collection of 10 key episodes on AI)
- Our career review of working in leading AI labs
- Guide to working in AI policy and strategy
- Podcast: Prof Allan Dafoe on trying to prepare the world for the possibility that AI will destabilise global politics
- Podcast: Ben Garfinkel on scrutinising classic AI risk arguments
Read next: Learn about other high-impact careers
Want to consider more paths? See our list of the highest-impact career paths according to our research.