Artificial sentience
Our overall view
Sometimes recommended
Working on this problem could be among the best ways of improving the long-term future, but we know of fewer high-impact opportunities to work on this issue than on our top priority problems.
Profile depth
Exploratory
Why might the possibility of artificial sentience be an especially pressing issue?
AI systems in the future may be moral patients — that is, they could deserve moral consideration for their own sake. Why? The biggest reason we’re concerned is that they could become sentient — and so feel conscious pleasure, suffering, or other good and bad feelings.
If so, then we will need to ensure that the future goes well not only for humans and animals, but for AI systems themselves.
Mistreating sentient systems or allowing them to suffer — whether intentionally or accidentally, perhaps because we don’t know that they are sentient — could be a moral catastrophe, analogous to factory farming, but on a potentially much larger scale.
Whereas AI alignment and AI governance work seeks to ensure that the development of AI benefits humanity, work on artificial sentience seeks to ensure that the development of AI benefits AI systems themselves, or at least does not harm them.
It might sound a bit outlandish to think that AI systems could be sentient, and it’s true that we don’t have a great understanding of sentience and consciousness. However, many philosophers and consciousness researchers think there’s no reason in principle why an artificial system made of silicon couldn’t be sentient.
One way AI systems could be sentient is by emulating the computational structure of the human brain. If we are conscious in virtue of the computational structure of our brains (as is plausible), then digital people with the same computational structure would be conscious too. But AI systems that are very different from us might also have their own forms of sentience, in the same way that nonhuman animals like octopuses might.
We’re far from fully understanding this domain. Understanding when and how artificial systems could be conscious is even harder than understanding which nonhuman animals are sentient, because artificial systems can be even more architecturally different from us than animals are, do not share our biological substrate, and do not share our evolutionary history.
Unlike with nonhuman animals, though, we are actively engaged in designing artificial systems. And it seems very important to get this right — imagine if we mistakenly concluded that some huge number of systems we create are non-sentient, or are feeling pleasure, when really they are suffering. As AI systems continue to grow in both scale and capability, this issue will only become more pressing.
Work on artificial sentience can take the form of:
- Increasing our understanding of consciousness and related issues — either via direct research or by field-building to encourage better work on these topics. Research topics span neuroscience and other sciences of biological minds, artificial intelligence, philosophy of mind, and ethics.
- Thinking about the appropriate institutions and norms for making sure that the development of digital minds, if it happens, is managed well. Navigating these issues is especially important if the majority of future beings will be digital rather than biological.
Despite longstanding interest in the question of whether AI systems could be conscious — dating back to the very beginning of the field of AI — rigorous work on artificial sentience is surprisingly neglected, in part because it falls at the intersection of several fields of inquiry. The study of consciousness is also beset not just by empirical uncertainty, but by conceptual uncertainty as well. A small group of researchers, however, are doing work focused on the question of artificial consciousness. Institutions where this work happens include the Digital Minds Research Group at the Future of Humanity Institute and the Sentience Institute. Dedicated journals include the Journal of Artificial Intelligence and Consciousness.
Learn more about the possibility of artificial sentience
- Podcast: Robert Long on why large language models like GPT (probably) aren’t conscious
- Podcast: Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe
- Jamie Harris on why artificial sentience matters
- Amanda Askell on consciousness in current AI systems
- Nick Bostrom and Carl Shulman on sharing the world with digital minds
- Brian Tomasik on reinforcement learning in biological and artificial systems
- Luke Muehlhauser’s report on consciousness and moral patienthood, which discusses methodological and ethical issues that apply to both animal and AI consciousness
Read next: Explore other pressing world problems
Want to learn more about global issues we think are especially pressing? See our list of issues that are large in scale, solvable, and neglected, according to our research.