Preventing the spread of false ideas on social media
Our overall view: Sometimes recommended. We'd love to see more people working on this issue, but you might be able to do even more good working on one of our top priority problem areas.
Profile depth: Exploratory
Why might the spread of false ideas on social media be a pressing problem?
It seems plausible that political discourse is significantly affected by the way many people now receive their information: through algorithms owned and run by social media companies.
This could have a number of harmful effects, including:
- People’s views could become increasingly far from reality. If voters form highly inaccurate and difficult-to-change views about the world, this could hurt policy over a long period of time.
- Governments could make information readily available (for example, about an ongoing pandemic), but if people cannot tell whether the information they receive is reliable, they may not act on it. This is an example of a problem caused by a lack of epistemic security.
We aren’t sure how important these problems are, or how best to address them.
It seems likely that more research would be valuable to:
- Assess how strong the evidence is for social media (or the internet in general) driving inaccurate views.
- Identify the extent to which this could impact the long-term future.
What are the major arguments against this problem being pressing?
The issue is already receiving attention from social media companies, which are well positioned to work on it, making it harder for an individual to contribute to progress. For example, Facebook, YouTube, Twitter, and other platforms all have policies for tackling misinformation, though we might expect conflicts of interest to reduce the effectiveness of these efforts.
The issue has also been addressed by prominent academics, is explicitly political, and is regularly in the news. This makes it less likely that the problem is particularly neglected, or that it would benefit from additional people working on it rather than on another pressing problem that receives less attention.
Additionally, as far as we can tell, the evidence for social media driving inaccurate views is mixed — we’ve included some articles arguing in both directions just below.
That said, we haven’t looked into this problem that much, and there may be innovative ways people can make a difference here that we’re not aware of.
Learn more
Top recommendations
- Podcast: Tristan Harris on the need to change the incentives of social media companies
- Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization by Mark Ledwich and Anna Zaitsev, and a response to a critique of the paper
- How much do recommender systems drive polarization? by Jacob Steinhardt, which suggests that social media is not an important source of political polarisation
- Podcast: Nina Schick on disinformation and the rise of synthetic media
- Podcast: Martin Gurri on the revolt of the public & crisis of authority in the information age
- Podcast: Bruce Schneier on how insecure electronic voting could break the United States — and surveillance without tyranny
- Podcast: Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps
Read next: Explore other pressing world problems
Want to learn more about global issues we think are especially pressing? See our list of issues that are large in scale, solvable, and neglected, according to our research.