January 15, 2025
4 min read
Does fact-checking work? Here’s what the science says
Communication and misinformation researchers explain the value of fact-checking, where perceptions of bias come from, and what Meta’s decision means

Meta plans to do away with its third-party fact-checking program in favor of X-style “community notes.”
PA Images/Alamy Stock Photo
It is said that a lie can travel halfway around the world while the truth is still putting on its shoes. This week, Facebook’s parent company Meta announced plans to discontinue the platform’s fact-checking program, founded in 2016, which pays independent groups to verify selected articles and posts in an effort to reduce online falsehoods and misinformation. The effort to counter false information online just got a little harder.
The company said the move was intended to counter political bias and censorship among fact-checkers. “Experts, like everyone else, have their own biases and perspectives. This manifests itself in the choices some make about what and how they fact-check,” Joel Kaplan, Meta’s chief global affairs officer, wrote on January 7.
Nature spoke with communication and misinformation researchers about the value of fact-checking, where perceptions of bias come from, and what Meta’s decision means.
Positive impact
When it comes to convincing people that information is true and trustworthy, “fact-checking does work,” says Sander van der Linden, a social psychologist at the University of Cambridge who served as an unpaid adviser on Facebook’s fact-checking efforts. “Research provides very consistent evidence that fact-checking at least partially reduces misperceptions about false claims.”
For example, a 2019 meta-analysis of the effectiveness of fact-checking in more than 20,000 people found an “overall significant positive impact on political beliefs.”
“Ideally, we want to prevent people from forming false perceptions in the first place,” van der Linden adds. “But if we have to deal with the fact that people have already been exposed, then reducing those misperceptions is almost as good.”
Jay Van Bavel, a psychologist at New York University in New York City, says fact-checking is less effective when the issue is polarized. “If you’re fact-checking things like Brexit or the US election, that’s where fact-checking doesn’t work very well,” he says. “Part of the reason is that partisan people don’t want to believe anything that makes their party look bad.”
But even when fact-checks don’t seem to change people’s minds about a contentious issue, they can still be helpful, says Alexios Mantzarlis, a former fact-checker who leads the Security, Trust, and Safety Initiative at Cornell Tech in New York City.
Articles and posts deemed false by fact-checkers are currently flagged on Facebook, Mantzarlis notes. The platform’s recommendation algorithms also show flagged content to fewer users, and flagged content is more likely to be ignored than read and shared.
Kate Starbird, a computer scientist at the University of Washington in Seattle, says that flagging a post as problematic can have knock-on effects for other users that research on the effectiveness of fact-checking doesn’t capture. “Measuring the direct impact of labels on user beliefs and behaviors is different from measuring the broader impact of fact-checking in the information ecosystem,” she adds.
As misinformation increases, so do red flags
As for Meta’s claim that fact-checkers are biased, Van Bavel agrees that misinformation from the political right is fact-checked, and flagged as problematic, on Facebook and other platforms more often than misinformation from the left. But he offers a simple explanation.
“The main reason is that conservative misinformation is more prevalent,” he says. “At least in the United States, if one political party is spreading most of the misinformation, the fact-checking will appear biased because there is far more criticism of that party.”
There are data to support this. A study published in Nature last year found that although politically conservative users on X were more likely than liberal users to be suspended for sharing misinformation, they were also more likely to share information from news sites judged to be low-quality by politically balanced groups of laypeople.
“If you want to predict whether someone is exposed to misinformation online, the best predictor is whether they are politically conservative,” says Gordon Pennycook, a psychologist at Cornell University in Ithaca, New York, who worked on the analysis.
Implementation matters
Meta’s chief executive Mark Zuckerberg said that Facebook could replace third-party fact-checking with a system similar to the Community Notes feature used by X, which crowdsources corrections and contextual information from users and appends them to posts.
Research shows that these systems can also correct misinformation to some extent. “The way it’s implemented in X doesn’t really work very well,” says van der Linden. He points to an analysis conducted last year which found that community notes on misleading posts about the US election often were not shown to users quickly enough, or at all, to stem the spread of misinformation. Keith Coleman, X’s vice-president of product, told Reuters last year that Community Notes “maintains a high bar to make notes effective and maintain trust.”
“Crowdsourcing can be a useful solution, but it really depends on how it is implemented,” adds van der Linden. “Replacing fact-checking with community notes seems likely to make things even worse.”
This article is reproduced with permission and was first published on January 10, 2025.