The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.
We are increasingly aware of how misinformation can affect elections. About 73% of Americans report seeing misleading election news, and about half have a hard time telling what’s true and what’s false.
When it comes to misinformation, it seems the phrase “going viral” is more than just a catchphrase. Scientists have found close parallels between the spread of misinformation and the spread of viruses. In fact, how misinformation spreads can be effectively explained using mathematical models designed to simulate the spread of pathogens.
Concern about misinformation is widespread: a recent United Nations survey found that 85% of people around the world are worried about it.
These concerns are well-founded. Foreign disinformation has grown in sophistication and scope since the 2016 US election. The 2024 election cycle saw dangerous conspiracy theories about “weather manipulation” undermining proper hurricane management, fake news about immigrants eating pets inciting violence against the Haitian community, and misleading election conspiracy theories amplified by the world’s richest man, Elon Musk.
Recent research employs mathematical models derived from epidemiology (the study of how and why diseases occur in populations). Although these models were originally developed to study the spread of viruses, they can be effectively used to study the spread of misinformation across social networks.
One class of epidemiological models that can be applied to misinformation is known as the susceptible-infectious-recovered (SIR) model. These models simulate the dynamics between susceptible (S), infected (I), and recovered or resistant (R) individuals.
These models are specified by a set of differential equations (which help mathematicians understand rates of change) and translate readily to the spread of misinformation. For example, on social media, misinformation is transmitted from individual to individual, with some becoming infected and others remaining immune. Still others act as asymptomatic vectors of disease, spreading misinformation without knowing it or suffering any negative consequences.
These models are very useful because they allow us to predict and simulate population dynamics and to derive measures such as the basic reproduction number (R0): the average number of new cases generated by a single “infected” individual.
As a result, there is growing interest in applying such epidemiological approaches to information ecosystems. The estimated R0 for most social media platforms is greater than 1, indicating that the platforms have the potential for misinformation to spread like an epidemic.
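To make this concrete, here is a minimal sketch of how an SIR-style model of misinformation might be written down and simulated. It is an illustration only: the transmission and recovery rates below are assumptions chosen for the example, not estimates from the research described here.

```python
# A minimal SIR sketch reinterpreted for misinformation (illustrative values only).
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    s, i, r = y                      # susceptible, "infected" (sharing), recovered/resistant
    ds = -beta * s * i               # exposure to sharers makes susceptible users "infected"
    di = beta * s * i - gamma * i    # sharers eventually stop spreading ("recover")
    dr = gamma * i
    return ds, di, dr

beta, gamma = 0.4, 0.1               # assumed transmission and recovery rates
r0 = beta / gamma                    # basic reproduction number; > 1 means epidemic-like spread
t = np.linspace(0, 100, 500)
s, i, r = odeint(sir, [0.99, 0.01, 0.0], t, args=(beta, gamma)).T
print(f"R0 = {r0:.1f}; peak share of the population spreading the story: {i.max():.2f}")
```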
Looking for a solution
Mathematical modeling typically involves either phenomenological studies (in which researchers describe observed patterns) or mechanistic studies (which make predictions based on known relationships). These models are particularly useful because they allow us to explore how possible interventions might reduce the spread of misinformation on social networks.
This basic process can be illustrated with the simple model shown in the graph below, which lets us explore and test how the system evolves under different assumptions.
Prominent social media figures with large followings can become “superspreaders” of election disinformation, spreading false information to potentially hundreds of millions of people. This reflects the current situation where election officials report falling behind in their attempts to fact-check disinformation.
Our model shows that debunking misinformation has only a minor effect if we conservatively assume that people have a 10% chance of becoming infected after each exposure. In the 10% infection-probability scenario, the population infected by election misinformation still grows rapidly (orange line, left panel).
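As a rough illustration only, one way a debunking intervention could be represented in this kind of model is as a force that pushes sharers out of the “infectious” state faster. In the sketch below, the 10% per-exposure infection chance comes from the scenario above, but the assumed five exposures per day, the baseline recovery rate and the debunking rate are hypothetical values chosen for the example, not parameters from the study.

```python
# Illustrative sketch: debunking modeled as a faster exit from the "infectious" state.
import numpy as np
from scipy.integrate import odeint

def sir_debunk(y, t, beta, gamma, debunk_rate):
    s, i, r = y
    ds = -beta * s * i
    di = beta * s * i - (gamma + debunk_rate) * i    # debunking speeds up "recovery"
    dr = (gamma + debunk_rate) * i
    return ds, di, dr

beta = 0.10 * 5      # assumed: 10% infection chance per exposure x five exposures per day
gamma = 0.1          # assumed baseline rate at which sharers stop spreading
t = np.linspace(0, 100, 500)
for debunk_rate in (0.0, 0.05):                      # without vs. with debunking
    s, i, r = odeint(sir_debunk, [0.99, 0.01, 0.0], t, args=(beta, gamma, debunk_rate)).T
    print(f"debunk_rate={debunk_rate}: share of population ever infected = {1 - s[-1]:.2f}")
```

With these illustrative numbers, the cumulative share of the population that ever becomes infected barely moves, echoing the point that debunking alone struggles to contain a fast-spreading story.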
Psychological “vaccination”
The virus-spread analogy for misinformation is apt because it allows scientists to simulate ways to counter its spread. These interventions include an approach called “psychological inoculation,” also known as prebunking.
Here, researchers pre-emptively introduce, and then refute, a falsehood so that people can build immunity against misinformation they encounter later. This is similar to vaccination, in which people are given a weakened dose of a virus to prepare their immune systems for future infection.
For example, a recent study used an AI chatbot to generate prebunks of common myths about election fraud. This involved warning people in advance that political actors might try to manipulate their opinion with sensational stories, such as the false claim that massive overnight “vote dumps” are flipping the election, along with key tips on how to spot such misleading rumors. These “inoculations” can then be incorporated into population models of the spread of misinformation.
Our graph shows that when prebunking is not used, it takes much longer for people to build up immunity to misinformation (left panel, orange line). The right panel shows how the number of people who are misinformed can be contained if prebunking is rolled out at scale (orange line).
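Assuming prebunking works like vaccination, one simple way to capture it in the same kind of model is to move susceptible users directly into the resistant group at some rate before they ever encounter the misinformation. The prebunking rate and the other numbers in the sketch below are illustrative assumptions, not values from the studies above.

```python
# Illustrative sketch: prebunking modeled as vaccination (susceptible -> resistant).
import numpy as np
from scipy.integrate import odeint

def sir_prebunk(y, t, beta, gamma, nu):
    s, i, r = y
    ds = -beta * s * i - nu * s      # nu: rate at which prebunking "immunizes" susceptible users
    di = beta * s * i - gamma * i
    dr = gamma * i + nu * s
    return ds, di, dr

beta, gamma = 0.4, 0.1               # assumed transmission and recovery rates
t = np.linspace(0, 100, 500)
for nu in (0.0, 0.1):                # no prebunking vs. prebunking rolled out at scale
    s, i, r = odeint(sir_prebunk, [0.99, 0.01, 0.0], t, args=(beta, gamma, nu)).T
    print(f"prebunking rate = {nu}: peak share infected = {i.max():.2f}")
```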
The point of these models is not to make the problem seem scary or to suggest that people are gullible vectors of disease. But there is clear evidence that some fake news stories spread like a simple contagion, infecting users immediately.
Meanwhile, other stories behave like more complex contagions, requiring people to repeatedly come into contact with misleading sources before becoming “infected.”
The fact that individuals differ in their susceptibility to misinformation does not undermine the usefulness of epidemiologically derived approaches. For example, models can be adjusted depending on how easy or difficult it is for misinformation to “infect” different subpopulations.
It may be psychologically uncomfortable for some to think of people this way, but as with viruses, most misinformation is spread by a small number of influential superspreaders.
By taking an epidemiological approach to studying fake news, we can predict its spread and model the effectiveness of interventions such as prebunking.
A recent study tested this viral approach using social media dynamics from the 2020 US presidential election and found that a combination of interventions can be effective in reducing the spread of misinformation.
Models are never perfect. But if we want to stop the spread of misinformation, we need to understand it in order to effectively counter its social harm.
This article was first published on The Conversation. Read the original article.