**I’ve changed the name of the blog from “Truth and Science” to “Calibrating Uncertainty.”**
Misinformation is as much of a problem now as it has ever been, and continues to have real-world consequences. I went back to refresh myself on some misinformation basics – what it is, how it spreads, who it affects, and what we know about combating it. For this post, I’m providing some notes and resources to serve as a refresher and basic introduction on each of these topics.
What is Misinformation?
Some high-level definitions:
Misinformation is inaccurate or misleading information that is shared unintentionally.
Disinformation is the deliberate dissemination of false or inaccurate information in order to discredit a person or organization.
There are many types of misinformation; different sources identify different subtypes:
Fabricated content: completely false content;
Manipulated content: distortion of genuine information or imagery, for example a headline that is made more sensationalist, often popularised by ‘clickbait’;
Imposter content: impersonation of genuine sources, for example by using the branding of an established news agency;
Misleading content: misleading use of information, for example by presenting comment as fact;
False context or connection: factually accurate content that is shared with false contextual information, for example when a headline of an article does not reflect the content;
Satire and parody: presenting humorous but false stories as if they are true. Although not usually categorised as fake news, this may unintentionally fool readers.
Propaganda: information deliberately spread to influence or raise awareness of a particular political cause or value position. Propaganda may resemble advertisements since both are highly visual, but propaganda does not engage in selling products or services.
Clickbait: a form of false advertisement whose main purpose is to encourage users to follow a link to a web page. Clickbait headlines often misrepresent the linked content and take users to pages that generate advertising revenue based on the number of clicks received.
How does it spread?
“The spread of true and false news online” (Vosoughi et al. 2018)
False information has been shown to spread faster than accurate information
This may be because false information tends to be more “novel” and emotionally intense than factual information
Bots spread false and true information at the same rate, suggesting that false information spreads farther than the truth because of humans rather than bots
Some compiled research on misinformation spread
How likely you are to be exposed to misinformation may be dependent on where you live
Demographic features, e.g. political orientation, may also influence how likely one is to be exposed to misinformation
People are more likely to believe fake news if they see it in a personally curated network
People are less skeptical of information they encounter on platforms they have personalized (through friend requests and “liked” pages, for instance) to reflect their interests and identity; in other words, curating one’s feed decreases skepticism towards the information it surfaces
Repeated exposure to false information may increase the likelihood that a person will believe it, though this notion has been contested (Lazer et al., 2018)
“Who is most likely to believe and to share misinformation” (Vicol, 2020): A review of lots of research on this topic by FullFact; some highlights
Presents more evidence indicating that repeated exposure may facilitate belief in false information, i.e. the “illusory truth” effect
Content which is more fluent (in terms of font size, grammar, word complexity, etc.) appears more true regardless of how true it actually is
Pictures can create the illusion of fluency and truth
Information which fits with one’s existing world view is more likely to be believed (i.e. motivated reasoning)
Virality of content has been shown to be driven by emotional response in some settings
What can we do about it?
First, “addressing fake news requires a multidisciplinary effort” (“The science of fake news”, Lazer et al., 2018); collaboration between researchers, platforms, policy-makers, educators, and the public.
It has been argued that we should focus on who is producing fake news rather than on the fake news itself (Lazer et al., 2018):
In evaluating the prevalence of fake news, we advocate focusing on the original sources—the publishers—rather than individual stories, because we view the defining element of fake news to be the intent and processes of the publisher. A focus on publishers also allows us to avoid the morass of trying to evaluate the accuracy of every single news story.
Fact checking itself is a relatively mature practice, but research on its effectiveness has produced mixed results. Which methods to combat misinformation actually work? From (Lazer et al., 2018):
Individuals tend not to question the credibility of information unless it violates their preconceptions or they are incentivized to do so. Otherwise, they may accept information uncritically. People also tend to align their beliefs with the values of their community.
Mixing sources could be beneficial – studies have demonstrated that people are no less likely, and in some cases more likely, to trust average people or computers to write news articles than journalists and editors, even when the source is the only variable changed, i.e. when people are shown the exact same articles attributed to different sources.
(Vicol, 2020) synthesizes some of the evidence on how to combat misinformation:
The “illusory truth” effect – where repeating information makes it more believable – can potentially also be used to combat misinformation by repeating the correct information, with the caveat that repeated corrections reduce but do not completely eliminate belief in inaccurate claims.
Encouragingly for fact checkers, the illusory truth effect can be countered, to some extent, by repeating the correct information. Experimental work found that stronger retractions, presented several times, could substantially reduce the influence of repeated misinformation. However, it is important to note that there is an asymmetry in the effects of repeating a claim and repeating its correction. Repeated corrections were found to reduce, but not completely eliminate, belief in inaccurate claims. Furthermore, in the real world, initial inaccurate reports of an event often attract more interest than their retractions.
Motivated reasoning (i.e. belief in claims which are consistent with one’s worldview) can also be mitigated
More encouragingly, a recent strand of literature suggests that motivated reasoning can be moderated, if we can get ourselves to rein in our gut judgement and make an effort to think more analytically.
There’s evidence that only a small percentage of people who come across misinformation they know to be false take action to correct it. In the same vein, only a fraction of those who share misinformation are challenged on it. How do we get people to challenge misinformation when they see it shared?
Some high-level recommendations:
Make corrections visible to the groups who are most likely to be misinformed.
Catch false claims early and fight back against repeated myths.
Draw audiences into the discussion.
Finally, some notes on how to combat misinformation from a meta-analysis (“A Meta-Analytic Examination of the Continued Influence of Misinformation in the Face of Correction: How Powerful Is It, Why Does It Happen, and How to Stop It?”, Walter and Tukachinsky, 2020):
Looking across 32 studies, the evidence indicates that correcting misinformation does not entirely eliminate its influence
Motivated reasoning plays a role in correcting misinformation – corrections are more effective when they are consistent with a person’s existing worldview
Similarly, coherence with one’s worldview is a strong influence – people appear more inclined to maintain a coherent worldview, even if it is incorrect, than to accept an incomplete or inconsistent one
For instance, in the context of vaccine safety, if people think that health organizations take part in an elaborate conspiracy to conceal relevant information from concerned parents, a message from health experts negating the vaccination-autism link will do little to reduce belief in misinformation. In fact, such efforts may backfire by making the audiences cling even harder to anti-vaccination myths. A more effective correction would try to substitute the original mental model with a coherent explanation that includes information about vaccine safety as well as explain the corrupt history of the anti-vaccine movement.
Corrections delivered by the source of misinformation are also significantly more effective in correcting the false belief
If the misinformation comes from a source that the person sees as credible, corrections are less effective
Corrections are less effective if the person had repeated exposure to the misinformation being corrected
Timeliness is important – time delay reduces the effectiveness of correcting misinformation
I like the new blog title. I get nervous when Truth is conflated with any method that is subject to our human biases. The point of the post is valid. The distinction between mis- and disinformation is critical to determining the societal responses that would be appropriate. Also, opinion is not information and should be excluded from this discussion. I would add that parody is off limits as well, provided it is obvious and not made to appear authentic.
Another distinction I would add is the authority of the source. Publishers who hold government office, physicians, law enforcement, and news organizations have a fiduciary duty to the public to provide accurate and unbiased information and not violate the public's trust. This is hard to sustain, and I'm concerned that this principle has been abandoned in the current period.
It has been replaced with the ethical rationalization put forth by Plato: the Noble Lie. In this worldview, some ends in society are so vital that any means to achieve them, including disinformation (i.e., lying), is justified.
These fiduciary organizations have been compromised in their function by personal beliefs in causes that they view as justifying “the Noble Lie.” This can range from saving a population from pandemics, to saving a nation from a particular elected official, to saving the planet from a climate apocalypse.
The final Durham Report, released on May 16th, 2023 (https://www.washingtonexaminer.com/news/justice/special-counsel-john-durhams-final-report), documents one such disinformation campaign that involved the highest levels of the US intelligence community conspiring with a political candidate to destroy the opposition candidate and his administration. This conspiracy was joined by many news organizations and perpetuated for years. Sometimes a conspiracy theory turns out to be an actual conspiracy.
The rational societal response would be to hold these fiduciaries accountable for this breach of trust by discounting any of their information regarding other stories, now or in the future. Office holders should be removed from positions of power, and news agencies should be purged of disinforming ideologues. The current state is corrosive to the effectiveness of self-government and threatens its future.