Beliefs are extremely powerful. Because of this, negative and false beliefs have a high potential for harm. This is perhaps obvious from a behavioral standpoint – distorted beliefs about vaccines, for example, can lead people to be unnecessarily cautious about vaccination [1]. This has real consequences, including outbreaks of vaccine-preventable diseases [2]. But beliefs themselves can also be physically harmful – for example in the so-called nocebo effect, the evil twin of the placebo effect, where simply believing in negative outcomes from a given treatment can cause actual physical symptoms [3]. Tell someone to expect pain from a treatment, and they will be prone to experience greater pain than if they hadn’t been told to expect it.
It's important then to be able to understand and update one’s beliefs. It’s also something worth thinking about in the context of fighting misinformation.
While doing research for this essay I became aware of Tali Sharot, who has done a great deal of work on how people form and update beliefs.
My thinking before looking into it was that people just needed to be presented with the right information at the right time to be able to update their beliefs. To a certain extent this is true. But what counts as “the right information”, and what is “the right time”? My background is in automated fact checking, so I look at this from the perspective of factual accuracy: it’s better to believe things that are true rather than false, right? And if it’s all about what’s “factual”, shouldn’t timing and presentation be secondary?
Not quite. Accuracy is only one part of the equation when it comes to beliefs.
Sharot highlights this in her seminal paper “The optimism bias” [4]. Namely, as the title implies, people actually tend to be biased towards optimism in their beliefs. This may sound surprising – I definitely tend to feel a bit biased in the opposite direction with my beliefs. But when I thought more about it, it made sense – all things being equal, I actually tend to assume normal everyday things will turn out pretty well (or at worst, neutrally). For example, when I ride my bike to work, or when I used to drive, I probably severely underestimate(d) my likelihood of getting into an accident, which on a couple of unfortunate occasions turned out to be wrong. The science backs this up: for example, despite the divorce rate sitting at around 50%, newlywed couples will estimate their own chances of divorce as almost zero [4]. This makes sense, since people’s long-term happiness rests on that optimism (if one assumed they were likely to divorce their new spouse, they would probably feel a pretty persistent sense of dread and regret).
This turns out to have implications for how people update their beliefs as well. For example, in [4] Sharot describes the effect of informing people of their risk of different adverse events (e.g. Alzheimer’s disease, cancer, burglary). Those who overestimated their risk and were corrected (“Good news! You’re less likely to get robbed than you thought!”) updated their belief substantially, while those who underestimated their risk (“Bad news! You actually have a higher risk of being robbed than you thought.”) did not update their initial belief much.
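To make the asymmetry concrete, here is a minimal sketch of how such an update might be modeled – my own toy illustration, not the analysis from [4] or [5] – where a risk belief is nudged toward new information with a larger “learning rate” for good news than for bad news. All numbers are invented.

```python
# Toy illustration of asymmetric belief updating (my own sketch, not the
# model used in [4]/[5]): beliefs move toward new information, but more
# readily when the news is good than when it is bad. Numbers are made up.

def update_risk_belief(prior_risk: float, stated_risk: float,
                       lr_good: float = 0.7, lr_bad: float = 0.2) -> float:
    """Nudge a risk estimate toward the stated risk, with a larger
    learning rate for good news (risk lower than feared)."""
    error = stated_risk - prior_risk
    lr = lr_good if error < 0 else lr_bad
    return prior_risk + lr * error

# "Good news": I thought my burglary risk was 40%, I'm told it's 20%.
print(update_risk_belief(0.40, 0.20))  # ~0.26 -> a large update toward 20%

# "Bad news": I thought my risk was 10%, I'm told it's 30%.
print(update_risk_belief(0.10, 0.30))  # ~0.14 -> barely budges toward 30%
```

Which direction counts as “good news”, of course, turns out to be the context-dependent part.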
This “asymmetric updating” [5] has been demonstrated in a variety of ways (see [5] and [6] for examples). The essential message is: people are more likely to update their beliefs when given positive information. However, what counts as positive information depends on context, especially for political and legal issues. This is exemplified by Sunstein et al. in [7], where they look at updating beliefs around climate change. People who were uncertain whether man-made climate change was real were more likely to update their beliefs when told that estimates of temperature rise were too high (i.e. the temperature will not rise as much as previously thought), while those who already strongly believed in man-made climate change were more likely to update their beliefs when given the bad news that the temperature was likely to rise more than previously thought. So each group exhibited an asymmetry in belief updating, but in opposite directions. Does this mean it’s just confirmation bias? Perhaps in this case – but Sharot & Garrett highlight in [5] that in the general case this asymmetric updating operates independently of prior belief. So what could explain this?
Recently, Sharot et al. more formally defined the different aspects that lead people to maintain or change a belief by viewing it as a value-based decision resting on the expected utility of that belief [6]. They break this down into expected internal and external outcomes, and further differentiate between accuracy-dependent and accuracy-independent factors. Whether one keeps or changes a belief will depend on weighing these different factors in the presence of new evidence (a calculation, they argue, that we are likely at least partially unaware of). Echoing Sharot’s and others’ work in this area, a persistent theme is: how does this belief make me feel?
Accuracy is only one factor, and the cogency of accurate information for changing a person’s belief is highly context dependent. From [6]:
This framework can account for cases in which people do not change their beliefs in the face of highly credible new evidence. For example, individuals fail to adequately alter their beliefs in the face of information that points toward unpleasant conclusions, such as learning that the likelihood of an adverse event (e.g., an accident or illness) is worse than expected (Kappes & Sharot, 2019; Moutsiana et al., 2015; Sharot et al., 2011), learning that others view them as less attractive than they thought (Eil & Rao, 2011), learning that they are likely to earn less than they expected (Mobius et al., 2011), or learning their preferred presidential candidate is lagging behind in the polls (Tappin et al., 2017). In all these cases, individuals may hold onto inaccurate beliefs that are associated with non-accuracy-dependent outcomes (e.g., the positive feeling of maintaining a belief that it is pleasant to have) that are greater than the external accuracy-dependent outcomes.
Beliefs are then based on subjective feeling and the expected utility they provide, and while accuracy is indeed a factor in updating a belief, its weight depends on the environment a person is in and the way in which information is presented to them. For example, in science there is a high value placed on accuracy and a potentially large cost for presenting inaccurate information (from a damaged reputation to losing one’s job). But accuracy has less utility when a belief is, for example, intimately tied to one’s community: one may maintain a false belief because it assures them of acceptance and connection, while updating that belief would leave them ostracized, thus incurring a high cost. In other words, keeping the false belief feels good and changing it would likely feel bad.
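As a rough sketch of that weighing – my own toy rendering of the framework in [6], with invented factors, scores, and weights – one can think of keeping versus changing a belief as comparing expected utilities, where accuracy is only one weighted factor among several:

```python
# Toy rendering of belief change as a value-based decision (my own sketch,
# loosely inspired by [6]): the utility of a belief combines how it feels,
# how it plays socially, and what accuracy is worth. All values invented.

from dataclasses import dataclass

@dataclass
class BeliefOutcomes:
    internal_feeling: float   # accuracy-independent, internal (hope vs. dread)
    social_fit: float         # accuracy-independent, external (acceptance by peers)
    accuracy_payoff: float    # accuracy-dependent (better decisions, reputation)

def expected_utility(o: BeliefOutcomes,
                     w_feel: float = 1.0, w_social: float = 1.0,
                     w_accuracy: float = 1.0) -> float:
    return (w_feel * o.internal_feeling
            + w_social * o.social_fit
            + w_accuracy * o.accuracy_payoff)

# Keeping a comforting but false belief vs. adopting an accurate but costly one
keep = BeliefOutcomes(internal_feeling=0.8, social_fit=0.9, accuracy_payoff=-0.3)
change = BeliefOutcomes(internal_feeling=-0.4, social_fit=-0.7, accuracy_payoff=0.9)

# With accuracy weighted no more heavily than feelings or social fit,
# the comforting belief wins.
print(expected_utility(keep), expected_utility(change))    # ~1.4 vs. ~-0.2

# Weight accuracy heavily (as a scientific environment effectively does)
# and the ordering flips.
print(expected_utility(keep, w_accuracy=5.0),
      expected_utility(change, w_accuracy=5.0))            # ~0.2 vs. ~3.4
```

Cranking up the weight on accuracy flips which belief “wins”, which is the point: the same evidence lands differently depending on what the belief is worth to the person holding it and to the environment they operate in.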
I can speculate then that in the climate-change case ([7]), for those who already believe in man-made climate change, the “bad news” of hearing that the temperature will likely rise more than expected causes an update in belief because the updated belief (worse potential outcome) serves a utility for those people. There’s the vindication that the person’s prior belief was at least pointing in the right direction (“I was almost right, but now I can be totally right!”). There’s also the utility of the knowledge itself – within that person’s social circles, where others are probably like-minded, they can have the satisfaction of both informing others of the correct information and not looking ignorant around others who already have it. And there’s the expected cost – the prior belief already conferred a sense of needing to act to prevent catastrophe, and the updated belief amplifies that sense of need because, well, the catastrophe is worse than thought. Again this is just speculation, but I can see how the framework posited in [6] would fit the case described in [7].
What are the implications for combating misinformation? It means the messaging matters. Facts by themselves aren’t enough; the context and the way in which information is presented are critical. One has to surmount cognitive inertia and operate within the milieu of a person’s cognitive biases and what feels good to them. Bare factual correction appears to have limited effectiveness and can even cause people to dig in their heels and cling more tightly to their false beliefs [8]. Pulling again from [6]:
These points also bear on effective responses to misinformation and “fake news.” In some cases, factual corrections do not work, in part because people do not want to believe them for reasons unrelated to accuracy (Van Bavel et al., 2020). In extreme cases, they can actually backfire and fortify people’s commitment to the beliefs that were supposed to be debunked (Nyhan et al., 2014). One reason may be people’s judgment that if they changed their belief, they would in some sense suffer (perhaps because the new belief would endanger their affiliation with generally like-minded others, perhaps because it would threaten their sense of identity, perhaps because it would make them feel sad or afraid). The implication is that if the correction can be made in a way that does not threaten people’s affiliations or self-understanding or the essentials of their view of the world, it is more likely to be effective (Kahan, 2017). “Surprising validators,” who are not expected to endorse a new belief (e.g., a conservative politician who supports gay rights) but who are credible to people who are considering whether to do so, can succeed in promoting belief change, in part, for this reason (Glaeser & Sunstein, 2014). If a new belief about (say) personal safety and health seems more like an opportunity rather than a threat, people may be more likely to be drawn to it
So here’s the good news: it’s evident that people are drawn to beliefs that serve some utility for them. We change our beliefs for similar reasons – when doing so consciously or subconsciously “feels” right. Accuracy is one component of this, and with an orientation towards making accurate information feel good to people, we can start to think of better ways to combat misinformation. Who are the people we wish to reach, and what utility do their inaccurate beliefs hold for them? Who are the people they would trust to convey new information that may clash with their existing beliefs?
For ourselves as well – which of our beliefs are actually serving us, and which do we hold on to simply because they feel good? Which new beliefs could we adopt that might have better overall utility? And how does the objective accuracy of our beliefs relate to their utility for us and the people around us?
References
[1] Kuru, Ozan, Dominik Stecula, Hang Lu, Yotam Ophir, Man-pui Sally Chan, Ken Winneg, Kathleen Hall Jamieson, and Dolores Albarracín. "The effects of scientific messages and narratives about vaccination." PLoS One 16, no. 3 (2021): e0248328.
[2] Phadke, Varun K., Robert A. Bednarczyk, Daniel A. Salmon, and Saad B. Omer. "Association between vaccine refusal and vaccine-preventable diseases in the United States: a review of measles and pertussis." JAMA 315, no. 11 (2016): 1149-1158.
[3] Colloca, Luana, and Franklin G. Miller. "The nocebo effect and its relevance for clinical practice." Psychosomatic Medicine 73, no. 7 (2011): 598.
[4] Sharot, Tali. "The optimism bias." Current Biology 21, no. 23 (2011): R941-R945.
[5] Sharot, Tali, and Neil Garrett. "Forming beliefs: Why valence matters." Trends in Cognitive Sciences 20, no. 1 (2016): 25-33.
[6] Sharot, Tali, Max Rollwage, Cass R. Sunstein, and Stephen M. Fleming. "Why and when beliefs change." Perspectives on Psychological Science 18, no. 1 (2023): 142-151.
[7] Sunstein, Cass R., Sebastian Bobadilla-Suarez, Stephanie C. Lazzaro, and Tali Sharot. "How people update beliefs about climate change: Good news and bad news." Cornell Law Review 102 (2016): 1431.
[8] Ecker, Ullrich K. H., Stephan Lewandowsky, John Cook, Philipp Schmid, Lisa K. Fazio, Nadia Brashier, Panayiota Kendeou, Emily K. Vraga, and Michelle A. Amazeen. "The psychological drivers of misinformation belief and its resistance to correction." Nature Reviews Psychology 1, no. 1 (2022): 13-29.
"It's important then to be able to understand and update one’s beliefs."
People's beliefs are similar to the Bayesian priors in machine learning models in that they determine the likely response to new information inputs. One danger I see is that people self-select their sources of information based on their existing viewpoints. This leads to even deeper reinforcement and emotional investment in their prior beliefs, causing extreme biases.
To get around this I've downloaded the Ground News and Allsides apps that select particular topics and show news stories side by side from left, right, and center political news sources. It's actually entertaining and helpful to see how the same set of facts can get spun in such contradictory ways.
Thanks for a fascinating overview!
This reminds me a lot of Tobias Leenaert's book "How to Create a Vegan World: A Pragmatic Approach". The facts are overwhelmingly in favor of a plant-based diet, but the facts are not enough. Besides the factors you mentioned, I think an important one is that behavior change is hard for many reasons, and belief change without behavior change causes cognitive dissonance, which people would like to avoid. Leenaert's point is that we should make people feel good about eating greener and not shame them into it.
The obvious question, though, is what all this has to do with NLP. Do you have some ideas? One thing I'm working on is mining arguments that are not necessarily factual about topics of interest https://di.ku.dk/english/news/2023/nuggets-mined-from-thousands-of-tweets-can-persuade-us-to-eat-more-climate-friendly/