"It's important then to be able to understand and update one’s beliefs."
People's beliefs are similar to the Bayesian priors in machine learning models in that they determine the likely response to new information inputs. One danger I see is that people self-select their sources of information based on their existing viewpoints. This leads to even deeper reinforcement and emotional investment in their prior beliefs causing extreme biases.
To get around this I've downloaded the Ground News and Allsides apps that select particular topics and show news stories side by side from left, right, and center political news sources. It's actually entertaining and helpful to see how the same set of facts can get spun in such contradictory ways.
This reminds me a lot of Tobias Leenaert's book "How to Create a Vegan World: a Pragmatic Approach". The facts are overwhelming in favor of a plant based diet, but the facts are not enough. Besides the factors you mentioned, I think an important one is that behavior change is hard for many reasons, and belief change without behavior change causes cognitive dissonance, which people would like to avoid. Leenaert's point is that we should make people feel good about eating greener and not shame them into it.
It might help orient our thinking about what are useful outputs for e.g. large language models when it comes to presenting information. I guess this is more of an HCI problem but for example can we induce a prior in an LLM through clever prompting which leads to positive messages around factual information or which is more likely to get people to update false/negative beliefs?
"It's important then to be able to understand and update one’s beliefs."
People's beliefs are similar to the Bayesian priors in machine learning models in that they determine the likely response to new information. One danger I see is that people self-select their sources of information based on their existing viewpoints, which further reinforces and deepens their emotional investment in their prior beliefs, producing extreme biases.
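To make the analogy concrete, here is a minimal sketch (my own illustration, not anything from the post) of Bayesian updating with a Beta-Bernoulli model: given the same ten observations, an entrenched prior barely moves while an open one does.

```python
# Minimal sketch of how prior strength shapes the response to new evidence,
# using conjugate Beta-Bernoulli updating. All numbers are illustrative.

def posterior_mean(prior_a: float, prior_b: float, pro: int, con: int) -> float:
    """Posterior mean of a belief under a Beta(a, b) prior after pro/con evidence."""
    return (prior_a + pro) / (prior_a + prior_b + pro + con)

evidence = (3, 7)  # 3 observations supporting the belief, 7 against it

open_mind = posterior_mean(1, 1, *evidence)     # flat prior   -> ~0.33
entrenched = posterior_mean(80, 20, *evidence)  # strong prior -> ~0.75

print(f"open mind: {open_mind:.2f}, entrenched: {entrenched:.2f}")
```

Ten mostly contrary data points hardly dent the strong prior, and self-selecting sources is like only ever drawing the `pro` observations.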
To get around this I've downloaded the Ground News and AllSides apps, which pick particular topics and show news stories on them side by side from left-, right-, and center-leaning sources. It's actually entertaining, and helpful, to see how the same set of facts can get spun in such contradictory ways.
Thanks for a fascinating overview!
This reminds me a lot of Tobias Leenaert's book "How to Create a Vegan World: A Pragmatic Approach". The facts are overwhelmingly in favor of a plant-based diet, but the facts are not enough. Besides the factors you mentioned, I think an important one is that behavior change is hard for many reasons, and belief change without behavior change causes cognitive dissonance, which people would like to avoid. Leenaert's point is that we should make people feel good about eating greener rather than shame them into it.
The obvious question, though, is what all this has to do with NLP. Do you have some ideas? One thing I'm working on is mining arguments about topics of interest that are not necessarily factual: https://di.ku.dk/english/news/2023/nuggets-mined-from-thousands-of-tweets-can-persuade-us-to-eat-more-climate-friendly/
It might help orient our thinking about what useful outputs from, e.g., large language models look like when it comes to presenting information. I suppose this is more of an HCI problem, but, for example, could we induce a prior in an LLM through clever prompting that leads to positive framing of factual information, or that makes people more likely to update false or negative beliefs?
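Purely as a hypothetical sketch of what I mean (the framing text, model name, and exact API call are my assumptions, not an established technique), one could try to induce such a prior through the system prompt:

```python
# Hypothetical sketch: "inducing a prior" via a system prompt so that factual
# information gets a positive, non-shaming framing. The prompt wording and
# model choice are assumptions for illustration only.
from openai import OpenAI

PRIOR_FRAME = (
    "When presenting facts that may conflict with the reader's existing "
    "beliefs, lead with shared values, frame the information positively, "
    "and avoid shaming language."
)

def present_fact(fact: str) -> str:
    client = OpenAI()  # any chat-style LLM API would do; this one is illustrative
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": PRIOR_FRAME},
            {"role": "user", "content": f"Explain this finding to a skeptical reader: {fact}"},
        ],
    )
    return response.choices[0].message.content
```

Whether that framing actually gets readers to update their beliefs would, of course, be a question for an HCI-style user study.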