How we can better detect misinformation
Originally posted on The Horizons Tracker.
Getting better at detecting misinformation has clear societal benefits. A recent study¹ from the University of Cambridge found that people are more likely to distinguish between misinformation and the truth if they are offered a small cash reward for accuracy or if they are briefly encouraged to act with personal integrity.
The study’s results imply that social media platforms are conducive to the proliferation of fake news not only because users are deceived into believing it, but also due to a disparity in motivation: users are more driven to garner clicks and likes than to disseminate factual information.
Perverse incentives
The research highlights what the researchers believe are the perverse incentives that often drive shares online. The authors explain that the allure of pandering to one’s own social and political “in-group” by attacking the other side is a significant yet overlooked factor contributing to the spread of misinformation or disbelief in accurate news.
The study comprised four experiments involving over 3,300 participants in the United States, equally split between Democrats and Republicans. The researchers offered half of the participants a reward of up to one US dollar for correctly identifying true or false headlines, and compared their results with those of participants offered no incentive.
The findings showed that a small financial incentive increased participants’ ability to differentiate between true and false news by 31%. Participants improved most when asked to identify accurate news that benefited the opposing political party. In fact, the monetary reward reduced partisan division over news accuracy by around 30%, with the largest shift occurring on the Republican side.
Doing the right thing
For example, when offered a dollar, Republicans were 49% more likely to recognize the Associated Press headline ‘Facebook removes Trump ads with symbols once used by Nazis’ as accurate. Meanwhile, Democrats were 20% more likely to acknowledge the Reuters headline ‘Plant a trillion trees: U.S. Republicans offer fossil-fuel friendly climate fix’ as accurate.
However, when the researchers inverted the incentive to “mirror the social media environment” and paid participants to identify headlines likely to receive the best reception from members of their political party, the ability to recognize misinformation decreased by 16%.
“This is not just about ignorance of facts among the public. It is about a social media business model that rewards the spread of divisive content regardless of accuracy,” the researchers explain. “By motivating people to be accurate instead of appealing to those in the same political group, we found greater levels of agreement between Republicans and Democrats about what is actually true.”
The right incentive
Offering incentives to identify accurate news improved news judgment across all political affiliations, but the impact was much more pronounced among Republican voters. The research team cited prior studies showing that Republicans tend to believe and spread more misinformation than Democrats.
However, the current study revealed that financial incentives significantly narrowed the accuracy gap between Republicans and Democrats, bringing Republicans closer to the same level of accuracy as Democrats and thus reducing the political divide.
“Recent lawsuits have revealed that Fox News hosts shared false claims about ‘stolen’ elections to retain viewers, despite privately disavowing these conspiracy theories. Republican media ecosystems have proved more willing to harness misinformation for profit in recent years,” the authors conclude.
Article source: How We Can Better Detect Misinformation.
Header image source: Dilok Klaisataporn, Free Stock photos by Vecteezy.
Reference:
- Rathje, S., Roozenbeek, J., Van Bavel, J. J., & van der Linden, S. (2023). Accuracy and social motivations shape judgements of (mis)information. Nature Human Behaviour, 7, 892–903. ↩