A flood of fake news circulated through social media and on YouTube during the 2016 United States presidential election campaign. The Facebook advertisement above is one of many examples. In mid-2016, allegations started to emerge linking Russia to the fake news blitz.
In response to these allegations, a number of investigations have been launched, including by the United States House of Representatives Permanent Select Committee on Intelligence and by United States Special Counsel Robert Mueller, a former Director of the Federal Bureau of Investigation (FBI).
On 16 February 2018, there was a significant development in the Mueller investigation, with the United States Department of Justice announcing that a grand jury had returned an indictment charging thirteen Russian nationals and three Russian companies “for committing federal crimes while seeking to interfere in the United States political system, including the 2016 Presidential election.”
The indictment states that the Internet Research Agency, “a Russian organization engaged in political and electoral interference operations,” had “sought, in part, to conduct what it called ‘information warfare against the United States of America’ through fictitious U.S. personas on social media platforms and other Internet-based media.”
The word “warfare” brings to mind military battlefield conflict, so the idea that using fake news to interfere in the political system of another country constitutes warfare may at first seem incongruous. However, this type of psychological manipulation has long been recognized as an aspect of information warfare.
For example, Martin C. Libicki writes in a 1995 United States National Defense University paper1 that “Arguable forms of warfare include psychological operations against the national will and culture.”
Libicki’s paper predates social media and YouTube, so he could never have anticipated that these platforms would become weapons of information warfare. But their role is now abundantly clear. As RealKM’s Stephen Bounds states in a recent survey that sought expert opinion on the future of truth and misinformation online, “This is the new reality of warfare today. Wars are going to be literally fought over ‘information supply lines,’ much as food supply lines were critical in wars of years gone by.”
So, fake news is not just an issue for individuals and communities, it’s also a national security issue. And more fake news salvos can be expected in Russia’s information war. Indeed, Chris Cillizza, CNN Editor-at-large, contends that “Russia views its attempted meddling in the 2016 election as a massive success and will almost certainly try to meddle in the 2018 midterm elections and the 2020 presidential election.” It would also be naive to think that other countries won’t launch similar campaigns, if they haven’t already.
Given the seriousness of the fake news issue, is there more that the knowledge management discipline can do?
Is fake news a knowledge management issue?
The first thing we need to do is consider whether fake news is really a knowledge management (KM) issue.
When I previously posted a link to an article about fake news in an online KM group, one commenter argued that fake news wasn’t a KM issue. He said that the focus of the KM discipline is on how knowledge is managed, and that we shouldn’t concern ourselves with the truth or fiction of the knowledge being managed.
However, other commenters on the article said that fake knowledge is an emerging issue that the KM discipline needs to consider. A recent paper2 looking at knowledge management and leadership supports this view, stating that “in times of ‘fake-news’ and ‘alternative facts’ the validation of knowledge is getting more important. Managing knowledge on a process level is not enough.”
If we agree that fake news is a knowledge management issue, then what tangible things can we do to help address the issue? I put forward two options for consideration:
- Helping people to tell better stories online
- Helping people to overcome the truth delusion.
Helping people to tell better stories online
A range of activities have been initiated to try to deal with the fake news issue. These include:
- Establishing fact-checking services. A 2017 study3 found that the services that have been established in western countries, for example FactCheck.org and Snopes, have done little to improve the quality of reporting in online media. However, another study4 has found the Ukrainian service StopFake, which operates in a different way to the western services, to be effective.
- Developing systems to detect fake news. One of the first models was Hoaxy5, created in 2016 as a platform for the collection, detection, and analysis of online misinformation and the fact-checking efforts related to this. Since then, researchers have sought to create more refined models, for example the capture, score, and integrate (CSI) hybrid deep model for fake news detection proposed in a recent paper6.
- Teaching people how to recognize and avoid sharing fake news. Fake news would cease to be a problem if people stopped falling for it and circulating it, so there’s much that media consumers can do. I discuss a range of media consumer actions in the article What YOU can do about fake news.
- Inoculating people against fake news by exposing them to misleading argumentation techniques. A 2017 paper7 looking at climate change misinformation found that “inoculating messages that (1) explain the flawed argumentation technique used in the misinformation or that (2) highlight the scientific consensus on climate change were effective in neutralizing those adverse effects of misinformation.” A University of Cambridge experiment is now applying the learnings from the climate change study more widely through a new online game: “While the previous study focused on disinformation about climate science, the new online game is an experiment in providing ‘general immunity’ against the wide range of fake news that has infected public debate.”
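To make the second item above a little more concrete, here is a minimal, purely illustrative sketch of the kind of text-based classification that underpins many detection systems. It is not the Hoaxy platform or the CSI model (which combine neural networks with user-response and source-behaviour signals); it is just a toy bag-of-words Naive Bayes classifier, and the training headlines are invented examples.

```python
from collections import Counter
import math

# Toy training data: (text, label) pairs. Real detection systems are
# trained on large labelled corpora; these headlines are invented.
TRAIN = [
    ("shocking secret cure doctors hate", "fake"),
    ("you won't believe this miracle trick", "fake"),
    ("celebrity endorses shocking miracle cure", "fake"),
    ("committee releases report on election interference", "real"),
    ("researchers publish study on misinformation", "real"),
    ("court returns indictment against thirteen nationals", "real"),
]

def train(examples):
    """Count word frequencies per label for Naive Bayes scoring."""
    counts = {"fake": Counter(), "real": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def score(text, counts, totals):
    """Return the more likely label, using add-one smoothing."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_logp = None, float("-inf")
    for label in counts:
        # Log prior plus log likelihood of each word under this label.
        logp = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            logp += math.log((counts[label][w] + 1) / denom)
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

counts, totals = train(TRAIN)
print(score("shocking miracle cure", counts, totals))                # prints "fake"
print(score("study on election interference report", counts, totals))  # prints "real"
```

Production systems go far beyond word counts, but the basic shape is the same: learn statistical signatures of misleading content from labelled examples, then score new items against them.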
A way in which the KM discipline can complement these activities is through the further development of the KM practice of storytelling. This idea follows on from, and relates to, the inoculation approach.
Storytelling in KM sees truth as being essential in stories. Steve Denning, a pioneer of KM storytelling, states that when telling a story:
It’s very important that it be a true story. It’s not a fictional story … if the story is true, one can say, “This already has happened. It happened right here. Here’s the guy it happened to. Go and check it out. It actually happened.” Then it’s the truth of the story that shakes the listener out of their complacency. They have to grapple with the fact that the story actually happened … And it must be authentically true. It’s not just a story that’s factually accurate as far as it goes.
In a 2015 article, KM and business storytelling consultant Shawn Callahan alerts us to the great power of these truthful stories in overcoming misinformation:
To simply negate an untrue story only serves to reinforce the misinformation.
A much more effective way of countering a misleading story is with another story. In fact, this is one of the few approaches that really work.
He offers the same advice in his new book Putting Stories To Work, where he goes on to provide guidance to business leaders on how to tell effective stories.
What if this guidance was adapted and used to train people in combating fake news online through telling better stories on social media and YouTube? I would see this training being delivered in conjunction with inoculation, so that people learn how to readily identify fake news stories, after which they can then proceed to tell and circulate better truthful ones.
Helping people to overcome the truth delusion
It’s only fair to warn you that you may find aspects of what I say in regard to this second option to be confronting.
As I discussed in a recent article, the United States House of Representatives Permanent Select Committee on Intelligence investigations have included an examination of the role that social media companies played in disseminating the fake content produced and paid for by Russian actors, including the Internet Research Agency. As part of this, an open hearing was held with representatives of Facebook, Twitter, and Google.
In a Fast Company article, Austin Carr reports that, as has been the case with Facebook and Twitter, “Alphabet, the tech giant that owns Google, has been under intense scrutiny to acknowledge its role in trafficking the Russia-backed disinformation campaign.” Carr quotes Alphabet executive chairman Eric Schmidt as saying:
One of the things I did not understand was that these systems can be used to manipulate public opinion in ways that are quite inconsistent with what we think of as democracy.
Sorry, but the only way that Schmidt and his staff at Google could not have this understanding is if they are suffering from what I will term the “truth delusion”, which seems to be an almost universal phenomenon in society at present. The “truth delusion” is the false belief that people generally sought to communicate the truth through the media until we suddenly entered a post-truth era following the United Kingdom Brexit referendum and the United States presidential election.
Saying that we’re now in a post-truth era is to propose that just two years ago, public opinion and political debate were based on objective facts and evidence, but then all of a sudden we were plunged into a great darkness of false information. Could this really have happened? No, to be blunt, the idea that we’ve suddenly entered a post-truth era is completely absurd, and the evidence base bears this out.
In a previous article I provide evidence of fake news being used in US presidential election campaigns as far back as 1800, and in another article I provide evidence of my own direct experiences with widespread fake news campaigns more than 20 years ago. Fake news is nothing new – all that has changed is the medium used to circulate it. It’s possible that the arrival of social media and YouTube has led to an increase in fake news, but this can only be speculation in the absence of objective research comparing the volume and nature of fake news before and after.
Why did Schmidt and his staff lack this understanding, and why do we embrace false notions like that of the post-truth era? Our cognitive biases work to assist our survival by simplifying our complex reality into something we can deal with, but unfortunately our reality is distorted in the process. The reality is that there is, and has long been, a proportion of humans who will manipulate knowledge for dishonest purposes, and also a proportion who will readily believe and spread manipulated knowledge. But this reality is difficult for us to cope with emotionally, so rosy retrospection causes us to have a “truth delusion” in which we incorrectly judge the past more positively than the present, and we reinforce this by creating the false notion of a post-truth era.
However, accepting reality is the first step in dealing with it. For Facebook, Twitter, Google, and other providers of social media and new media, I argue that accepting reality means accepting that people can’t always be trusted to do the right thing, and then reflecting this ever-present potential for dishonesty in platform and system management.
Because people in the KM discipline work with knowledge and the technologies involved in its management, I think we’re well-placed to be able to help organisations confront and overcome the truth delusion, and then put in place stronger and more effective controls for the management of platforms and systems. We can also help to inform and educate people about their cognitive biases and how to overcome them.
What do you think of the two options I have put forward? Please feel free to leave a comment below.
- Libicki, M. (1995). What Is Information Warfare? ACIS Paper 3, Washington, DC: National Defense University. ↩
- Winkler, K., & Wagner, B. (2018). The relevance of knowledge management in the context of leadership. Journal of Applied Leadership and Management, 5(1). ↩
- Vargo, C. J., Guo, L., & Amazeen, M. A. (2017). The agenda-setting power of fake news: A big data analysis of the online media landscape from 2014 to 2016. New Media & Society, 1461444817712086. ↩
- Haigh, M., Haigh, T., & Kozak, N. I. (2017). Stopping Fake News: The work practices of peer-to-peer counter propaganda. Journalism Studies, 1-26. ↩
- Shao, C., Ciampaglia, G. L., Flammini, A., & Menczer, F. (2016). Hoaxy: A Platform for Tracking Online Misinformation. In Proceedings of the 25th International Conference Companion on World Wide Web (pp. 745-750). International World Wide Web Conferences Steering Committee. ↩
- Ruchansky, N., Seo, S., & Liu, Y. (2017). CSI: A Hybrid Deep Model for Fake News Detection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 797-806). ACM. ↩
- Cook, J., Lewandowsky, S., & Ecker, U. K. (2017). Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PloS one, 12(5), e0175799. ↩