
Engaging with bad knowledge practices

One of the most pressing questions of the pandemic era has been: what should be done about bad knowledge practices? And specifically, what actions are justifiable to prevent individual or group harms arising from bad knowledge?

This is not as easy to answer as it sounds, for knowledge is fluid, evolving and, most importantly, contextually separate from objective “truth.” For example, researchers have examined the psychological benefits of tarot[1] and the individual and social benefits of religion[2], despite the fact that the purported truths presented by these activities are either irrelevant to their purpose or unfalsifiable by design.

Therefore, as knowledge managers we must be careful to distinguish between truth-seeking activities using methods such as critical rationalism[3] and evaluating the impact of knowledge practices on individuals and groups. We must be clear-minded about the distinction before engaging in systems interventions.

There are three basic scenarios of bad knowledge practices to consider:

  • Individual harms – The question of whether to intervene with a person who is acknowledged to be acting contrary to their self-interest but is otherwise harming no-one is primarily an ethical one. There has been a trend away from laws forbidding personal harms (eg anti-suicide laws), with a recognition that positive education and pastoral care are a more productive approach to changing these kinds of bad knowledge.
  • Group harms – Individuals may engage in behaviours that either exploit bad knowledge in others or are driven by bad knowledge of their own, but have a limited impact due to the degree of individual volition in participation. Common examples are fraud, illicit drug distribution, and advocacy of pseudoscientific practices. Consensus and remediation of bad knowledge here can take a range of forms, including group leadership structures (eg families, schools) and informal social and cultural norms. Formal legislative or regulatory enforcement may be required to limit extreme examples of harmful behaviour while still allowing significant autonomy and self-determination within groups.
  • Whole of society harms – Some knowledge can lead to actions that have a multiplicative or compounding effect on others (for example, a broad failure to adopt protective behaviours against infectious diseases, or to adopt prosocial behaviours more generally). Correcting this presents a unique challenge because any broad attempt to limit the availability of knowledge to a pre-determined “good” form requires censorship, or propaganda, or both (according to a value-neutral definition of these terms). There is also an inherent conflict of interest between the entity best placed to implement either measure – the leaders of a country (or organisation) – and their self-interest in continuing to lead.

A fundamental limitation of all attempts to improve knowledge is that those seeking to correct knowledge may themselves be incorrect. This problem can be mitigated through a range of techniques including:

  • Consensus – identifying multiple sources who agree on the best knowledge to disseminate
  • Accountability – letting others evaluate your techniques in identifying better knowledge to distribute
  • Steelmanning – seeking out the strongest alternatives and arguments against proposed better knowledge
  • Transparency – telling people when, how, and why you are attempting to correct their knowledge.

Once a harm has been established, and there is an intent to intervene and create better knowledge among a population, we must then consider how best to achieve the desired systemic change. A range of interventions have been identified above, each with their own benefits and drawbacks:

  • Education – Time consuming and costly, especially for complex topics, but highly effective for forming sticky initial knowledge when a person has no preconceptions. While education can develop critical thinking skills, it can also be a form of “soft” propaganda when facts are simply presented to students as “presumed true”.
  • Individual support – Empathetic and caring interventions can lead to the most sustained changes in knowledge, especially for topics of low interest, but this is nearly impossible to scale beyond immediate peer groups.
  • Leadership and norms – The establishment of authority structures and cultural norms is critical for stability in all societies. Highly sticky and enduring, structures and norms can be responsible for lasting success, or can hold back communities for decades until a mass movement for change is created by people willing to operate within a system’s power dynamics to alter them.
  • Legal enforcement – The use of laws and regulations acts as a kind of brute force form of negative knowledge[4], inasmuch as people will learn to moderate any knowledge that results in an enforcement outcome.
  • Propaganda – Widely presenting certain knowledge as unambiguously true is a very common form of government manipulation, designed to inculcate certain beliefs and attitudes in the general public. Despite the negative overtones of the term, propaganda can be a positive force when used to strongly communicate useful knowledge, such as the benefits to a parent of vaccinating their child.
  • Censorship – The forceful suppression of undesirable views and knowledge attempts to achieve similar ends to propaganda, generally through very narrow targeting of topics except in the most authoritarian scenarios. While in theory censorship represents a more targeted correction than the broad knowledge uptake sought by propaganda, the punitive aspects of censorship can lead to both chilling effects on other speech topics and a backfire effect[5] among those who were meant to be protected from the “forbidden” knowledge.

In all cases, it seems clear that knowledge interventions should be limited to areas where a definable and immediate harm can be identified. Having a diverse and dynamic range of knowledge in teams and organisations is important for ongoing success, and so any attempts we make to engage with and control bad knowledge need to be undertaken with care and clear intent.

Header image source: Nicole Dralle on Pixabay, Public Domain.


  1. Hofer, G. M. (2009). Tarot cards: an investigation of their benefit as a tool for self reflection (Master’s dissertation, Concordia University).
  2. Mochon, D., Norton, M. I., & Ariely, D. (2011). Who benefits from religion? Social Indicators Research, 101(1), 1-15.
  3. Wikipedia contributors. Critical rationalism. Wikipedia, CC BY-SA 3.0.
  4. Gartmeier, M., Bauer, J., Gruber, H., & Heid, H. (2008). Negative knowledge: Understanding professional learning and expertise. Vocations and Learning, 1(2), 87-103.
  5. Jansen, S. C., & Martin, B. (2004). Exposing and opposing censorship: backfire dynamics in freedom-of-speech struggles. Pacific Journalism Review, 10(1), 29-45.

Stephen Bounds

Stephen Bounds is an Information and Knowledge Management Specialist with a wide range of experience across the government and private sectors. As founding editor of RealKM and Executive, Information Management at Cordelta, Stephen provides clear strategic thinking along with a hands-on approach to help organisations successfully develop and implement modern information systems.

  1. Many thanks Stephen for this very valuable perspective, in particular the discussion of techniques and interventions. However, there are two aspects of the overall hypothesis that I’d like to question.

    Firstly, I don’t agree that knowledge can be seen as contextually separate from objective “truth.” There will almost certainly be a diversity of knowledge across those with an interest in, or who are affected by, a particular context. However, these interested or affected parties will typically engage and interact with each other and external information and knowledge sources in a quest for what they collectively see as better overall knowledge, or in other words, a more objective “truth.”

    The KM community itself does this when, in KM forums, it discusses and debates the diversity of knowledge in regard to particular aspects of KM. While the diversity of knowledge will be acknowledged, there will always be a drive towards what the participants in the discussion see as a more objective truth. In that drive, discussion participants will draw on what they see as the most appropriate sources of knowledge – often academic literature and professional expertise. These are two of the four sources of knowledge in evidence-based practice, which reinforces evidence-based KM as an important approach for reducing bad knowledge and facilitating collective movement towards more objective truth.

    Secondly, the discussed intervention approach is based on the assumption that there is a high-level “we” with the overall ethical or moral authority to be able to decide on and then implement interventions leading towards a reduction in bad knowledge.

    However, this approach breaks down when the high-level “we” is not only itself engaged in bad knowledge practices, but actually using the listed interventions to enact and entrench those bad knowledge practices, including to deceive the populace in regard to its ethical and moral authority.

    This situation can be clearly seen in regard to environmental knowledge in Australia. Australia is one of the world’s worst performers in regard to two of the most important global environmental priorities – biodiversity conservation and climate action. To stymie or undermine vital action on these issues, Australian governments actively use censorship as a bad knowledge practice. An example of this censorship can be seen in regard to the current plans to raise the wall of Sydney’s Warragamba Dam. At the same time, Australian governments also use propaganda (in its negative sense) to manipulate much of the Australian public into believing that Australia is a much better environmental performer than it actually is, effectively creating mass delusion (the type of delusion described by David Gurteen in his article on societal KM). An example of this propaganda can be seen in the Australian Government’s threatened species strategy and prospectus, which has a “cute and cuddly” public relations focus to the neglect of a large number of other threatened species.

    Addressing bad knowledge practices when they have been used to establish and entrench mass delusion is very difficult because anyone seeking to challenge the mass delusion can expect a negative or hostile response. However, there are ways forward. Critical discourse analysis is a powerful new technique for uncovering policy biases and the motives behind them, and I’m looking forward to shortly using this technique in my own research. Beyond this, there’s a need for further research into dark side KM tactics. Researchers are already starting to consider KM from a knowledge risks perspective, for example as shown by the emerging body of research in regard to knowledge withholding, hiding, and hoarding.

  2. Thanks Bruce, insightful comments as always. The point I am trying to make is that the utility of knowledge needs to be evaluated separately from its truth, and that truth isn’t a prerequisite for knowledge. I understand that some people have a pragmatic philosophical view that knowledge can only be true if it is also useful, but I feel that this is too absolute a position and doesn’t align with our lay understanding of “truth”.

    To take a contrived example, let’s say that a society had determined that the Fairy Gods needed to be honored through careful handwashing prior to cleaning wounds. The outcome (more sanitary conditions) is systemically positive despite the objective falsity of the knowledge claim underpinning it (existence of Fairy Gods). Therefore, it would be possible to share true statements (“there are no Fairy Gods”) with that society which would lead to worse system outcomes if people stopped handwashing.

    I am not attempting a post-modernist position that the truth doesn’t matter. Of course it does. And aside from any ethical concerns, there are questions about long-term efficacy of any strategy to hide or discourage truth-seeking once trust is taken into consideration.

    But there are systems tradeoffs which are often not sufficiently valued by those who are dedicated to truth-seeking in knowledge (for example, in academia). This gets particularly messy when we acknowledge that truth needs to be thought of in terms of gradients rather than black-and-white propositions.

    When is it sufficient to communicate a simplistic, technically-untrue but mostly-accurate heuristic to the population, and when must our best known facts be communicated in full? When is propaganda genuinely supporting useful outcomes for everyone, and when is it just entrenching power in the hands of a few? Is it better for a society to have a trusted top-level institution that tells others a singular best practice, or the more widely divergent knowledge likely to emerge from more extensive grassroots education?

    The Complexity Triangle can help us navigate these kinds of choices, but overriding all of this is the need for trust that the various actors within a system are, within certain limits, working towards common goals. Without this, the very basis for our acquisition and retention of indirect knowledge (ie anything we don’t personally observe) is compromised.
