
AI is changing the Dunning-Kruger Effect, with higher AI literacy correlating with overestimation of competence

This article is part of an ongoing series looking at AI in KM, and KM in AI.

The Dunning-Kruger Effect (DKE)1 is a cognitive bias where individuals with lower ability tend to overestimate their competence while those with higher ability underestimate it.

A newly published research paper2 in the journal Computers in Human Behavior has explored the Dunning-Kruger Effect in the context of artificial intelligence (AI), recognising that it is crucial to understand how AI influences individuals’ ability to accurately assess their own competence and make informed decisions. This is particularly the case in situations where overconfidence or underestimation determines the success and efficacy of AI applications in real-world settings.

This means that the research paper’s findings are an important consideration in the ethical and responsible use of AI in knowledge management (KM), as explored in a number of articles in RealKM Magazine‘s artificial intelligence series.

From a psychological perspective, people commonly rely on AI to boost their thinking, raising fundamental questions about how people perceive their augmented performance when collaborating with AI, and whether they remain aware of potential errors. Biases such as overtrust and overreliance can impair performance to the point where the human–AI interaction produces worse outcomes than having no AI at all.

The research involved two studies. In Study 1, AI was used by 240 participants to solve 20 logical reasoning problems from the Law School Admission Test (LSAT). Study 2 replicated and expanded Study 1, involving a further 452 participants.

Findings

The classic Dunning-Kruger Effect, where lower performers overestimate and higher performers underestimate their performance, was found to disappear with AI use. While AI users in the research sample had improved logical reasoning performance as compared to no AI, they consistently overestimated their performance. This suggests that AI improves performance but leads to highly biased self-assessments.

An unexpected link between AI literacy and self-assessment accuracy was found across both studies. Participants with higher AI literacy were less accurate in self-assessments, contradicting the assumption that higher AI literacy improves self-monitoring and calibration.

The authors state that while a skeptic might attribute the observed distortion to the sample in Study 1, the randomized replication and other research design features in Study 2 still show robust findings of overestimation.

What to do about this problem?

The study authors caution that their research findings reinforce the need for strategies that foster cognitive resilience and critical engagement with AI. In this regard, they propose the design principles listed in Table 1 below, which can be incorporated into the frameworks for ethical and responsible AI described in articles in RealKM Magazine‘s artificial intelligence series.

Table 1. Issues, consequences, and design principles to address impaired self-assessment in human–AI interaction.
Source: Adapted from Fernandes et al., 2026.

Issue: Overreliance on AI outputs
Consequences: Users trust AI outputs without critical assessment, leading to reduced self-reflection and failure to notice AI errors.
Design principles:
  • Confidence calibration to align user confidence with AI output uncertainty
  • AI uncertainty visualization to make AI output reliability transparent
  • Explanatory AI interfaces to clarify AI decision-making processes and enable users to assess validity

Issue: Loss of self-monitoring
Consequences: Users are unable to accurately assess their own performance or monitor task progress, especially in complex decision-making tasks.
Design principles:
  • Post-task reflection to encourage users to evaluate their performance after interacting with AI
  • Cognitive forcing strategies, such as prompts, to promote critical thinking and reduce automatic reliance on AI outputs

Issue: Illusion of understanding
Consequences: AI-literate users tend to over-rely on and over-trust AI outputs.
Design principles:
  • An “explain-back” micro-task before submission to help calibrate illusions of knowledge

Article source: Fernandes et al., 2026, CC BY 4.0.

Header image source: Created by Bruce Boyes with Microsoft Designer Image Creator.

References:

  1. Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121.
  2. Fernandes, D., Villa, S., Nicholls, S., Haavisto, O., Buschek, D., Schmidt, A., … & Welsch, R. (2026). AI Makes You Smarter, But None The Wiser: The Disconnect Between Performance and Metacognition. Computers in Human Behavior, 175, 108779.

Bruce Boyes

Bruce Boyes is a knowledge management (KM), environmental management, and education thought leader with more than 40 years of experience. As editor and lead writer of the award-winning RealKM Magazine, he has personally written more than 500 articles and published more than 2,000 articles overall, resulting in more than 2 million reader views. With a demonstrated ability to identify and implement innovative solutions to social and ecological complexity, Bruce has successfully completed more than 40 programs, projects, and initiatives including leading complex major programs. His many other career highlights include: leading the KM community KM and Sustainable Development Goals (SDGs) initiative, using agile approaches to oversee the on time and under budget implementation of an award-winning $77.4 million recovery program for one of Australia's most iconic river systems, leading a knowledge strategy process for Australia’s 56 natural resource management (NRM) regional organisations, pioneering collaborative learning and governance approaches to empower communities to sustainably manage landscapes and catchments in the face of complexity, being one of the first to join a new landmark aviation complexity initiative, initiating and teaching two new knowledge management subjects at Shanxi University in China, and writing numerous notable environmental strategies, reports, and other works. Bruce is currently a PhD candidate in the Knowledge, Technology and Innovation Group at Wageningen University and Research, and holds a Master of Environmental Management with Distinction and a Certificate of Technology (Electronics).


2 Comments

    1. Thank you Gilbert for your comment. The research involved an AI task so the findings are applicable only to AI. Investigating if the findings also apply to other knowledge fields would need to involve research in and relevant to those fields.

      However, there’s observational evidence to suggest that the Dunning-Kruger Effect still persists strongly and unchanged in other knowledge fields. For example, individuals with lower ability tending to overestimate their competence while those with higher ability underestimate it can be clearly seen in regard to climate science. There are non-experts with little or no knowledge of climate science who are outspoken climate change denialists. At the same time, there are hundreds of climate scientists who contribute to the IPCC reports, which are published with confidence levels reflecting the tendency for those with higher ability to underestimate their competence.

      There are other knowledge fields, though, where a similarly changed Dunning-Kruger Effect could be evident. These are fields which, like AI now, are or have been at the peak of inflated expectations of the Gartner hype cycle. One example that comes to mind is Bayesian networks, which, like AI, have appropriate uses, but which have also seen inflated expectations driven by people with higher ability who have overestimated their competence.
