Case studies in the unethical and irresponsible use of AI

This article is part of an ongoing series looking at AI in KM, and KM in AI.

Recent articles in RealKM Magazine’s long-running artificial intelligence (AI) series have identified that ethics is a significant challenge1 in implementing AI in knowledge management (KM), and have provided information to assist knowledge managers in addressing these ethical considerations. This includes AI ethics strategies and recommendations2, dimensions and building blocks for responsible AI3, and AI ethics and governance resources4.

Major KM community organizations should be showing leadership in this regard through the development and implementation of model frameworks for ethical and responsible AI in KM. As well as providing vital guidance for these organizations and their activities, such model frameworks can motivate and educate organization members and networks. However, KM organizations are currently demonstrating the opposite, as highlighted by a troubling case5. And if anyone in the KM community is thinking that this isn’t that important or it’s not their problem, then a disturbing case of ignoring harm6 is an essential read.

If the use of AI in KM continues in the absence of such leadership, then it’s really only a matter of time before knowledge managers end up facing legal action in the courts or public criticism in the media, or both. The two case studies below are examples of what will happen. These are far from isolated examples – the AI Incident Database7 has logged reports of hundreds of such cases.

Case 1. Failing to detect “botshit” in AI-generated outputs

Large language models (LLMs) like ChatGPT do not actually understand8 the texts they are working with. Rather, LLMs are built by identifying mathematical relationships between words, with the aim of predicting what text is likely to go with what other text. This lack of understanding means that LLMs make mistakes, often described as “hallucinations.” Well-known examples of these hallucinations include changing the meaning of text9, inventing references that don’t actually exist, and attributing quotes to people who never said those things.
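
To illustrate the principle, the toy sketch below (in Python, and purely illustrative – production LLMs are built with learned vector representations and vastly larger models) predicts the next word purely from word-pair counts in a tiny corpus:

    from collections import Counter, defaultdict

    # Toy illustration only: predict the next word from word-pair counts
    # in a tiny corpus. No meaning is represented anywhere - just
    # statistics about which word tends to follow which.
    corpus = (
        "knowledge management improves decision making . "
        "knowledge management improves learning . "
        "knowledge sharing improves learning ."
    ).split()

    # Count how often each word follows each other word (a bigram model).
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the statistically most likely next word, if any."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("knowledge"))  # 'management' - the most frequent follower
    print(predict_next("improves"))   # 'learning' - frequency, not truth

Because models of this kind – however sophisticated – optimise for what is statistically plausible rather than what is true, a fluent but false output is an expected failure mode rather than a rare glitch.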

This is why the outputs of LLMs need to be rigorously checked for accuracy10, as part of the safety and accountability measures in frameworks for ethical and responsible AI (as linked in this article’s first paragraph above). The potentially significant resources needed for this checking also need to be factored into business case considerations about whether or not AI offers an organization real value11.
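
What that checking looks like in practice will vary between organizations, but as a purely illustrative first pass (the function and patterns below are hypothetical, and a supplement to human review rather than a substitute for it), the elements of AI-generated text that are most often hallucinated – quotes, citations, and links – can be extracted into a checklist for a human reviewer to verify against primary sources:

    import re

    # Hypothetical first-pass triage of AI-generated text: pull out the
    # elements most often hallucinated (quotes, citations, URLs) so a
    # human reviewer can verify each one against its primary source.
    def build_verification_checklist(text):
        checklist = []
        # Direct quotes: often attributed to people who never said them.
        checklist += [f"Verify quote: {q}"
                      for q in re.findall(r'"([^"]+)"', text)]
        # Author-year citations, e.g. (Smith, 2023): may not exist at all.
        checklist += [f"Verify citation: {c}"
                      for c in re.findall(r"\(([A-Z][A-Za-z]+,? \d{4})\)", text)]
        # URLs: may be fabricated or point to unrelated pages.
        checklist += [f"Verify link: {u}"
                      for u in re.findall(r"https?://[^\s)]+", text)]
        return checklist

    draft = ('The review (Smith, 2023) found that "AI always improves KM" '
             "(see https://example.org/report).")
    for item in build_verification_checklist(draft):
        print(item)

Automated extraction like this only tells reviewers where to look; confirming each quote, reference, and link against its source still requires human effort, which is exactly the resource cost noted above.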

An example of an organization failing to do these checks is presented in the following article12 in a major Australian newspaper. The organization, consulting firm Deloitte, had also failed to disclose its use of AI in preparing the original report. As the article highlights, these failures have put the organization’s reputation at significant risk, and call into question the credibility of the report’s findings.

A potential contributor to this situation could be the insidious problem of “shadow AI”13, where employees are using their personal AI accounts in the workplace without the knowledge or permission of their employers.

Deloitte to refund government, admits using AI in $440k report.

Case 2. Breaching the privacy of vulnerable people

Another insidious problem that is occurring in the absence of organizational frameworks for ethical and responsible AI is employees entering confidential or private information into public AI platforms14. An example of this is presented in the following article15 from the news service of Australia’s national broadcaster, the ABC.

Not only is this action a breach of Australia’s privacy laws16, but it also risks making public the sensitive information of flood victims who have already suffered the devastating loss of their homes and possessions. It is truly appalling that they are now also the victims of the unethical and irresponsible use of AI.

NSW flood victims' personal details loaded to ChatGPT in major data breach
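
Preventive safeguards for this kind of breach do exist. One common technical control is to screen text for personal identifiers before it can leave the organization for an external AI platform. The sketch below is illustrative only, with simplified patterns; real deployments should use dedicated PII-detection tooling and policy controls, because simple patterns will miss names, addresses, and contextual identifiers:

    import re

    # Illustrative safeguard only: redact obvious personal identifiers
    # before text is sent to an external AI platform. Patterns like
    # these miss names, addresses, and contextual identifiers, so real
    # controls need dedicated PII-detection tooling as well.
    REDACTIONS = {
        r"[\w.+-]+@[\w-]+\.[\w.]+": "[EMAIL]",               # email addresses
        r"\b(?:\+?61|0)[ -]?\d(?:[ -]?\d){8}\b": "[PHONE]",  # AU-style phone numbers
        r"\b\d{1,4}[/-]\d{1,2}[/-]\d{1,4}\b": "[DATE]",      # date-like strings
    }

    def redact(text):
        for pattern, placeholder in REDACTIONS.items():
            text = re.sub(pattern, placeholder, text)
        return text

    record = "Contact Jane Citizen, jane@example.com, 0412 345 678, DOB 02/03/1985."
    print(redact(record))
    # Contact Jane Citizen, [EMAIL], [PHONE], DOB [DATE].

Note that the person’s name still passes through unredacted in this example, which is exactly why pattern matching alone is not a sufficient control.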

Is your organization using AI without having first developed and put in place frameworks for the ethical and responsible use of AI? If so, your organization and its employees and members risk being the subject of headlines like those above, or worse, could face legal action. What will you do to help your organization address this very serious risk?

Header image source: Kathryn Conrad / Better Images of AI / CC BY 4.0.

References:

  1. Boyes, B. (2025, August 27). What are the key challenges in implementing AI in KM, and how can they be addressed? RealKM Magazine.
  2. Boyes, B. (2025, September 3). Strategies and recommendations for managing ethical considerations in the use of AI in KM. RealKM Magazine.
  3. Boyes, B. (2025, October 2). Four dimensions and five building blocks for responsible AI in knowledge management. RealKM Magazine.
  4. RealKM Magazine. (2025, October 21). AI resources update: 1. AI Ethics and Governance in Practice Program | 2. Global AI Ethics and Governance Observatory. RealKM Magazine.
  5. Boyes, B. (2025, March 13). Troubling case highlights that the KM community lacks key capabilities needed for safe AI use. RealKM Magazine.
  6. Boyes, B. (2025, July 31). Ignoring harm, saving face: strategies of non-knowledge and knowledge avoidance, and what they mean for KM. RealKM Magazine.
  7. RealKM Magazine. (2025, October 28). AI resources update: 1. AI Incident Database | 2. The Geopolitics of AI. RealKM Magazine.
  8. Riemer, K., & Peter, S. (2025, March 7). AI doesn’t really ‘learn’ – and knowing why will help you use it more responsibly. The Conversation.
  9. Boyes, B. (2025, May 7). Are you really doing enough to detect “botshit” in your AI-generated content? RealKM Magazine.
  10. Lockey, S., & Gillespie, N. (2025, October 17). AI ‘workslop’ is creating unnecessary extra work. Here’s how we can stop it. The Conversation.
  11. McKelvey, F. (2025, September 28). Generative AI might end up being worthless – and that could be a good thing. The Conversation.
  12. Tadros, E., & Karp, P. (2025, October 5). Deloitte to refund government, admits using AI in $440k report. Financial Review.
  13. Boyes, B. (2025, September 10). 95% of organizations are getting zero return on AI, while shadow AI thrives. But at what knowledge risk? RealKM Magazine.
  14. TELUS. (2025, February 26). TELUS Digital Survey Reveals Enterprise Employees Are Entering Sensitive Data Into AI Assistants More Than You Think. Press Release.
  15. Adams, C., & Rennie, E. (2025, October 6). NSW flood victims’ personal details loaded to ChatGPT in major data breach. ABC News.
  16. OAIC. (2025, January 17). Guidance on privacy and the use of commercially available AI products. Office of the Australian Information Commissioner.

Bruce Boyes

Bruce Boyes is a knowledge management (KM), environmental management, and education thought leader with more than 40 years of experience. As editor and lead writer of the award-winning RealKM Magazine, he has personally written more than 500 articles and published more than 2,000 articles overall, resulting in more than 2 million reader views. With a demonstrated ability to identify and implement innovative solutions to social and ecological complexity, Bruce has successfully completed more than 40 programs, projects, and initiatives including leading complex major programs. His many other career highlights include: leading the KM community KM and Sustainable Development Goals (SDGs) initiative, using agile approaches to oversee the on time and under budget implementation of an award-winning $77.4 million recovery program for one of Australia's most iconic river systems, leading a knowledge strategy process for Australia’s 56 natural resource management (NRM) regional organisations, pioneering collaborative learning and governance approaches to empower communities to sustainably manage landscapes and catchments in the face of complexity, being one of the first to join a new landmark aviation complexity initiative, initiating and teaching two new knowledge management subjects at Shanxi University in China, and writing numerous notable environmental strategies, reports, and other works. Bruce is currently a PhD candidate in the Knowledge, Technology and Innovation Group at Wageningen University and Research, and holds a Master of Environmental Management with Distinction and a Certificate of Technology (Electronics). As well as his work for RealKM Magazine, Bruce currently also teaches in the Beijing Foreign Studies University (BFSU) Certified High-school Pathway (CHP) program in Baotou, Inner Mongolia, China.
