
Case studies in the unethical and irresponsible use of AI
This article is part of an ongoing series looking at AI in KM, and KM in AI.
Recent articles in RealKM Magazine's long-running artificial intelligence (AI) series have identified that ethics is a significant challenge1 in implementing AI in knowledge management (KM), and have provided information to assist knowledge managers in addressing these ethical considerations. This includes AI ethics strategies and recommendations2, dimensions and building blocks for responsible AI3, and AI ethics and governance resources4.
Major KM community organizations should be showing leadership in this regard through the development and implementation of model frameworks for ethical and responsible AI in KM. As well as providing vital guidance for these organizations and their activities, such model frameworks can motivate and educate organization members and networks. However, KM organizations are currently demonstrating the opposite, as a troubling case5 highlights. And if anyone in the KM community thinks this isn't important, or isn't their problem, then a disturbing case of ignoring harm6 is essential reading.
If the use of AI in KM continues in the absence of such leadership, then it's really only a matter of time before knowledge managers end up facing legal action in the courts, public criticism in the media, or both. The two case studies below are examples of what can happen. They are far from isolated examples: the AI Incident Database7 has logged reports of hundreds of such cases.
Case 1. Failing to detect “botshit” in AI-generated outputs
Large language models (LLMs) like ChatGPT do not actually understand8 the texts they are working with. Rather, LLMs are built by identifying mathematical relationships between words, with the aim of predicting what text is likely to go with what other text. This lack of understanding means that LLMs make mistakes, often described as "hallucinations." Well-known examples of these hallucinations include changing the meaning of text9, inventing references that don't exist, and attributing quotes to people who never said them.
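To make this concrete, the short sketch below shows what that prediction looks like in practice. It is illustrative only: it assumes Python, the open-source Hugging Face transformers library, and the small open GPT-2 model rather than the proprietary models behind services like ChatGPT, but the underlying next-word principle is the same. The model simply ranks candidate continuations by probability; no understanding of the topic is involved.

```python
# Illustrative sketch (assumes Python with the Hugging Face "transformers"
# library and the small open GPT-2 model): an LLM scores every possible
# next token by probability, which is all that "prediction" means here.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Knowledge management is the practice of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# Turn the scores at the final position into probabilities and show the
# five most likely continuations: statistically plausible text, nothing more.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r:>12}  p={p.item():.3f}")
```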
Because of these mistakes, the outputs of LLMs need to be rigorously checked for accuracy10 as part of the safety and accountability measures in frameworks for ethical and responsible AI (as linked in this article's first paragraph above). The potentially significant resources needed for this checking also need to be factored into business case considerations of whether or not AI offers an organization real value11.
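What does such checking look like in practice? The sketch below is one narrow, illustrative example: it assumes Python with the requests library, and uses the public Crossref API to test whether DOIs cited in an AI-drafted document resolve to real records. It catches only one class of hallucination (invented references) and is no substitute for the human review that an ethical and responsible AI framework requires.

```python
# Illustrative sketch (assumes Python with the "requests" library): verify
# that DOIs cited in AI-generated text resolve to real records via the
# public Crossref API. Catches invented references only; everything else
# still needs human review.
import re
import requests

def check_dois(text: str) -> dict[str, bool]:
    """Map each DOI found in the text to whether Crossref has a record for it."""
    dois = set(re.findall(r'10\.\d{4,9}/[^\s"<>]+', text))
    results = {}
    for doi in dois:
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        results[doi] = resp.status_code == 200  # 404 means no such record exists
    return results

# Hypothetical usage: flag any citation the registry cannot confirm.
draft = open("ai_drafted_report.txt").read()  # file name is a placeholder
for doi, exists in check_dois(draft).items():
    if not exists:
        print(f"Check manually: DOI {doi} was not found in Crossref")
```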
An example of an organization failing to do these checks is reported in the following article12 in a major Australian newspaper. The organization, consulting firm Deloitte, had also failed to disclose its use of AI in preparing the original report. As the article highlights, these failures have put the organization's reputation at significant risk and call into question the credibility of the report's findings.
A potential contributor to this situation is the insidious problem of "shadow AI"13, where employees use their personal AI accounts in the workplace without the knowledge or permission of their employers.
Case 2. Breaching the privacy of vulnerable people
Another insidious problem occurring in the absence of organizational frameworks for ethical and responsible AI is employees entering confidential or private information into public AI platforms14. An example of this is presented in the following article15 from the news service of Australia's national broadcaster, the ABC.
Not only is this action a breach of Australia’s privacy laws16, but it risks making public the sensitive information of flood victims who have already suffered the devastating loss of their homes and possessions. It is truly appalling that they are now also the victims of the unethical and irresponsible use of AI.
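By way of illustration, a very basic technical guard against this kind of breach is sketched below. It assumes only Python's standard re module and uses hypothetical patterns for email addresses and Australian-format phone numbers. Real safeguards need far broader coverage (names, addresses, case numbers) and belong inside an approved organizational framework, not in ad-hoc scripts.

```python
# Illustrative sketch (assumes only Python's standard "re" module): strip
# obvious personal identifiers from text before it leaves the organization
# for any public AI platform. The patterns below are hypothetical examples,
# not a complete privacy control.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"(?:\+61|0)[2-478](?:[ -]?\d){8}")  # Australian-format numbers

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    text = EMAIL.sub("[REDACTED EMAIL]", text)
    text = PHONE.sub("[REDACTED PHONE]", text)
    return text

prompt = "Summarise the claim lodged by jane.citizen@example.com, ph 0412 345 678."
print(redact(prompt))
# -> Summarise the claim lodged by [REDACTED EMAIL], ph [REDACTED PHONE].
```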
Is your organization using AI without having first developed and put in place frameworks for the ethical and responsible use of AI? If so, your organization and its employees and members risk being the subject of headlines like those above, or worse, could face legal action. What will you do to help your organization address this very serious risk?
Header image source: Kathryn Conrad / Better Images of AI / CC BY 4.0.
References:
- Boyes, B. (2025, August 27). What are the key challenges in implementing AI in KM, and how can they be addressed? RealKM Magazine. ↩
- Boyes, B. (2025, September 3). Strategies and recommendations for managing ethical considerations in the use of AI in KM. RealKM Magazine. ↩
- Boyes, B. (2025, October 2). Four dimensions and five building blocks for responsible AI in knowledge management. RealKM Magazine. ↩
- RealKM Magazine. (2025, October 21). AI resources update: 1. AI Ethics and Governance in Practice Program | 2. Global AI Ethics and Governance Observatory. RealKM Magazine. ↩
- Boyes, B. (2025, March 13). Troubling case highlights that the KM community lacks key capabilities needed for safe AI use. RealKM Magazine. ↩
- Boyes, B. (2025, July 31). Ignoring harm, saving face: strategies of non-knowledge and knowledge avoidance, and what they mean for KM. RealKM Magazine. ↩
- RealKM Magazine. (2025, October 28). AI resources update: 1. AI Incident Database | 2. The Geopolitics of AI. RealKM Magazine. ↩
- Riemer, K., & Peter, S. (2025, March 7). AI doesn’t really ‘learn’ – and knowing why will help you use it more responsibly. The Conversation. ↩
- Boyes, B. (2025, May 7). Are you really doing enough to detect “botshit” in your AI-generated content? RealKM Magazine. ↩
- Lockey, S., & Gillespie, N. (2025, October 17). AI ‘workslop’ is creating unnecessary extra work. Here’s how we can stop it. The Conversation. ↩
- McKelvey, F. (2025, September 28). Generative AI might end up being worthless – and that could be a good thing. The Conversation. ↩
- Tadros, E., & Karp, P. (2025, October 5). Deloitte to refund government, admits using AI in $440k report. Financial Review. ↩
- Boyes, B. (2025, September 10). 95% of organizations are getting zero return on AI, while shadow AI thrives. But at what knowledge risk? RealKM Magazine. ↩
- TELUS. (2025, February 26). TELUS Digital Survey Reveals Enterprise Employees Are Entering Sensitive Data Into AI Assistants More Than You Think. Press Release. ↩
- Adams, C., & Rennie, E. (2025, October 6). NSW flood victims’ personal details loaded to ChatGPT in major data breach. ABC News. ↩
- OAIC. (2025, January 17). Guidance on privacy and the use of commercially available AI products. Office of the Australian Information Commissioner. ↩