hallucinations
-
Artificial intelligence
Neurosymbolic AI is the answer to large language models’ inability to stop hallucinating
To make neurosymbolic AI fully feasible, more research is needed to refine its ability to discern general rules…
-
Artificial intelligence
Bruce Boyes’ presentation to KM Trends 2026: Two very different scenarios for AI in KM
One scenario for the future of AI in KM presents a bad outcome, and the other a good outcome.
-
Artificial intelligence
Case studies in the unethical and irresponsible use of AI
Case studies of failing to detect ‘botshit’ in AI-generated outputs and breaching the privacy of vulnerable people.
-
Artificial intelligence
Understanding the ‘Slopocene’: how the failures of AI can reveal its inner workings
Deliberately pushing AI beyond its intended functions through creative misuse offers a form of AI literacy.
-
Artificial intelligence
Two horror cases highlight the dangers of blind faith in what AI generates
Notable examples of generative AI in KM are starting to emerge, but neglecting botshit can have horrific consequences.
-
Artificial intelligence
Both humans and AI hallucinate – but not in the same way
Unlike humans, LLMs try to predict the most likely response without understanding what they’re saying. This is how they can…