
Two horror cases highlight the dangers of blind faith in what AI generates

In a previous RealKM Magazine article series, Dr Rachad Najjar identified 35 knowledge management (KM) processes that can be augmented by generative artificial intelligence (AI). Coinciding with a new article [1] in The Conversation reporting that generative AI has moved far enough along the hype cycle to become genuinely useful, notable examples of these applications are starting to emerge. They include the skills development example from the recent KM4Dev Knowledge Café 37, Vanessa Liu’s expert knowledge example to be presented on 28 August, Stuart French’s personal assistant example to be presented on 4 September, and the expert identification example explored in recent research [2].

However, in a recent RealKM Magazine article, Adi Gaskell reports on research [3] that supports the need for the KM community to proceed with caution regarding generative AI. The research examines the risks of using generative AI chatbots for content generation. The authors delve into what they call “botshit,” which they define as inaccurate or fabricated content produced by chatbots. They state that:

Our paper explains that when this jumble of truth and falsehood is used for work tasks, it can become botshit. For chatbots to be used reliably, it is important to recognize that their responses can best be thought of as provisional knowledge

The two cases below highlight the horrific outcomes that can result when organizations and leaders place such blind faith in AI and technology that botshit-style outputs become entrenched as key inputs to exceptionally bad decision-making.

Forthcoming RealKM Magazine articles will put forward approaches that can help the KM community to avoid such dreadful mistakes while still maximizing the potential for generative AI to support KM.

Case #1: Australian Robodebt AI scandal

The former Liberal-led Australian Government’s heinous ‘Robodebt’ scheme [4] is a frightening example of what happens when AI outputs are treated as fact, rather than “provisional knowledge” as the research referenced above advises.

As Noel Cressie writes [5] in The Conversation, the Robodebt AI system was designed to catch people exploiting welfare. It compared welfare recipients’ reported fortnightly income with their tax office-reported yearly income, the latter of which was averaged to provide fortnightly figures. If the difference showed an apparent overpayment, a red flag was raised. The system then issued a debt notice and put the onus on the recipient to prove they weren’t exploiting the welfare system.
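
The core flaw lies in that averaging step: spreading an annual income figure evenly across 26 fortnights assumes the income was earned evenly throughout the year, which is often untrue for casual and seasonal workers. The minimal sketch below is an illustration only, not the actual Robodebt code, and the function names are hypothetical; it simply shows how that assumption falsely flags fortnights in which a recipient legitimately earned nothing.

```python
# Illustrative sketch only - not the actual Robodebt system. It shows how
# averaging an annual income figure across fortnights can falsely flag a
# recipient whose income was irregular (e.g. earned in only part of the year).

FORTNIGHTS_PER_YEAR = 26

def average_annual_income(annual_income: float) -> float:
    """Spread the tax office's annual figure evenly across 26 fortnights."""
    return annual_income / FORTNIGHTS_PER_YEAR

def flag_apparent_overpayments(reported_fortnightly: list[float],
                               annual_income: float) -> list[int]:
    """Return the indices of fortnights flagged as apparent overpayments.

    The flaw: the averaged figure assumes income was earned evenly, so a
    fortnight in which the recipient genuinely earned nothing (and was
    legitimately paid welfare) still looks like under-reported income.
    """
    averaged = average_annual_income(annual_income)
    return [i for i, reported in enumerate(reported_fortnightly)
            if reported < averaged]

# Example: a casual worker who earned $26,000, all of it in the first half
# of the year, and correctly reported $0 for the second half.
reported = [2000.0] * 13 + [0.0] * 13
flags = flag_apparent_overpayments(reported, annual_income=26_000)
print(f"Fortnights wrongly flagged: {len(flags)}")  # 13 - every zero-income fortnight
```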

However, the AI system got many of the assessments seriously wrong, to the extent that it triggered numerous suicides [6]. But the flaws in the scheme were ignored, even when concerns were raised multiple times [7]. As Meredith Lewis reports in a 2019 RealKM Magazine series, a knowledge-based community protest campaign eventually saw the scheme, which was deemed illegal, shut down.

In 2022, the incoming Labor-led Australian Government launched a Royal Commission into the scheme. The Royal Commission’s findings slammed the scheme and those responsible for it [8]. The Australian Government subsequently announced that it would implement all of the Commissioner’s recommendations [9], including commencing investigations into 16 public servants, establishing clear review processes where decisions are automated, and creating an oversight body that can audit automated decisions.

Case #2: UK Post Office Horizon scandal

As summarized [10] in BBC News and chronicled in Wikipedia [11], more than 900 sub-postmasters in the United Kingdom (UK) were prosecuted for stealing between 1999 and 2015. Many of the sub-postmasters went to prison for false accounting and theft, and many were financially ruined. The government-owned Post Office itself took many cases to court, prosecuting 700 people. Another 283 cases were brought by other bodies, including the Crown Prosecution Service (CPS).

However, many were wrongly prosecuted because of incorrect information from the Post Office Horizon computer system. Horizon was produced by Japanese information technology (IT) company Fujitsu in the 1990s. Fujitsu was aware that Horizon contained software bugs as early as 1999, but the Post Office insisted that Horizon was robust and failed to disclose knowledge of the faults in the system during criminal and civil cases. While it was an IT system rather than AI, the same blindness to problems or ‘incomplete curiosity’ [12] could occur with AI.

When news of the faults finally broke publicly, it triggered legal action by sub-postmasters. The judge’s rulings in that litigation led to sub-postmasters challenging their convictions in the courts, and to the government setting up an independent inquiry in 2020. The Post Office Horizon IT Inquiry was converted into a statutory public inquiry the following year. The public inquiry is ongoing, and the Metropolitan Police are investigating personnel from the Post Office and Fujitsu.

Courts began to quash the sub-postmasters’ convictions in December 2020. By February 2024, 100 of the convictions had been overturned. Those wrongfully convicted became eligible for compensation, as did more than 2,750 sub-postmasters who had been affected by the scandal but not convicted. The final cost of compensation is expected to exceed £1 billion. In May 2024, the UK Parliament passed a law overturning the convictions of sub-postmasters in England, Wales and Northern Ireland. Scotland passed a similar law the same month.

As a result, the Post Office Horizon scandal has been called the UK’s most widespread miscarriage of justice.


Header image source: Created by Bruce Boyes with Perchance AI Photo Generator.

References:

  1. Kovanovic, V. (2024, August 18). Generative AI hype is ending – and now the technology might actually become useful. The Conversation.
  2. Borna, S., Barry, B. A., Makarova, S., Parte, Y., Haider, C. R., Sehgal, A., … & Forte, A. J. (2024). Artificial Intelligence Algorithms for Expert Identification in Medical Domains: A Scoping Review. European Journal of Investigation in Health, Psychology and Education, 14(5), 1182-1196.
  3. Hannigan, T. R., McCarthy, I. P., & Spicer, A. (2024). Beware of Botshit: How to Manage the Epistemic Risks of Generative Chatbots. Business Horizons.
  4. Turner, R. (2021, June 11). Robodebt condemned as a ‘shameful chapter’ in withering assessment by federal court judge. ABC News.
  5. Cressie, N. (2023, March 16). Robodebt not only broke the laws of the land – it also broke laws of mathematics. The Conversation.
  6. McPherson, E. (2020, August 17). Mothers who lost sons to suicide after Centrelink debts write heartbreaking letters to Senate. 9 News.
  7. Henriques-Gomes, L. (2020, September 18). Robodebt court documents show government was warned 76 times debts were not legally enforceable. The Guardian.
  8. O’Donovan, D. (2023, March 10). ‘Amateurish, rushed and disastrous’: royal commission exposes robodebt as ethically indefensible policy targeting vulnerable people. The Conversation.
  9. Evans, J. (2023, November 13). Government formally responds to Robodebt royal commission, revealing 16 public servants being investigated over scheme. ABC News.
  10. BBC News. (2024, July 31). Post Office Horizon scandal: Why hundreds were wrongly prosecuted. BBC News.
  11. Wikipedia, CC BY-SA 4.0.
  12. Croft, J. (2024, July 9). Post Office Horizon inquiry told of ‘incomplete curiosity’ and ‘toxic culture’. The Guardian.

Bruce Boyes

Bruce Boyes (www.bruceboyes.info) is a knowledge management (KM), environmental management, and education professional with over 30 years of experience in Australia and China. His work has received high-level acclaim and been recognised through a number of significant awards. He is currently a PhD candidate in the Knowledge, Technology and Innovation Group at Wageningen University and Research, and holds a Master of Environmental Management with Distinction. He is also the editor, lead writer, and a director of the award-winning RealKM Magazine (www.realkm.com), and teaches in the Beijing Foreign Studies University (BFSU) Certified High-school Program (CHP).
