The risks associated with AI-generated content

Originally posted on The Horizons Tracker.

As generative AI tools such as ChatGPT have mushroomed in popularity over the past 18 months, a growing number of publications have been turning to them to generate content. Research1 from the University of Alberta highlights the risks associated with this trend.

The paper examines the risks of using generative chatbots for content generation. Because chatbot responses can mix truth and falsehood, the authors warn of what they call “botshit,” which they define as inaccurate or fabricated content produced by chatbots and then used uncritically.

Human supervision

“Our paper explains that when this jumble of truth and falsehood is used for work tasks, it can become botshit,” the researchers explain. “For chatbots to be used reliably, it is important to recognize that their responses can best be thought of as provisional knowledge.”

The authors provide a typology of guardrails that can help to reduce the risks associated with chatbot use:

  • Technology-oriented guardrails – these focus on how the chatbot itself works. Depending on how autonomously it operates, different checks are needed to ensure it is producing reliable output, so that it can be trusted with different types of task.
  • Organization-oriented guardrails – these are the policies an organization puts in place to avoid spreading false information: training employees to use chatbots responsibly and auditing whether the chatbots are doing a good job, much like a code of conduct.
  • User-oriented guardrails – these concern what individual users of a chatbot should do. Different situations call for different levels of checking: when using a chatbot to confirm facts, it pays to question and double-check its answers, whereas when using it to generate ideas, less skepticism is needed. And if something seems off, users are encouraged to speak up and ask questions. The aim is to keep the workplace free of false information (a minimal sketch of how these guardrails might combine in code follows this list).
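To make the typology concrete, the sketch below shows one way the three kinds of guardrail might be combined in practice. It is an illustration rather than anything from the paper itself: the automated checks, the audit log, and the task-based review policy are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ChatbotResponse:
    prompt: str
    text: str
    cleared: bool = False  # whether the response may be used as-is

# Technology-oriented guardrail: automated checks on the output itself.
def passes_automated_checks(response: ChatbotResponse) -> bool:
    # Hypothetical check: reject empty answers and obvious hedging boilerplate.
    suspicious = ("as an ai language model", "i cannot verify")
    text = response.text.strip().lower()
    return bool(text) and not any(phrase in text for phrase in suspicious)

# Organization-oriented guardrail: log every use so output can be audited later.
AUDIT_LOG: list[dict] = []

def log_usage(response: ChatbotResponse, task: str) -> None:
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "task": task,
        "prompt": response.prompt,
        "cleared": response.cleared,
    })

# User-oriented guardrail: the level of scrutiny depends on the task.
HIGH_STAKES_TASKS = {"fact_check", "published_content"}

def use_response(response: ChatbotResponse, task: str) -> ChatbotResponse:
    if not passes_automated_checks(response):
        raise ValueError("Response failed automated checks; do not use it.")
    if task in HIGH_STAKES_TASKS:
        # Hold high-stakes output for human review before it is used.
        print(f"Human review required before using: {response.text!r}")
    else:
        response.cleared = True  # low-stakes use, e.g. brainstorming
    log_usage(response, task)
    return response

# Example: an idea-generation response passes through; a factual claim is held.
idea = use_response(ChatbotResponse("Suggest blog topics", "1. AI risk..."), "brainstorm")
claim = use_response(ChatbotResponse("When was the paper published?", "2024"), "fact_check")
```

In this sketch a brainstorming response passes straight through, while a response destined for publication is held for human review, mirroring the different levels of scrutiny the authors describe.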

To use chatbots reliably, it is crucial to treat their answers as provisional knowledge. Unlike established information from trusted sources, provisional knowledge is a work in progress. Organizations often operate under uncertainty, and in those situations they must act on information that is not fully confirmed, much like relying on sources whose accuracy is still being debated.
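One way to picture provisional knowledge in practice is to store each chatbot claim with an explicit verification status, so that downstream use has to acknowledge whether the claim has been independently confirmed. The ProvisionalFact class below is a hypothetical illustration, not something proposed in the paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProvisionalFact:
    """A chatbot claim treated as a work in progress until confirmed."""
    claim: str
    source: str = "chatbot"
    confirmed_by: Optional[str] = None  # an independent source, once checked

    @property
    def is_confirmed(self) -> bool:
        return self.confirmed_by is not None

    def confirm(self, reference: str) -> None:
        # Upgrade the claim from provisional to confirmed knowledge.
        self.confirmed_by = reference

# A claim starts out provisional and is only upgraded after checking.
fact = ProvisionalFact(claim="The cited paper appeared in Business Horizons in 2024.")
assert not fact.is_confirmed
fact.confirm("journal website")
assert fact.is_confirmed
```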

Article source: The Risks Associated With AI-Generated Content.

Header image source: Om siva Prakash on Unsplash.

Reference:

  1. Hannigan, T. R., McCarthy, I. P., & Spicer, A. (2024). Beware of Botshit: How to Manage the Epistemic Risks of Generative Chatbots. Business Horizons.

Adi Gaskell

I'm an old school liberal with a love of self-organizing systems. I hold a master's degree in IT, specializing in artificial intelligence, and enjoy exploring the edge of organizational behavior. I specialize in finding the many great things that are happening in the world, and helping organizations apply these changes to their own environments. I also blog for some of the biggest sites in the industry, including Forbes, Social Business News, Social Media Today and Work.com, whilst also covering the latest trends in the social business world on my own website. I have also delivered talks on the subject for the likes of the NUJ, the Guardian, Stevenage Bioscience and CMI, whilst also appearing on shows such as BBC Radio 5 Live and Calgary Today.
