
Is the popular narrative harming development of AI?

Originally posted on The Horizons Tracker.

Discussions of artificial intelligence in the public domain are often framed in the language of science fiction, with dystopian pictures from the Terminator franchise or 2001: A Space Odyssey used to chronicle how technology will enslave us.

Whilst this may be entertaining, it does little to actually inform the public about what AI can and cannot do. The failure of this narrative was discussed in a recent paper[1] published by the Royal Society, which explored not only the prevailing narrative around AI, but also previous narratives around controversial technologies and concepts, such as GM food, nuclear power and climate change.

“Both fictional and many non-fictional narratives focus on issues that form either a very small subset of contemporary AI research, or that are decades if not centuries away from becoming a technological reality,” the authors say. “This disconnect between the narratives and the reality of the technology can have several major negative consequences.”

This inaccurate narrative operates at both extremes, with utopian fantasies fuelling a bubble of hype that the technology cannot possibly live up to. This bubble is not purely the fault of commentators, but also of many in the industry who wish to puff up the capabilities of their technology. This is damaging because when reality fails to match the hype, public confidence in the technology suffers.

Similarly, false fears can be hugely damaging, as much of the debate around the impact of AI on the workplace has demonstrated thus far. This form of narrative has some similarities with the hype-driven form, as both lend far too much credibility to outlandish claims about what AI can do. What's more, both distract us from discussions that do need to be had, but which are often harder to sell.

“With major social and economic questions at stake, such as the future of work and distribution of wealth, it is important for public debate to be well-founded,” they continue. “Debate needs evidence and insight into the disruptive potential and opportunities created by new forms of business or social networks, as well as attention to the direct impact on particular tasks or jobs.”

An informed debate

So what can be done to improve matters? Obviously, understanding why distorted narratives emerge plays a key role, but the report also makes a number of recommendations for practitioners to adopt:

  • Take lessons from the discussion of previous emerging technologies, so that discourse can reflect the differing levels of confidence or uncertainty in different types of technologies and over different periods.
  • Reshape AI narratives to encapsulate a wider range of analogies, rather than the extremes of utopian and dystopian fantasy around which they almost always revolve. If these stories can also reflect richer social and cultural diversity, so much the better.
  • These narratives could also emerge in new ways. The authors highlight a number of spaces that have developed in recent years to support informed dialogue around AI, with the Royal Society itself hosting a regular public lecture series called You and AI. This could also involve greater efforts to train and educate the public on AI and the technologies behind it.

It’s not an exhaustive list, but the paper is interesting in that it draws not only on practitioners from the AI field, but also on those from other fields that have suffered the slings and arrows of controversy and earned the battle scars to prove it. As such, it provides some valuable lessons not only on how important the narrative around any new technology is, but also on how it can be guided such that the public are both informed and involved.

“Attempting to control public narratives is neither achievable nor desirable, but present limitations may be at least partly addressed by communicating uncertainty through learning from narratives about other disruptive technologies; widening the body of available narratives, drawing in a wider range of authors and protagonists; and creating spaces for public dialogues,” the authors conclude.

Time will tell whether society heeds their advice and a somewhat more realistic narrative forms about the pluses and minuses of artificial intelligence.

Article source: Is The Popular Narrative Harming Development Of AI?

Header image source: Adapted from Terminator 3 by Insomnia Cured Here, which is licensed under CC BY-SA 2.0.

Reference:

  1. Cave, S., Craig, C., Dihal, K. S., Dillon, S., Montgomery, J., Singler, B., & Taylor, L. (2018). Portrayals and perceptions of AI and why they matter. The Royal Society.

Adi Gaskell

I'm an old-school liberal with a love of self-organizing systems. I hold a master's degree in IT, specializing in artificial intelligence, and enjoy exploring the edge of organizational behavior. I specialize in finding the many great things that are happening in the world, and helping organizations apply these changes to their own environments. I also blog for some of the biggest sites in the industry, including Forbes, Social Business News, Social Media Today and Work.com, whilst also covering the latest trends in the social business world on my own website. I have also delivered talks on the subject for the likes of the NUJ, the Guardian, Stevenage Bioscience and CMI, whilst also appearing on shows such as BBC Radio 5 Live and Calgary Today.
