Brain power

What sheep can teach us about driverless cars

Originally posted on The Horizons Tracker.

Sheep may appear to have precious little to do with the advanced technology behind autonomous vehicles, but new research1 from the USC Institute for Creative Technologies suggests there are things we can learn from them that could aid the development of driverless technology.

The study revolved around how our opinions tend to be heavily influenced by those of our peers.  The researchers found that when we know how our peers think about autonomous vehicles, and especially about their safety, our own thinking adjusts accordingly.

Understanding what we want

The authors argue that technology is usually designed to do what people want, but suggest that the social nature of our desires is often overlooked in this process.  For instance, the trolley problem is commonly used to highlight the ethics of driverless technology, but our decision-making in such a scenario is malleable depending on the opinions of those around us.  This suggests that our decision-making is often nuanced and not a case of moral absolutes.

The researchers conducted four simulation experiments to explore how we process and act on the kind of moral dilemmas we might face with autonomous vehicles.  For instance, the experiments showed that people tend to use the severity of injury to themselves and to others as a guide to their decision-making.  The higher the risk to others, the more likely we are to choose to accept harm to ourselves instead.

In one experiment, however, the authors introduced a social dimension by telling volunteers what their peers had chosen in the same situation.  For instance, when participants were told that others had chosen to risk injury to themselves, the proportion willing to do likewise rose by 20%.

The researchers believe their work has implications for autonomous technologies more broadly, including drones and industrial robots, with manufacturers needing to appreciate the role our peers play in our decision-making, especially in life-or-death situations.

It’s also important to ensure a sufficient level of transparency about how the machines are programmed and to allow humans to regain control in such situations if they wish.

There are also implications from a policy perspective, with legislators needing to understand this aspect of human decision-making to ensure rules are crafted appropriately.  Similarly, public health campaigns should take account of this peer effect and be designed to nudge future vehicle owners in the right direction.

Article source: What Sheep Can Teach Us About Driverless Cars.

Header image source: Sam Carter on Unsplash.

Reference:

  1. De Melo, C. M., Marsella, S., & Gratch, J. (2020). Risk of Injury in Moral Dilemmas with Autonomous Vehicles. Frontiers in Robotics and AI, 7, 213.

Adi Gaskell

I'm an old school liberal with a love of self-organizing systems. I hold a master's degree in IT, specializing in artificial intelligence, and enjoy exploring the edge of organizational behavior. I specialize in finding the many great things that are happening in the world, and helping organizations apply these changes to their own environments. I also blog for some of the biggest sites in the industry, including Forbes, Social Business News, Social Media Today and Work.com, whilst also covering the latest trends in the social business world on my own website. I have also delivered talks on the subject for the likes of the NUJ, the Guardian, Stevenage Bioscience and CMI, whilst also appearing on shows such as BBC Radio 5 Live and Calgary Today.
