How trust in autonomous products emerges

Originally posted on The Horizons Tracker.

Trust in technology is crucial for adoption, and especially so for autonomous technologies, in which we place considerable faith to function safely and effectively. Developing and maintaining trust in technologies such as autonomous vehicles is made that much harder by the fact that these technologies are not infallible and should not be regarded as such.

A new paper [1] from Stanford University explores how product design can help us develop the right level of trust in such technologies. The researchers examined how altering our mood affects our trust in a smart speaker.

“We definitely thought that if people were sad, they would be more suspicious of the speaker and if people were happy, they would be more trusting,” the researchers explain. “It wasn’t even close to that simple.”

Peak performance

The research found that, perhaps unsurprisingly, our opinion of a technology's performance is underpinned by the trust we have in it. This trust was found to differ by age, education, and gender. What also emerged, however, was that our mood plays an important part in our trust levels.

“An important takeaway from this research is that negative emotions are not always bad for forming trust. We want to keep this in mind because trust is not always good,” the researchers explain.

When we develop trust in other people, we usually take into account not only their ability but also factors such as their fairness, objectivity, and honesty. Our own personal qualities also shape what we expect from others. These factors tend to be ignored when we examine trust in technology, where the competency of the technology is all that seems to matter.

Mood and trust

Over a couple of experiments, the researchers attempted to address this gap and better understand how mood affects our trust in technology. Participants interacted with a simulated smart speaker that contained a number of pre-recorded answers. Before each interaction, the volunteers were quizzed about their trust in various technologies, and then shown images designed to induce either a good mood, a bad mood, or no change in mood at all.

Each volunteer asked the speaker 10 predetermined questions and received 10 pre-recorded answers of varying accuracy and helpfulness. After each question, the participants rated their satisfaction with the answer and recorded whether it met their expectations. Once all 10 questions had been answered, they indicated how much they trusted the speaker.

The results reveal that when the answers were unsatisfactory, none of the variables measured or manipulated during the experiment mattered. When the answers were satisfactory, however, trust appeared higher among men and among less educated participants, with older volunteers the least likely to trust the device.

What was perhaps most interesting is that when the mood of those whose expectations were met was primed, trust typically increased regardless of whether the volunteers were primed for a good mood or a bad mood. The researchers believe this could be because we're more tolerant when we're emotional.

“Product designers always try to make people happy when they’re interacting with a product,” the researchers say. “This result is quite powerful because it suggests that we should not only focus on positive emotions but that there is a whole emotional spectrum that is worth studying.”

Suffice it to say, actually designing this into a product is by no means easy, but the researchers believe their findings warrant attention from product designers.

“It bothers me that engineers pretend that they’re neutral to affecting people’s emotions and their decisions and judgments, but everything they design says, ‘Trust me, I’ve got this totally under control. I’m the most high-tech thing you’ve ever used,’” the researchers explain. “We test cars for safety, so why should we not test autonomous cars to determine whether the driver has the right level of knowledge and trust in the vehicle to operate it?”

Article source: How Trust In Autonomous Products Emerges.

Header image: Amazon Alexa – Echo Dot. Ajay Suresh on Flickr, CC BY 2.0.

Reference:

  1. Liao, T., & MacDonald, E. F. (2021). Manipulating Users’ Trust of Autonomous Products With Affective Priming. Journal of Mechanical Design, 143(5).
Adi Gaskell

I'm an old school liberal with a love of self organizing systems. I hold a masters degree in IT, specializing in artificial intelligence and enjoy exploring the edge of organizational behavior. I specialize in finding the many great things that are happening in the world, and helping organizations apply these changes to their own environments. I also blog for some of the biggest sites in the industry, including Forbes, Social Business News, Social Media Today and Work.com, whilst also covering the latest trends in the social business world on my own website. I have also delivered talks on the subject for the likes of the NUJ, the Guardian, Stevenage Bioscience and CMI, whilst also appearing on shows such as BBC Radio 5 Live and Calgary Today.
