AI struggles to match the flexibility of human thought
Originally posted on The Horizons Tracker.
As we gain a greater understanding of the role generative AI might play in the workplace, there’s a growing consensus that it will work alongside humans rather than replace them. A recent study1 from Harvard Business School underlines why this is likely to be the case.
The research focused on the kind of split-second decisions we all make throughout the day: the quick judgments that help us orient ourselves before we get started and pivot whenever circumstances dictate. It's something humans do exceptionally well, but technology doesn't.
Instant flexibility
The authors argue that AI still lacks the innate adaptability that humans effortlessly demonstrate—a capability essential for swiftly responding to changing circumstances.
As many companies turn to AI to streamline operations and boost productivity, the research underscores the technology's limitations. Unlike humans, AI struggles to navigate evolving environments because it lacks self-awareness and an understanding of its own capabilities.
This deficiency prompts concerns about the safety of relying on AI, particularly in scenarios such as autonomous driving. When an unexpected challenge arises, such as the vehicle getting stuck in a ditch, AI may fail to recognize that it now faces a new problem beyond basic navigation.
Head to head
The researchers compared humans and AI, and in particular their ability to think flexibly, using several video games in which players were required to complete a number of tasks. Each task was designed to test the players' ability to locate themselves in the game and then respond accordingly to the environment they found themselves in.
The setup resembled a simplified rendition of a classic video game scenario akin to Mario Kart, featuring four "possible selves" denoted by red squares. However, only one avatar, referred to as the "digital self," could be controlled by the player's keypresses. To complete the game, the player, whether human or machine, had to maneuver the digital self to a designated goal using basic directional moves: up, down, left, or right. Human players used arrow keys for navigation. Each game version introduced obstacles that hindered the straightforward process of locating one's avatar and reaching the goal.
Although the games theoretically allowed players to solve them without self-orientation by selecting the closest avatar to the goal and navigating it accordingly, the researchers hypothesized that human players would employ a self-orienting strategy. This approach involved first identifying their digital self among the avatars and then directing it towards the rewarding goal.
The machines, by contrast, used several of the most common forms of reinforcement learning, each set up so that the AI would learn directly from the game's images. The outcome was unanimous, with the human players winning 4-0.
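To make the setup more concrete, the sketch below imagines a stripped-down version of such a self-orienting task: several candidate avatars sit on a small grid, only one of them responds to keypresses, and the player first probes to discover which avatar is the digital self before steering it to the goal. This is purely an illustrative assumption of how such a game and strategy might look, not the researchers' actual environment or code, and all names and parameters are invented.

```python
# Minimal sketch of a self-orienting task, in the spirit of the study.
# Illustrative only: the environment, strategy, names and parameters are
# assumptions, not the researchers' implementation.
import random

MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

class SelfOrientingGame:
    def __init__(self, size=8, n_avatars=4, seed=None):
        rng = random.Random(seed)
        self.size = size
        # Place the goal and the candidate avatars on distinct cells.
        cells = [(x, y) for x in range(size) for y in range(size)]
        picks = rng.sample(cells, n_avatars + 1)
        self.goal = picks[0]
        self.avatars = picks[1:]
        # Only one avatar actually responds to the player's keypresses.
        self.controlled = rng.randrange(n_avatars)

    def step(self, action):
        """Apply a directional move to the controlled avatar only."""
        dx, dy = MOVES[action]
        x, y = self.avatars[self.controlled]
        nx = min(max(x + dx, 0), self.size - 1)
        ny = min(max(y + dy, 0), self.size - 1)
        self.avatars[self.controlled] = (nx, ny)
        return self.avatars[self.controlled] == self.goal  # True when solved

def solve_by_self_orienting(game, max_steps=100):
    """First identify the digital self by probing, then walk it to the goal."""
    self_idx = 0  # fallback, unreachable on grids wider than one cell
    # Probe: issue a move and see which avatar changed position. If the first
    # probe is blocked by a wall, probe the opposite direction instead.
    for probe in ("right", "left"):
        before = list(game.avatars)
        game.step(probe)
        moved = [i for i, (a, b) in enumerate(zip(before, game.avatars)) if a != b]
        if moved:
            self_idx = moved[0]
            break

    # Orient: greedily steer the identified avatar towards the goal.
    for _ in range(max_steps):
        x, y = game.avatars[self_idx]
        gx, gy = game.goal
        if (x, y) == (gx, gy):
            return True
        if x != gx:
            action = "right" if gx > x else "left"
        else:
            action = "down" if gy > y else "up"
        if game.step(action):
            return True
    return False

if __name__ == "__main__":
    game = SelfOrientingGame(seed=42)
    print("Solved:", solve_by_self_orienting(game))
```

The reinforcement learning baselines in the study, by contrast, had to learn their behaviour from the raw game images, with no explicit "which avatar am I?" step built in.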
Room to improve
The study reminds us that, for all the hype surrounding AI at the moment, there are many areas where its performance lags behind that of humans. Dealing with the unexpected appears to be one of them, and it is crucially important in many domains.
For instance, the researchers highlight how doctors in the emergency room have to reorient themselves to each new patient who crosses their path, because each one presents a completely new and different problem to solve.
They explain that current AI systems attempt to achieve something similar by throwing a lot of data at the problem in the hope that the technology learns the best approach for a given situation. Humans, by contrast, continuously adapt to their surroundings, which makes them better at this task than AI currently is.
The authors conclude by suggesting that when using AI in swiftly changing situations, it's best to proceed with caution. Managers should know when AI speeds things up and when it slows them down or is more likely to make mistakes. The research suggests that AI tends to struggle when conditions change significantly and require quick adjustments.
“In any sort of changing environment setting—like shifting between different workflows, providing personalized care to a wide range of patients with various problems, or the example of an automated vehicle having to respond to changing environments—this is where humans are going to shine more than automation systems,” they conclude. “If you more deeply understand why your AI systems are limited, you are probably better equipped to know when and how to deploy them in practice.”
Article source: AI Struggles To Match The Flexibility Of Human Thought.
Header image source: Created by Bruce Boyes with Perchance AI Photo Generator.
- De Freitas, J., Uğuralp, A. K., Oğuz-Uğuralp, Z., Paul, L. A., Tenenbaum, J., & Ullman, T. D. (2023). Self-orienting in human and machine learning. Nature Human Behaviour, 7(12), 2126-2139.