
AI is already human-centred, and maybe that’s the problem

Originally posted on The Horizons Tracker.

As concerns about the rising power of artificial intelligence have grown in recent years, the “human-centred AI” (HCAI) movement has emerged to counter them. HCAI aims to ensure that AI develops in a way that prioritizes our needs and values, augmenting rather than replacing humans. It’s a movement that strives to ensure AI empowers people and enhances our capabilities, while also addressing the very real social, cultural, and ethical implications of the technology.

A recent paper [1] from Radboud University casts a degree of doubt on this noble-sounding movement. The paper suggests we need to rethink what we associate with HCAI, because the current definition focuses too much on technical features.

A sociotechnical relationship

Instead, the author argues that AI is fundamentally a sociotechnical relationship, with our cognitive labor either enhanced, displaced, or replaced by the technology. As such, when we view things in this way, it’s impossible for any AI not to be human-centred because it can’t function without human input, cognition, and oversight, even if this human role isn’t always apparent.

“AI is human-centric, not because it behaves like or is designed to be like humans, but because it requires a ghost in the machine, often literally an obfuscated human-in-the-loop to properly function,” the author argues.

The paper argues that every AI system should be assessed according to which of three forms of relation with humans it embodies:

  • Enhancement, whereby AI is capable of augmenting our capabilities.
  • Replacement, whereby AI largely substitutes for humans.
  • Displacement, whereby AI shunts humans out of cognitively rewarding work, such as we’re seeing at the moment, with AI doing a lot of thinking work for us.

Determining the relationship

The researcher proposes two steps to work out which of these categories any particular AI technology falls into. The first is to discern the relationship: does the technology relate to cognitive tasks at all? It’s reasonable to assume that most of the AI technologies currently on the market do.

The next step is to characterize the relationship. This is when you home in on whether the technology replaces humans, enhances humans, or displaces humans.

This may seem like a small step, but the paper argues that many modern AI systems contain a huge amount of human effort that’s often hidden from the end user: the ghost workers operating in the shadows, annotating images for a few cents at a time, and the huge numbers of people who develop the models LLMs rely on or curate the datasets they’re trained on.

The author argues that this obfuscates the amount of human endeavor behind modern AI systems so comprehensively that many are fooled into thinking that the AI is thinking for itself.

Flawed benchmarking

The paper also takes aim at the performance benchmarks often used to highlight the success of AI in the workplace. These benchmarks typically pit human against machine, but they seldom achieve meaningful equivalence. Indeed, the author contends that the very act of comparing humans to AI is scientifically and logically unsound.

Ultimately, it’s very unlikely that truth will simply fall out of the data. Instead, it’s shaped by human thought and our interaction with the world. That may be an uncomfortable idea in a world awash with hype around AI’s supposed capabilities, but the truth is that parts of human cognition cannot be automated.

There are no AI systems that can think for us without us playing a huge role, and there probably never will be. We might long for such a machine, but only living things are truly self-sustaining. Machines are not. A computer doesn’t decide to turn itself off and on again.

To get past the confusion between correlation and understanding, we must let go of the belief that better performance on benchmarks somehow leads to insight. No stack of test scores can ever amount to a causal explanation.

To de-fetishize AI, we must see it for what it is: a set of relationships between humans and their tools, in which it appears that thinking has been offloaded. But that appearance requires careful analysis. The human mind still sits at the center of it all.

We may not be able to remove the human from the machine. But we can stop treating the human as a ghost and instead recognize it as something that is, and always has been, at the very heart of modern AI.

Article source: AI is Already Human-Centred, and Maybe That’s the Problem.

Header image: This artwork captures humanity’s collective endeavour in building artificial intelligence, drawing inspiration from Persian Negargari (miniature painting). Shady Sharify / Better Images of AI / CC BY 4.0.

Reference:

  1. Guest, O. (2025). What does “human-centred AI” mean? arXiv preprint arXiv:2507.19960.

Adi Gaskell

I'm an old school liberal with a love of self organizing systems. I hold a masters degree in IT, specializing in artificial intelligence and enjoy exploring the edge of organizational behavior. I specialize in finding the many great things that are happening in the world, and helping organizations apply these changes to their own environments. I also blog for some of the biggest sites in the industry, including Forbes, Social Business News, Social Media Today and Work.com, whilst also covering the latest trends in the social business world on my own website. I have also delivered talks on the subject for the likes of the NUJ, the Guardian, Stevenage Bioscience and CMI, whilst also appearing on shows such as BBC Radio 5 Live and Calgary Today.
