
Factors supporting and hindering effective human-AI collaboration in knowledge ecosystems

This article is part of an ongoing series looking at AI in KM, and KM in AI.

The use of artificial intelligence (AI) in knowledge work continues to grow exponentially, but at the same time, this brings significant challenges1 such as biases and privacy risks.

The authors of a newly published paper2 in the Journal of Knowledge Management contend that while research in support of AI adoption is expanding, it remains fragmented. To address this, they carry out a systematic literature review of 101 research articles using the PRISMA guidelines3. Systematic reviews4 produce a reliable knowledge base by accumulating findings from a range of studies. Authors Imran Ali, Khoa Nguyen, Aniqua Mahmood Ali, and Tingru Cui span three different countries across two continents, which can bring research quality benefits5.

In their systematic review, Ali and colleagues delve into the powerful synergy of human-AI collaboration and how this is driving the evolution of knowledge ecosystems, which they describe as “adaptive, multi-actor networks for knowledge generation and diffusion.” Through their analysis, they develop the integrative antecedent–mediator–moderator–outcome framework shown in Figure 1 and explained after the figure. This is a common way of describing the factors related to a topic or issue, where:

  • Antecedents are the initial factors that influence the outcome. They can be direct or indirect and can include individual characteristics, environmental factors, or situational influences.
  • Outcomes are the direct effects of the antecedents, and can typically be measured.
  • Mediators are variables that explain the relationship between the antecedents and the outcome. They help to understand why or how the relationship occurs.
  • Moderators are factors that influence the relationship between the antecedents and the outcome. They can affect the strength or direction of the relationship.
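To make the mediator/moderator distinction concrete, here is a toy numeric sketch (my own illustration, not from Ali et al.; the variable names and coefficients are arbitrary): a mediator sits on the causal path and explains how an antecedent produces an outcome, while a moderator changes the strength of that path, which in statistical terms is an interaction effect.

```python
# Toy illustration (not from the paper) of mediation vs moderation
# in a simple linear model of collaboration outcomes.
# All coefficients are arbitrary, chosen only to show the structure.

def outcome_via_mediator(trust):
    # Mediator: trust -> engagement -> decision quality.
    # Engagement explains HOW trust improves the outcome.
    engagement = 0.8 * trust             # antecedent -> mediator
    decision_quality = 0.9 * engagement  # mediator -> outcome
    return decision_quality

def outcome_with_moderator(trust, user_attitude):
    # Moderator: user attitude changes the STRENGTH of the
    # trust -> outcome relationship (an interaction term).
    return (0.5 + 0.4 * user_attitude) * trust

# The same level of trust yields different outcomes
# depending on the moderator's level:
print(outcome_via_mediator(1.0))         # effect routed through engagement
print(outcome_with_moderator(1.0, 0.0))  # skeptical user: weaker effect
print(outcome_with_moderator(1.0, 1.0))  # positive user: stronger effect
```

In empirical work these structures are tested with mediation analysis and interaction-term regression respectively; the sketch above only shows why the two roles are conceptually distinct.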
Figure 1. Factors supporting and hindering effective human-AI collaboration in knowledge ecosystems (source: Ali et al., 2025).

Antecedents

These factors span five domains: trust, AI capabilities, organizational context, structural imbalances, and user expertise:

  • Trust in AI is a cornerstone of collaboration. Positive perceptions of transparency, reliability and ethical alignment enhance user confidence and engagement. Trust enables AI to function as a cognitive scaffold, refining judgments, reducing uncertainty and enhancing decision-making. In contrast, mistrust—stemming from opaque algorithms, biased outputs, or ethical concerns—undermines adoption and results in shallow or inconsistent AI use. Designing AI systems that inspire trust while managing operational complexity remains a core challenge.
  • AI capabilities and contextual relevance also constitute critical antecedents. Effective systems align with user goals, domain-specific requirements and data characteristics, offering insights that complement cognitive needs. Adaptive and context-aware AI augments human intuition by identifying risks and surfacing opportunities in dynamic settings. In contrast, systems that are rigid, poorly tailored or insensitive to evolving conditions often induce user confusion, degrade trust and impair system effectiveness.
  • Organizational context further shapes human-AI collaboration success. Cultures fostering experimentation, interdisciplinarity and learning are more conducive to AI integration, enabling gains in productivity, adaptability and innovation. Leadership commitment and sufficient resources also facilitate deeper system embedding. Conversely, risk-averse environments and limited infrastructure often relegate AI to peripheral roles, stalling its transformative potential.
  • Structural antecedents include disciplinary and geographical imbalances. Research is concentrated in management and information systems, with a geographic focus on North America, Western Europe and East Asia. This marginalizes perspectives from the Global South, public sector and labor studies. As a result, assumptions regarding trust, infrastructure and digital literacy may lack global transferability. Underrepresented regions face unique constraints involving data scarcity, limited regulation and cultural skepticism that reconfigure human-AI collaboration dynamics. Addressing these gaps through multi-regional, cross-disciplinary inquiry is imperative for building more inclusive, generalizable frameworks.
  • User expertise and experience determine how AI insights are interpreted. Novices need intuitive interfaces, guided feedback and gradual exposure to foster comfort and understanding. Experts seek sophisticated, interpretable tools that support strategic problem-solving and hypothesis-driven decision-making. Customizing systems to user proficiency ensures relevance and fosters long-term engagement.

Outcomes

When antecedents align—positive technology perceptions, context-fit AI capabilities, supportive environments and matched user expertise—transformative outcomes emerge:

  • A primary result is enhanced decision quality and efficiency, as AI supports more accurate, timely and evidence-based judgments. By automating repetitive tasks and reducing cognitive load, AI allows users to concentrate on strategic, high-value work, reshaping decision-making and operational processes.
  • User satisfaction and engagement also increase when AI systems are trustworthy and responsive to user needs. Intuitive interfaces and adaptive functionalities foster exploration and workflow refinement, nurturing innovation and a culture of continuous learning. This engagement enables AI-driven innovation and knowledge creation, where human creativity combines with algorithmic precision to identify patterns, uncover insights and support adaptive strategies.
  • Ethical and social considerations are integral. As algorithmic decisions influence fairness, privacy and accountability, organizations must implement responsible AI governance frameworks to align with societal values and regulations, reinforcing public trust.
  • De-skilling is a critical unintended consequence. Automating judgment-intensive tasks can erode user expertise, especially when systems obscure reasoning or oversimplify decisions. This is particularly concerning in high-stakes fields like healthcare or finance, where over-reliance on AI may hinder human oversight and increase risk. To mitigate this, systems should maintain user control, promote understanding and include feedback loops to preserve cognitive engagement.

These outcomes also improve organizational productivity and competitiveness. By augmenting rather than replacing human judgment, AI fosters synergistic collaboration. This enables organizations to navigate complexity, leverage data and meet evolving stakeholder expectations with resilience and adaptability.

Mediators

Mediators are key to translating conducive antecedents into effective human-AI collaboration outcomes, shaping collaboration depth and quality:

  • Trust and social presence enhance user acceptance of algorithmic recommendations. But without these mediators, even reliable AI and supportive environments may not yield improved performance. When AI appears opaque or disconnected, users hesitate or disengage, undermining system utility.
  • Explanation and cognitive alignment are essential for interpreting AI outputs. Transparent, context-sensitive explanations foster relevance and confidence in decisions. Without them, users struggle to map AI logic to domain knowledge, reducing effectiveness.
  • Emotional engagement and psychological safety also mediate outcomes. When users feel safe to challenge AI outputs, co-creation and experimentation emerge, deepening integration and reducing fear of failure.
  • AI over-dependence is another potent mediator. Trust and cognitive alignment can lead users to defer judgment, undermining critical thinking. This becomes risky in complex tasks, where passive reliance hampers adaptive reasoning. Reflective engagement and human oversight must be promoted to avoid this risk.
  • Task alignment and customization further mediate human-AI collaboration success. AI systems should match user skills, task complexity and context. Misaligned tools disengage users or overwhelm them, diluting AI’s impact. Without effective mediation, trust or expertise alone cannot ensure benefits.

Mediators enhance interpretability, emotional investment and contextual fit, ensuring human-AI collaboration promotes innovation, quality decision-making and high-performance outcomes.

Moderators

Moderators shape how favorable antecedents translate into productive human-AI collaboration outcomes by influencing relationship strength, direction and stability:

  • User attitudes crucially affect the efficacy of trust, explanation and cognitive alignment. Positive predispositions—shaped by prior experience, norms or institutional culture—encourage engagement with AI recommendations, fostering innovation and strengthening knowledge ecosystems. In contrast, skepticism or anxiety may lead to hesitance, rejection of AI outputs and a return to manual practices.
  • AI capability bias is a pivotal moderator. When users perceive AI as biased—due to flawed data or opacity—they may distrust and underutilize it, eroding the influence of trust and explanation.
  • Conversely, perceived fairness bolsters confidence and decision quality. This dynamic is especially critical in high-stakes, ethically sensitive environments, where transparency and auditability become essential.
  • Task complexity and AI adaptability also modulate human-AI collaboration effectiveness. When AI matches task demands, especially in complex or data-intensive settings, it enhances decision quality and efficiency. Poorly adapted or rigid systems, however, frustrate users and hinder adoption.
  • Explainability and transparency further shape user trust and acceptance. Clear reasoning, highlighted variables and familiar outputs foster integration, while opaque models cause confusion and resistance. In fields like finance and public health, these elements are essential for accountability.
  • Ethical alignment and perceived autonomy also moderate engagement. AI perceived as respectful of moral norms and individual control drives acceptance and satisfaction. Systems seen as coercive or misaligned may provoke rejection despite strong antecedents.

Moderators ensure human-AI collaboration remains context-sensitive, mediating how users perceive, trust and engage with AI. Managing them is essential to fostering trust, adaptability and sustainable integration in knowledge ecosystems.

Header image source: Developers, Turing Commons, CC BY-SA 4.0.

References:

  1. Rezaei, M. (2025). Artificial intelligence in knowledge management: Identifying and addressing the key implementation challenges. Technological Forecasting and Social Change, 217, 124183.
  2. Ali, I., Nguyen, K., Ali, A. M., & Cui, T. (2025). Human–AI collaboration in knowledge ecosystems: a multidisciplinary review, integrative framework and future directions. Journal of Knowledge Management, 1-22.
  3. Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., … & Moher, D. (2021). The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ, 372.
  4. Boyes, B. (2018, May 18). Using narrative reviews, systematic reviews, and meta-analyses in evidence-based knowledge management (KM). RealKM Magazine.
  5. Gaskell, A. (2019, November 26). The need for academics to develop collaborative skills. The Horizons Tracker.

Bruce Boyes

Bruce Boyes is editor, lead writer, and a director of RealKM Magazine and winner of the International Knowledge Management Award 2025 (Individual Category). He is an experienced knowledge manager, environmental manager, project manager, communicator, and educator, and holds a Master of Environmental Management with Distinction and a Certificate of Technology (Electronics). His many career highlights include: establishing RealKM Magazine as an award-winning resource with more than 2,500 articles and 5 million reader views, leading the knowledge management (KM) community KM and Sustainable Development Goals (SDGs) initiative, using agile approaches to oversee the on time and under budget implementation of an award-winning $77.4 million recovery program for one of Australia's iconic river systems, leading a knowledge strategy process for Australia’s 56 natural resource management (NRM) regional organisations, pioneering collaborative learning and governance approaches to empower communities to sustainably manage landscapes and catchments in the face of complexity, being one of the first to join a new landmark aviation complexity initiative, initiating and teaching two new knowledge management subjects at Shanxi University in China, and writing numerous notable environmental strategies, reports, and other works.
