
Strategies and recommendations for managing ethical considerations in the use of AI in KM
This article is part of an ongoing series looking at AI in KM, and KM in AI.
A recent RealKM Magazine article1 reported on new research2 exploring the challenges of using artificial intelligence (AI) in knowledge management (KM) and how to address them. The research study categorised the challenges into three distinctive domains: technological, organisational, and ethical.
Another new research study3 published in the journal Telematics and Informatics Reports looks at one of those domains – ethical considerations – and puts forward strategies and recommendations for addressing these considerations. To do this, study authors Daniel Kogi Njiru, David Muchangi Mugo, and Faith Mueni Musyoka carried out a systematic review using the PRISMA guidelines4. Systematic reviews5 produce a more reliable knowledge base through accumulating findings from a range of studies. The study's emphasis was AI-based user profiling for KM systems with a focus on academic environments, but its findings and recommendations are applicable to the ethics of the wider use of AI in KM.
Findings
From their study, Njiru, Mugo, and Musyoka identified five primary areas of ethical considerations: privacy, algorithmic bias, transparency, accountability, and fairness.
Privacy is a major concern in AI, covering how data is collected, stored, and used. It was the most frequent ethical concern, appearing in 27.9 % of the reviewed studies. This emphasizes the need for strong data management practices to protect user privacy and ensure that AI systems do not perpetuate harmful biases.
Algorithmic bias is another important area of concern, addressed in 25.6 % of the reviewed studies. Gender, racial, and age biases are identified as problems. These biases arise from demographic homogeneity, underrepresentation of certain races, and biases against specific age groups. The consequences of these biases are far-reaching, including reinforcing stereotypes, showing cultural insensitivity, and widening generational knowledge gaps.
Transparency is important for building trust in AI systems, and 16.3 % of the reviewed studies focus on this aspect. Key issues include making models interpretable, providing explanations for decisions, and tracing the origin of data. However, these areas are often compromised by misleading correlations, overlooked confounding factors, and temporal inconsistencies. These problems can lead to misleading insights, inaccurate predictions, and unreliable trend analysis.
Accountability is another ethical consideration, with 16.3 % of the reviewed studies examining responsible AI development, audit mechanisms, and legal compliance. The analysis highlights sources of bias such as improper comparators, contextual insensitivity, and historical bias propagation. These biases can lead to unfair performance evaluations, inappropriate knowledge transfer, and the perpetuation of past inequalities.
Fairness is a basic ethical principle, with 13.9 % of the reviewed studies focusing on equal opportunity, non-discrimination, and inclusivity. However, these areas are often influenced by cognitive biases, availability heuristics, and anchoring biases. These biases can result in echo chamber effects, an overemphasis on recent information, and resistance to profile updates.
Strategies
Njiru, Mugo, and Musyoka also uncovered a range of strategies for tackling the identified ethical considerations, and tested their effectiveness. Each strategy demonstrated positive outcomes but also had specific limitations.
The strategies, with links to further information, are:
- Explainable AI6 techniques play an important role in understanding how AI models make decisions:
  - Local interpretable model-agnostic explanations (LIME)7 offers interpretable predictions on a local level, but it can be computationally complex.
  - SHapley Additive exPlanations (SHAP)8 helps visualize the importance of different features, but scalability can be an issue.
  - Counterfactual explanations9 provide user-friendly rationales for decisions, but ensuring that the explanations are actionable remains a challenge.
- Privacy-preserving algorithms10 are designed to protect sensitive data:
  - Differential privacy11 guarantees statistical privacy, but there is a trade-off between privacy and utility.
  - Federated learning12 allows for decentralized model training, but there can be communication overheads.
  - Homomorphic encryption13 enables secure multi-party computation, but performance may be limited.
  - A layered approach that combines federated learning, differential privacy, and homomorphic encryption preserves privacy by collectively addressing data anonymity, compliance, and secure processing.
- Ethical guidelines and frameworks ensure ethical practices in AI:
  - AI ethics boards promote organizational accountability, but require diverse representation to be effective.
  - Ethics-by-design14 principles integrate ethical considerations into the design process, although implementing them within existing workflows can be challenging.
  - Ethical impact assessments15 systematically evaluate risks, but they require substantial resources.
- Human-in-the-loop16 approaches involve human oversight and feedback:
  - Expert oversight ensures domain-specific quality control, but scalability can be a challenge.
  - User feedback integration allows for continuous system improvement, but there is a risk of user manipulation.
  - Collaborative decision-making balances human and AI input, but requires careful definition of optimal interaction points.
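To make one of these strategies concrete, the core idea of differential privacy can be illustrated with a minimal sketch: a counting query over user-profile data is released with Laplace noise calibrated to the privacy parameter epsilon. The dataset, predicate, and epsilon value below are hypothetical, and a real deployment would use a vetted differential privacy library rather than this toy code.

```python
import random

def dp_count(values, predicate, epsilon):
    """Release a count with Laplace noise calibrated to epsilon.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so Laplace noise with scale 1/epsilon
    gives epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for v in values if predicate(v))
    scale = 1.0 / epsilon
    # A Laplace(0, scale) sample is the difference of two
    # exponential samples with mean `scale`.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise

# Hypothetical query: how many profiled users are under 30?
ages = [22, 34, 29, 41, 27, 55, 30, 19]
noisy_answer = dp_count(ages, lambda a: a < 30, epsilon=0.5)
```

A smaller epsilon means stronger privacy but a noisier answer, which is exactly the privacy–utility trade-off the reviewed studies note for differential privacy.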
Key recommendations
From their study findings, Njiru, Mugo, and Musyoka make two further key recommendations:
1. Ethical AI feedback loop (EAFL)
Organizations should implement an ‘ethical AI feedback loop’ (EAFL) system that allows for continuous monitoring and iterative improvement of algorithms.
EAFL is a process where an AI system’s performance is continuously monitored, evaluated for ethical compliance (such as fairness, transparency, and privacy), and adjusted based on feedback to ensure responsible and unbiased outcomes. It’s a proactive approach to embed ethical principles into the AI lifecycle, preventing biases from being amplified and ensuring the AI aligns with human values and regulations.
Unlike static audits, EAFL enables continuous adaptation, closing the ‘actionability gap’. This approach should utilize federated learning techniques to gather insights while safeguarding individual privacy, ensuring that ethical considerations are integral to the development process.
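As a hypothetical illustration of what one iteration of such a feedback loop might check, the sketch below computes a simple demographic parity gap across user groups and flags the model for review when the gap exceeds a threshold. The group names, decisions, and threshold are invented for illustration; they are not drawn from the study.

```python
def demographic_parity_gap(outcomes):
    """Gap between the highest and lowest favourable-outcome rates.

    `outcomes` maps each group to a list of binary decisions
    (1 = favourable outcome, 0 = unfavourable).
    """
    rates = [sum(decisions) / len(decisions) for decisions in outcomes.values()]
    return max(rates) - min(rates)

def eafl_check(outcomes, threshold=0.1):
    """One monitoring pass of an ethical AI feedback loop (EAFL)."""
    gap = demographic_parity_gap(outcomes)
    return {"gap": gap, "flag_for_review": gap > threshold}

# Hypothetical monitoring data from a profiling model
decisions = {
    "group_a": [1, 1, 1, 0, 1],  # favourable rate 0.8
    "group_b": [1, 0, 0, 0, 1],  # favourable rate 0.4
}
report = eafl_check(decisions, threshold=0.1)
# report["flag_for_review"] is True: the 0.4 gap exceeds the threshold
```

In a full EAFL, a flagged result would trigger the adjustment step — retraining, reweighting, or human review — before the next monitoring cycle.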
2. Assessing ‘ethical debt’
‘Ethical debt’17 results from not considering possible negative consequences or societal harms. Implementation of EAFL and the strategies above should be accompanied by standardized metrics that assess the ethical debt of AI systems across fairness, transparency, privacy, and accountability. By setting clear, measurable standards, organizations can transition from theoretical discussions to practical, actionable ethical guidelines applicable across various knowledge management contexts.
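One hypothetical way such a standardized metric could be sketched is a weighted ‘ethical debt’ score: each dimension receives a compliance score between 0 and 1, and debt is the weighted shortfall from full compliance. The four dimensions come from the study's recommendation, but the scoring scheme and weights below are illustrative assumptions, not a method proposed by the authors.

```python
DIMENSIONS = ("fairness", "transparency", "privacy", "accountability")

def ethical_debt(scores, weights=None):
    """Weighted shortfall from full compliance, in [0, 1].

    `scores` maps each dimension to a compliance score in [0, 1];
    a debt of 0.0 means fully compliant, 1.0 fully non-compliant.
    """
    if weights is None:
        weights = {d: 1.0 for d in DIMENSIONS}
    total_weight = sum(weights[d] for d in DIMENSIONS)
    shortfall = sum(weights[d] * (1.0 - scores[d]) for d in DIMENSIONS)
    return shortfall / total_weight

# Hypothetical assessment of a KM profiling system
debt = ethical_debt({
    "fairness": 0.8,
    "transparency": 0.6,
    "privacy": 0.9,
    "accountability": 0.7,
})
# debt == 0.25: the system carries a quarter of the maximum possible debt
```

Tracking such a score over successive EAFL cycles would give organizations the clear, measurable standard the authors call for, rather than a purely theoretical discussion.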
Article source: Ethical considerations in AI-based user profiling for knowledge management: A critical review, CC BY 4.0.
Header image source: Nacho Kamenov & Humans in the Loop / Better Images of AI / CC BY 4.0.
References:
- Boyes, B. (2025, August 27). What are the key challenges in implementing AI in KM, and how can they be addressed? RealKM Magazine. ↩
- Rezaei, M. (2025). Artificial intelligence in knowledge management: Identifying and addressing the key implementation challenges. Technological Forecasting and Social Change, 217, 124183. ↩
- Njiru, D. K., Mugo, D. M., & Musyoka, F. M. (2025). Ethical Considerations in AI-Based User Profiling for Knowledge Management: A Critical Review. Telematics and Informatics Reports, 100205. ↩
- Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(7), e1000097. ↩
- Boyes, B. (2018, May 18). Using narrative reviews, systematic reviews, and meta-analyses in evidence-based knowledge management (KM). RealKM Magazine. ↩
- Wikipedia, CC BY-SA 4.0. ↩
- C3.ai. (n.d.). What is Local Interpretable Model-Agnostic Explanations (LIME)? C3.ai Glossary. ↩
- Lundberg, S. (2018). Welcome to the SHAP documentation. SHAP. ↩
- Baron, S. (2023, January 27). Philosophers have studied ‘counterfactuals’ for decades. Will they help us unlock the mysteries of AI? The Conversation. ↩
- Ashraf, M., Rady, S., Abdelkader, T., & Gharib, T. F. (2023). Efficient privacy preserving algorithms for hiding sensitive high utility itemsets. Computers & Security, 132, 103360. ↩
- Wikipedia, CC BY-SA 4.0. ↩
- Wikipedia, CC BY-SA 4.0. ↩
- Wikipedia, CC BY-SA 4.0. ↩
- World Economic Forum. (2020, December). Ethics by Design: An organizational approach to responsible use of technology. World Economic Forum White Paper. ↩
- UNESCO. (2023). Ethical Impact Assessment: A Tool of the Recommendation on the Ethics of Artificial Intelligence. Paris: United Nations Educational, Scientific and Cultural Organization. ↩
- Wikipedia, CC BY-SA 4.0. ↩
- Fiesler, C. (2023, April 19). AI has social consequences, but who pays the price? Tech companies’ problem with ‘ethical debt’. The Conversation. ↩