
Four dimensions and five building blocks for responsible AI in knowledge management
This article is part of an ongoing series looking at AI in KM, and KM in AI.
Recent RealKM Magazine articles have identified that ethics is a significant challenge1 in implementing artificial intelligence (AI) in knowledge management (KM), and provided strategies and recommendations2 for managing these ethical considerations.
If the use of AI in KM continues in the absence of such strategies, it is only a matter of time before knowledge managers face legal action in the courts, public criticism in the media, or both, as highlighted by the troubling case of a KM organisation3. Indeed, if that KM organisation were based in Australia, such consequences would very likely have already come to bear, causing serious reputational damage for KM as a whole.
And, if anyone in KM is thinking that this isn’t that important or it’s not their problem, then a disturbing case of dark side KM4 is further food for thought.
To further assist the management of ethical considerations in the use of AI in KM, this article provides four dimensions and five building blocks for responsible AI. “Responsible AI” (RAI) refers to5 “an array of principles, practices, and standards that help ensure AI technologies and products are developed and used safely, ethically, and in line with societal expectations.”
Organizations, including KM organizations, should use these frameworks and those linked above to develop and implement AI strategies that have ethical and responsible AI at their core.
Four dimensions of responsible AI
A 2013 paper6 in the journal Research Policy puts forward four dimensions of responsible innovation that can be applied to the innovation of AI. The dimensions do not float freely, but must connect as an integrated whole. They are:
- Anticipation – The detrimental implications of new technologies are often unforeseen, and risk-based estimates of harm have commonly failed to provide early warnings of future effects. Anticipation prompts knowledge managers to ask ‘what if…?’ questions and to consider contingency: what is known, what is likely, what is plausible, and what is possible. Foresight, technology assessment, horizon scanning, and scenario planning can be important techniques. Some scholars have also suggested that socio-literary techniques drawing on science fiction may be a powerful way to democratise thinking about the future.
- Reflexivity – This means holding a mirror up to one’s own activities, commitments, and assumptions, being aware of the limits of knowledge, and being mindful that a particular framing of an issue may not be universally held. Mechanisms such as codes of conduct, moratoriums, and the adoption of standards may build this reflexivity.
- Inclusion – The waning of the authority of expert, top-down policy-making has been associated with a rise in the inclusion of new voices in the governance of science and innovation. These small-group processes of public dialogue include consensus conferences, citizens’ juries, deliberative mapping, deliberative polling, and focus groups. They can enable public debate to take place ‘upstream’ in the scientific and technological process.
- Responsiveness – Responsible innovation requires a capacity to change shape or direction in response to stakeholder and public values and changing circumstances. Responsiveness is about adjusting courses of action while recognising the insufficiency of knowledge and control. Diversity is an important feature of productive, resilient, adaptable, and therefore responsive innovation systems. Responsible innovation should not just welcome diversity; it should nurture it.
Five building blocks for responsible AI
In 2024, the Center for Democracy & Technology (CDT) brought together over 30 practitioners for a workshop and interviews. The group included socio-technical researchers, designers, policy experts, technical leaders, and compliance and legal personnel, with experience working on efforts related to responsible AI, AI ethics, and AI safety. Practitioners came from a range of organizations across industry, civil society, and government.
As described in a newly published playbook7, insights from the workshop and interviews revealed five building blocks – the five P’s – needed to operationalise responsible AI:
- People: Empower your experts. Responsible AI goals are best served by multidisciplinary teams that contain varied domain, technical, and social expertise. Rather than seeking “unicorn” hires with all dimensions of expertise, organizations should build interdisciplinary teams, ensure inclusive hiring practices, and strategically decide where RAI work is housed (i.e., whether it is centralized, distributed, or a hybrid). Embedding RAI into the organizational fabric and ensuring practitioners are sufficiently supported and influential is critical to developing stable team structures and fostering strong engagement among internal and external stakeholders.
- Priorities: Thoughtfully triage work. For responsible AI practices to be implemented effectively, teams need to clearly define the scope of this work, which can be anchored in both regulatory obligations and ethical commitments. Teams will need to prioritize across factors like risk severity, stakeholder concerns, internal capacity, and long-term impact. As technological and business pressures evolve, ensuring strategic alignment with leadership, organizational culture, and team incentives is crucial to sustaining investment in responsible practices over time.
- Processes: Establish structures for governance. Organizations need structured governance mechanisms that move beyond ad-hoc efforts to tackle emerging issues posed in the development or adoption of AI. These include standardized risk management approaches, clear internal decision-making guidance, and checks and balances to align incentives across disparate business functions. Processes should layer formal methods (e.g., audits, review checkpoints) with informal ones (e.g., ethical norms, internal culture) to support consistency and institutional memory required for effective AI governance.
- Platforms: Invest in responsibility infrastructure. To scale responsible practices, organizations will be well-served by investing in foundational technical and procedural infrastructure, including centralized documentation management systems, AI evaluation tools, off-the-shelf mitigation methods for common harms and failure modes, and post-deployment monitoring platforms. Shared taxonomies and consistent definitions can support cross-team alignment, while functional documentation systems make responsible AI work internally discoverable, accessible, and actionable. Infrastructure that balances automation with the need for human oversight is particularly crucial for navigating high-stakes contexts.
- Progress: Track efforts holistically. Sustaining support for and improving responsible AI practices requires teams to diligently measure and communicate the impact of related efforts. Tailored metrics and indicators can be used to help justify resources and promote internal accountability. Organizational and topical maturity models can also guide incremental improvement and institutionalization of responsible practices; meaningful transparency initiatives can help foster stakeholder trust and democratic engagement in AI governance.
Header image source: Created by Bruce Boyes with Microsoft Designer Image Creator.
References:
- Boyes, B. (2025, August 27). What are the key challenges in implementing AI in KM, and how can they be addressed? RealKM Magazine. ↩
- Boyes, B. (2025, September 3). Strategies and recommendations for managing ethical considerations in the use of AI in KM. RealKM Magazine. ↩
- Boyes, B. (2025, March 13). Troubling case highlights that the KM community lacks key capabilities needed for safe AI use. RealKM Magazine. ↩
- Boyes, B. (2025, July 31). Ignoring harm, saving face: strategies of non-knowledge and knowledge avoidance, and what they mean for KM. RealKM Magazine. ↩
- Cibralic, B., Bogen, M., Bankston, K., & Joshi, R. (2025, September). Principled Practice: A Playbook for Operationalizing Responsible AI. AI Governance Lab, Center for Democracy & Technology. ↩
- Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568-1580. ↩
- Cibralic, B., Bogen, M., Bankston, K., & Joshi, R. (2025, September). Principled Practice: A Playbook for Operationalizing Responsible AI. AI Governance Lab, Center for Democracy & Technology. ↩
