Critical Eye is a semi-regular feature where RealKM analyses and discusses the methodology and science behind claims made in publications.
In a recent exchange on the KM4Dev discussion group, a member asked a very reasonable question:
I am trying to encourage some local institutions and communities to curate their information/knowledge. Where can I get more evidence showing why curation is such an important thing to do? Examples or experiences will be gratefully appreciated.
RealKM emphasises the importance of evidence as a critical part of the decision-making process, but the nature of the evidence presented also needs to be closely scrutinised. Is the evidence anecdotal or experimental? Is it peer-reviewed? Has it been repeated in other contexts? Is it reasonably free of bias?
Knowledge management as a discipline has a mixed track record in seeking out high-quality evidence for its tools, techniques and practices. While the core of KM rests on well-established scientific predecessors, including complexity theory and library science, the whole can sometimes add up to less than the sum of its parts.
The community response to the question posed above has been disappointingly typical (although this is not meant as criticism of the specific individuals involved – I am sure I have been guilty of similar responses myself). Remember – the question specifically asked for evidence. Responses provided so far include:
- A link to Jaap Pels’ KM framework which consists of a range of vague assertions with little backing evidence.1
- A link to a blog post by Ewen Le Borgne discussing how individuals learn, again with little backing evidence. The models described are “treated as true”, with the main supporting document describing a “theory of change” in a 2011 working paper on applying knowledge management to the West Africa Water Supply, Sanitation and Hygiene Initiative (WA-WASH). That paper also presents the components of its theory of change “as given”. Oddly, despite recommending the documentation of lessons learned, it makes no reference to the 20 WASH lessons learned documented and published back in 1993.
- A discussion around the definition of ‘curation’.
- A very practical description by Beth Kanter of “how to do curation”, with some matrices and charts that lend a feeling of authority, but again little underlying justification for the approach. The links do provide some anecdotal evidence that curation can help a person or organisation become the go-to authority on a topic, and they assert that this act has value.
- A personal anecdote asserting value in the contributor’s own context, while acknowledging the lack of rigorous analysis of the benefits.
- A link to a specific implementation of curated content, and the reasons that approach was taken. No benefits have yet been established; at the time of the post, the implementation appeared to be in its early days.
In general, the responses make very little reference to evidence. We do, however, appear to have a working hypothesis about content curation. It is probably best expressed in its aspirational form as:
In a world submerged by a flood of information, content curators may provide in the coming months and years a new, tremendously valuable service to anyone looking for quality information online: a personalized, qualified selection of the best and most relevant content and resources on a very specific topic or theme.
How would we find evidence to test this hypothesis? Let’s break it down into components:
- the world is submerged by a flood of information
- the act of content curation (i.e. filtering, selecting and ensuring the availability of information) improves the quality of information
- making quality information available and accessible is a valuable act for institutions and communities
The assertion of information overload is comparatively easy to assess and test, so this gets a tick. The improvement of quality through curation obviously depends on who is doing the curating, but it is also fairly easy to test whether the outputs of curation meet predefined, objective quality standards. So this is not a problem either.
This leaves evidence of value, one of the longtime bugbears of KM and really the crux of the original problem posed. In the absence of quantitative evidence, is it possible to at least show that acts of curation can be expected to lead to desirable outcomes? Here are three strong candidate claims:
- Curation reduces the number of options presented, making decision-making easier
- Curation facilitates trust-based networks which reduce people’s information overload
- Curation enhances the trust and reputation of an organisation with others
Each of these claims has both a business rationale and research evidence to accompany it. I think it is essential that knowledge management becomes more demanding about seeing evidence to support its practices when deciding how best to solve its problems. Unfortunately, we have become complacent, assuming the value of our techniques without asking how that value might be realised in practice.