This article is part of section 3.2 of a series of articles featuring the ODI Working Paper Taking responsibility for complexity: How implementation can achieve results in the face of complex problems.
Theory and practice show that, for complex problems, improved links between knowledge and policy (i.e. strengthened feedback loops between implementation and lessons from experience) come from enhanced personal links between knowledge producers and users, in an atmosphere of trust and reciprocity1. Promoting the ongoing adaptation of interventions to unfolding events and signals therefore needs to focus on promoting buy-in and ownership throughout the implementation chain or network – increasing uptake of evidence through ‘internal’ incentives.
This highlights the importance of embedding M&E2 in policies and programmes. Timing is often the most crucial factor in facilitating the uptake of evaluations3, so, especially given the unpredictable nature of change in complex settings, full-time M&E capacity within implementation teams will be crucial. This could be done by providing teams with drawdown M&E advice or capacity, for example an internally focused ‘evaluation help desk’ to help teams design and implement M&E where needed. However, given the need to ensure ‘intelligent customers’ as well as buy-in, it may also be crucial that one or more members of an implementation team have the required skills and capacities.
One way forward that promises to take all of the above into account is developmental evaluation4, which involves embedding evaluators in implementation teams for a period of time. Rather than focusing on testing and validating programme models, the intention is to support the development of innovations and the adaptation of interventions; the evaluation is carried out in the spirit of providing a ‘reality test’, engaging policy and programme staff in the process, nurturing their hunger for learning and building their ongoing capacity to think and engage evaluatively.
Another priority for improving ownership of M&E is allowing devolved units to shape their own M&E processes and frameworks, rather than forcing them to use one particular tool or another. Although it will be crucial to ensure that M&E and learning are central pillars of implementation and management, choices over how to do this should be decentralised where possible. At present, the choice of M&E framework is often highly restricted and indicators are often standardised. Again, actors at all levels will need guidance in choosing the best method for their situation.
Schön5 discusses a US regional medical programme that reformed implementation using a systems theory model. Programme design was already decentralised, and longstanding problems in the original scheme were resolved by also devolving performance evaluation to the regions. Agencies still had to carry out M&E and demonstrate performance, but allowing them to do this on their own terms, respecting differences in context, experience and opportunities, generated far more enthusiasm and commitment to change, and was ultimately a key factor in programme success.
Institutional context and relationships are key. RAPID’s6 case study on the development of the sustainable livelihoods methodology7 shows that a close and ongoing relationship between policy organisations and academic organisations was central to reorienting practice. Similarly, one key function of think-tanks is to provide a ‘safe’ space for political parties to discuss and test ideas, and then to look at how to turn them into a policy programme8.
Next part (section 3.2.4): Implementation as an evolutionary learning process.
See also these related series:
- Exploring the science of complexity
- Planning and strategy development in the face of complexity
- Managing in the face of complexity.
Article source: Jones, H. (2011). Taking responsibility for complexity: How implementation can achieve results in the face of complex problems. Overseas Development Institute (ODI) Working Paper 330. London: ODI. (https://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/6485.pdf). Republished under CC BY-NC-ND 4.0 in accordance with the Terms and conditions of the ODI website.
References and notes:
- Michaels, S. (2009). ‘Matching Knowledge Brokering Strategies to Environmental Policy Problems and Settings.’ Environmental Science and Policy 12(7): 994-1011. ↩
- Monitoring & Evaluation. ↩
- Jones, N., Jones, H., Steer, L. and Datta, A. (2009). ‘Improving Impact Evaluation Production and Use.’ Working Paper 300. London: ODI. ↩
- Patton, M. (2010). Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: Guilford Publications. ↩
- Schön, D. (1973). Beyond the Stable State. New York: W.W. Norton & Company. ↩
- Research and Policy in Development. ↩
- Solesbury, W. (2003). Sustainable Livelihoods: A Case Study of the Evolution of DFID Policy. ODI Working Paper 217, London: ODI. ↩
- Mendizabal, E. and Sample, K. (2009). ‘Thinking Politics: Think Tanks and Political Parties in Latin America.’ London: ODI. ↩