This article is part of section 3.2 of a series of articles featuring the ODI Working Paper Taking responsibility for complexity: How implementation can achieve results in the face of complex problems.
The shift in approach required is to see an intervention as an expression of hypotheses and assumptions. While attempting to deliver a service or achieve a goal, it is important to do all that is possible to examine the questions a policy poses and to seize opportunities to assess how robust and relevant those hypotheses are. This idea is at the heart of approaches to policy-making that see policy as an experiment. For example, Rondinelli [1] proposes that development policies be seen as ‘social experiments,’ arguing that, given the underlying uncertainty in delivering change through policy, development projects should be used to learn from as we would from experiments, redefining problems and solutions along the way.
In the short term, there may be a trade-off between learning and achieving objectives, so some ‘slack’ may be required to enable an intervention to be based on a better understanding of the situation. What is crucial is that policies place explicit value on knowledge and learning as outcomes of activities, and that this learning is channelled directly back into policy development. There are ethical issues here, which more traditional approaches to policy experimentation have already examined in detail. These issues should not be dismissed lightly: there will always be difficult trade-offs between delivery and learning in complex problems, but there are also ways of mitigating them [2].
The incentives around M&E [3] are crucial: where recognising that not everything went to plan is seen as a ‘failure’ to be avoided, staff are unlikely to reflect genuinely on issues. An alternative is to treat a project that seems to be ‘flagging’ on performance measures as an opportunity for learning and further assistance – for example, by triggering additional support and expertise. The way M&E is integrated with accountability requirements is also crucial. As we have seen, RBM [4] may not be appropriate in the face of complex problems: measuring the effects and impact of implementation is important, but individual, team or organisational performance must not be judged solely on whether outcomes were achieved – outcomes are frequently outside the exclusive control of any one actor, so holding actors accountable for them is counterproductive [5]. The evidence is not conclusive, but complex tasks may require learning objectives rather than performance goals [6].
Governments and aid agencies must allow greater diversity in practice – RBM may still be appropriate for some areas of work but not for others. Another possibility is to use ‘multidimensional’ performance frameworks, such as the UK Department for Environment, Food and Rural Affairs’ (DEFRA’s) ‘stretching the web’ tool, which compares projects on a range of measures across a number of dimensions, or the ‘balanced scorecard’ and ‘performance prism’ approaches, which take into account aspects such as stakeholder satisfaction, internal processes and strategies, and innovation and learning, as well as achievement of objectives [7].
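To illustrate the general idea (not any agency's actual tool), a multidimensional assessment can be sketched as a profile of scores across several dimensions rather than a single outcome figure. The dimension names and scores below are hypothetical:

```python
from statistics import mean

# Hypothetical dimensions, loosely echoing the balanced scorecard /
# performance prism idea; not DEFRA's actual 'stretching the web' tool.
DIMENSIONS = [
    "achievement_of_objectives",
    "stakeholder_satisfaction",
    "internal_processes",
    "innovation_and_learning",
]

def assess(scores):
    """Summarise a project as a profile across all dimensions,
    flagging any dimension that was not measured at all."""
    missing = [d for d in DIMENSIONS if d not in scores]
    covered = [scores[d] for d in DIMENSIONS if d in scores]
    return {
        "profile": {d: scores.get(d) for d in DIMENSIONS},
        "mean_score": mean(covered) if covered else None,
        "missing": missing,
    }

report = assess({
    "achievement_of_objectives": 0.6,
    "stakeholder_satisfaction": 0.8,
    "innovation_and_learning": 0.9,
})
```

The point of keeping the full profile, rather than collapsing it to one number, is that a project weak on delivered objectives but strong on innovation and learning remains visible as such.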
Finally, variation is something that bureaucracies do not deal with well: they may view it as promoting inconsistency and waste. For complex problems, it is crucial to ensure sufficient redundancy over (static measures of) efficiency. While it may be clear in hindsight which projects succeeded and which did not individually provide good ‘value for money,’ when faced with significant uncertainty it is only sensible to invest in a broad range of options before focusing resources on those that have proven to work [8]. Promoting and incentivising innovation in service delivery will be vital.
Variation should be seen as the engine of learning; learning gained from a ‘failed’ project should be valued highly; and ensuring sufficient redundancy should be seen as the only responsible approach to programming in complex domains. Aside from exhortations to ‘allow for variation,’ another suggestion is that agencies operate with ‘parallel project selection,’ where there are a number of ‘decision centres’ in the agency, and acceptance by any one is sufficient for the project to be funded [9].
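A minimal sketch of how ‘parallel project selection’ differs from requiring unanimous approval – the acceptance rule and centre behaviour here are invented for illustration, not drawn from the paper:

```python
import random

def centre_accepts(bias, quality, rng):
    # Each decision centre weighs the same proposal somewhat differently.
    return quality + rng.uniform(-bias, bias) > 0.5

def simulate(n_proposals, centre_biases, seed=0):
    """Compare funding under 'any one centre suffices' (parallel
    selection) with 'all centres must agree' (a single gatekeeper)."""
    rng = random.Random(seed)
    funded_parallel = funded_unanimous = 0
    for _ in range(n_proposals):
        quality = rng.random()
        votes = [centre_accepts(b, quality, rng) for b in centre_biases]
        funded_parallel += any(votes)   # one acceptance is enough
        funded_unanimous += all(votes)  # consensus required
    return funded_parallel, funded_unanimous

parallel, unanimous = simulate(1000, [0.1, 0.3, 0.5])
```

By construction, parallel selection funds at least as many proposals as a unanimous rule applied to the same votes, which is the point: more varied bets get a chance, and the learning from those that fail is part of the return.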
Next part (section 3.3): How? A toolkit for negotiated learning.
See also these related series:
- Exploring the science of complexity
- Planning and strategy development in the face of complexity
- Managing in the face of complexity.
Article source: Jones, H. (2011). Taking responsibility for complexity: How implementation can achieve results in the face of complex problems. Overseas Development Institute (ODI) Working Paper 330. London: ODI. (https://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/6485.pdf). Republished under CC BY-NC-ND 4.0 in accordance with the Terms and conditions of the ODI website.
References and notes:
1. Rondinelli, D. (1993). Development Projects as Policy Experiments: An Adaptive Approach to Development Administration. London: Routledge.
2. For example, randomised controlled trials (RCTs) involve randomly assigning an intervention to some beneficiaries while also studying certain indicators in a set of potential beneficiaries who do not receive the intervention. The issue here is that this could be seen as denying the intervention to some people and prioritising something as nebulous as ‘testing a hypothesis’ ahead of delivering crucial services and (in some situations) potentially saving lives. There are two considerations here. First, it is possible to actively design policies so as to learn about their effects without necessarily denying services to those who need them. For example, RCTs sometimes employ a ‘pipeline’ design, where the intervention is rolled out to one group before another, rather than instead of it. Second, in situations where planning cannot be complete, with the best will in the world it may simply not be possible to understand how best to deliver crucial services without some directed learning, and it could be more unethical to pour large amounts of public resources into a course of action that represents only an initial guess at how to tackle the problem (this idea is at the heart of policy pilots, for example, which are carried out before interventions are rolled out at large scale).
3. Monitoring and evaluation.
4. Results-based management.
5. Lerner, J. and Tetlock, P. (1999). ‘Accounting for the Effects of Accountability.’ Psychological Bulletin 125(2): 255-275.
6. Ordonez, L., Schweitzer, M., Galinsky, A. and Bazerman, M. (2009). ‘Goals Gone Wild: The Systematic Side-effects of Over-prescribing Goal-setting.’ Working Paper 09-083. Cambridge, MA: Harvard Business School.
7. Hailey, J. and Sorgenfrei, M. (2004). ‘Measuring Success: Issues in Performance Measurement.’ Oxford: INTRAC.
8. Beinhocker, E. (2006). The Origin of Wealth: Evolution, Complexity, and the Radical Remaking of Economics. Cambridge, MA: Harvard Business Press.
9. Ellerman, D. (2006). ‘Rethinking Development Assistance: Networks for Decentralized Social Learning.’
10. Bourgon, J. (ed.) (2010). ‘The New Synthesis Project: Preparing Government to Serve Beyond the Predictable. Brazil Roundtable Report.’ Waterloo: University of Waterloo.