
Taking responsibility for complexity (section 2.4): When to take key decisions: unpredictability and emergent change processes

This article is section 2.4 of a series of articles featuring the ODI Working Paper Taking responsibility for complexity: How implementation can achieve results in the face of complex problems.

Complex problems and the challenges they pose. This section [Section 2] should enable the reader to assess whether their implementation challenge is in fact a ‘complex’ problem, and to identify key characteristics that mark out the appropriate tools for managing the type of complexity faced. It first describes what is meant by a complex problem, and then outlines three specific aspects of complex problems that pose difficulties for traditional policy implementation. It goes into detail on each of these aspects, providing explanations and ideas to help the reader identify whether their policy or programme is complex in this way (Sections 2.3-2.5).

2.4 When to take key decisions: unpredictability and emergent change processes

Many social, political and economic problems are not amenable to detailed forecasting. On a number of issues, processes of change will inevitably entail events and trends that have not been predicted or taken into account; there will always be some amount of discontinuity and surprise. For example, strategies to mitigate and adapt to the future impacts of climate change on a country must work with several layers of uncertainty: the uncertainty about likely impacts inherent in climate data, but also uncertainty about how farmers and other groups will react to changing ecosystems.

Services must nonetheless be delivered and programmes must work without robust, stable knowledge on cause and effect. For some issues, the most appropriate means of addressing the problem are not well understood. This means the full effects and side-effects of policies cannot be anticipated; only some aspects of the future can be foreseen, and many possibilities may be equally plausible in advance. A policy that is optimal under the conditions of first implementation may be less so given the continual flux of change. In addition, our goals may change, as well as our understanding of how to achieve them.

The issue here is when we gain important knowledge to inform action, and when crucial decisions must be made: for complex problems, crucial insights emerge only during an intervention, and it is not possible to be fully confident ex ante that policy or programme decisions will be correct. This means that greater attention must be paid to these questions throughout an intervention, rather than prior to it. There are limits to the value of knowledge production and use before an intervention; the bulk of this effort is better applied during its course. Since the context in which a policy or programme operates is changing continuously, and it is not possible to plan for all eventualities, success can depend on assessing and adapting to emerging signals and changing situations: policy and programming must become better at learning.

Regulations for project and programme approval that place all the emphasis on a large volume of detailed technical assessments prior to the disbursement of funds may not always be appropriate. These kinds of issues have been recognised in development and public policy more generally for a long time. For example, Porter et al.1 argue that development is ‘a moving, evolving multi-faceted thing, and if it was possible to offer an answer today, it would be inappropriate by tomorrow.’ There have been calls for quite some time to shift the balance of effort away from ex ante assessment: Easterly2 famously made strident criticisms of ‘planners’ in development organisations, but well before that a number of experts were already calling for development to be seen as ‘process’3 in order to respond to the complexity of the challenge, with Milton Esman making an influential call to this effect back in 1980. The following quote puts it well:

Our society and all of its institutions are in continuing processes of transformation […] we must learn to understand, guide, influence and manage these transformations. We must make the capacity for undertaking them integral to ourselves and our institutions. We must, in other words, become adept at “learning”. This “learning” should not be seen as a one-off event or a case of acquiring new knowledge or skills, rather it involves ongoing practice and reflection on one’s own experience. Since knowledge of “best practice” cannot be easily imported from elsewhere, all organisations must involve themselves in learning as a “continuous, on-the-job process”.4

Ongoing learning and adaptation are crucial when engaging in the politics of reform, as Merilee Grindle has systematically demonstrated. Focusing on social sector reforms in Latin America, she has shown that sometimes small but often significant ‘room for manoeuvre’ arises from the dynamic and fluid nature of reform processes, which can (at different moments in the process, some predictable and some not) present opportunities for motivated and responsive actors to make decisive interventions and succeed ‘against the odds’ in securing reforms5. Similarly, recent work on sector budget support suggests that a lack of continuous engagement, dialogue and adjustments by professional staff has led to a ‘missing middle,’ adversely affecting the quality of those reforms6.

Unsuitability of traditional tools

Many structures, systems and approaches to implementing policies and programmes are not well suited to such problems. Policy is often driven by an ex ante process of clarifying objectives (assumed to be unambiguous), identifying alternative means of achieving them, modelling the associated costs and benefits, selecting the optimum trade-off and then implementing7. The more difficult the problem, the greater the perceived need for careful planning, intricate assessment, and consultation and negotiation with partners and interest groups before anything is done. Implementation is firmly fixed in advance, with programmes and projects tied to specific activities and outputs that result from extensive, even multi-year, negotiations. Efforts during implementation are then restricted to following a rigid preset schedule and plan of activities. Monitoring and evaluation (M&E) is seen implicitly as a tool for control and compliance first and foremost, with less concern devoted to its potential for helping interventions adapt based on lessons from implementation8.

These approaches assume that causality is well-established, and that the dynamics of the problem being addressed are readily predictable. For example, a well-known critique of popular planning methods such as the log frame is that they assume higher powers of foresight than are in fact possible, meaning that projects and programmes are overly rigid from the outset owing to detailed goal definition and action planning, and require specific performance targets for variables that are not possible to predict with such accuracy9.

Box 5: Results-based management

Results-based management (RBM) swept across the public sector in Organisation for Economic Co-operation and Development (OECD) countries in the 1990s as part of extensive public sector reforms10. It has also been adopted to a greater or lesser extent by most bilateral and multilateral development agencies11. While there are a number of different formulations of exactly what it involves, it is generally a broad organisational performance management strategy that emphasises the measurement of results at various levels, and the use of that information to prove and improve performance. The idea is that it can be the basis for replicating successful projects, scaling up what works and scaling back what does not, through alignment with budgeting procedures, management decisions and individual incentives. It is thus hoped that RBM will enable implementation processes to become more adaptive to evidence about what works.

Unfortunately, the evidence is that RBM has not functioned well as a feedback loop, especially for complex problems. Experience shows that agencies using RBM tend to have success in formulating and clarifying high-level goals and objectives; aligning programme- and project-level goals; and performance measurement, monitoring and reporting. However, repeated evaluations show that there is frequently very little use of performance information for accountability, or for decision-making and project/programme adjustment12,13. The ‘utilisation problem,’ which dampened hopes of improving policy through evaluation from the 1970s onwards, has been recurrent in relation to RBM and performance frameworks. This is not unique to the development sector, but is a common finding wherever performance management has been introduced in the public sector worldwide14,15.

Recent reviews of the use of performance frameworks in developed country settings show that a large proportion of frameworks and indicators are of poor quality16. It is unlikely that this is wholly a matter of appropriate tools being poorly applied; it is more plausibly a sign that inappropriate tools are being used. RBM relies on the assumptions that goals can be defined and specified unambiguously in terms of clear quantifiable indicators, that funding can be driven by predicted results and that the effects of an agency’s work can be aggregated neatly into some overall attributed impact. These assumptions are inappropriate for complex problems, and applying RBM in such contexts almost seems an attempt to ‘assume away’ the messiness of complex issues.

On the one hand, these unrealistic expectations can render implementation tools irrelevant. Despite extensive efforts put into analysis, assessment and consultation before an intervention, implementation presents a series of new and unexpected challenges. Strategies are made and plans written, and then left on the shelf until reporting cycles come around, with the ‘real’ work going unrecorded. Practitioners pay little attention to the log frame until it is time to report, at which point efforts generally focus on reproducing what the log frame promised, rather than on genuine investigation of the effects of the intervention17.

Learning and adjustment do nonetheless go on throughout the lifetime of a programme or policy, but the realities of flexibility and adaptation that go into effective development work on the ground are unseen at higher levels. Planning, monitoring and evaluation (PME) becomes a tick-box exercise (to fit in with the unrealistic assumptions embedded in the tools), drawing effort away from the ‘real work’ in order to justify projects ex post and explain how everything went according to the plan initially set out (whether or not this was in fact the case). Studies show how, even in high-capacity organisations, ex ante impact assessments, if employed in a rigid manner, become simply a ‘hoop’ to jump through: a kind of ritual seen as a necessary step in ‘scientific’ management but one with very little real use. Plans then go out of the window as problems shift and the context moves on. This is because, instead of being well-ordered processes in which ideas are debated and then translated into concrete action, many policy initiatives, of all sizes, are driven by crises or unforeseen events, with far-reaching decisions made in ‘real time.’ Simple, preset indicators often bear little relation to the underlying change processes.

This means that implementing agencies generate reams of barely relevant information, causing information overload and a potentially serious waste of time and money. In many instances, RBM has even hindered the use of results information to improve practice. Many formal and informal incentives within public service organisations push individuals and teams to try to achieve success within short timeframes and to claim responsibility for it18. Studies have shown that, in this context, M&E is often carried out to ‘prove not improve’: for example, monitoring activities frequently revolve around reporting on expected indicators as predefined in a log frame, rather than providing real space to look at the unfolding effects and side-effects of an intervention19. Impact evaluations are used most frequently to legitimise existing spend, rather than to provide direct inputs into programmes and decision-making20.

On the other hand, and worse, these tools can have adverse effects on interventions. First, fixing implementation to a high level of detail in advance means project managers will find it difficult to respond to emerging opportunities or to adapt to changing circumstances, being ‘locked in’ to specific deliverables that may no longer be relevant. Moreover, in the context of complex problems, excessive focus on accountability for results can damage the effectiveness of interventions in the short and long term21. The idea that setting specific and challenging goals for individuals improves performance holds only under certain conditions, and goal setting can have adverse side-effects otherwise: where goals are too narrow relative to the nature of the problem being addressed, or too short-term, they can create perverse incentives; where they are too challenging, they can reduce risk taking as well as motivation and commitment22.

Many commentators have argued for some time that it is inappropriate in complex situations to hold projects/programmes to account for ‘impacts’ that cannot feasibly be predicted, and over which an individual programme may have only a limited amount of influence23,24. In the face of complex, multidimensional problems, where change may come about over long timeframes and as a result of a combination of efforts25, setting appropriate goals is a highly difficult task; in the face of novel problems, which are again more likely to be complex, the task is even harder. It is not surprising, therefore, that a comprehensive synthesis of the literature on the effects of goal setting finds that setting specific targets and goals for a complex task tends to inhibit learning, degrade performance, dissuade individuals from trying alternative methods, stifle creativity and flexibility in implementation and create a culture of reduced collaboration and relationship building26,27. It may be, then, that imposing RBM is in many cases not an appropriate way to deal with complex problems.

Moreover, M&E currently tends to be carried out in a context in which something not going to plan is seen as a ‘failure’ and an embarrassment for staff, rather than as an opportunity to improve understanding of a problem. This leads to defensiveness in front of evaluators or, where staff monitor indicators and carry out reviews themselves, an unwillingness to take a genuine step back to reflect on what has worked well and what has not, to ask why things have occurred and to examine the processes that have led from an intervention to different intended and unintended impacts. Worse still, the incentives for implementation may be skewed towards ‘low-hanging fruit’: taking a risk-averse approach and focusing predominantly on elements of issues that are not complex, to the detriment of the overall delivery of results.

Next part (section 2.5): How are problems understood? Conflicting perspectives and divergent goals.


Article source: Jones, H. (2011). Taking responsibility for complexity: How implementation can achieve results in the face of complex problems. Overseas Development Institute (ODI) Working Paper 330. London: ODI. (https://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/6485.pdf). Republished under CC BY-NC-ND 4.0 in accordance with the Terms and conditions of the ODI website.

References and notes:

  1. Porter, D., Allen, B. and Thompson, G. (1991). Development in Practice: Paved with Good Intentions. London: Routledge.
  2. Easterly, W. (2006). The White Man’s Burden: Why the West’s Efforts to Aid the Rest Have Done So Much Ill and So Little Good. London: Penguin Press.
  3. Mosse, D., Farrington, J. and Rew, A. (1998). Development as Process: Concepts and Methods for Working with Complexity. London: Routledge and ODI.
  4. Chapman, J. (2004). System Failure: Why Governments Must Learn to Think Differently. London: DEMOS.
  5. Grindle, M. (2004). Despite the Odds: The Contentious Politics of Education Reform. Princeton, NJ: Princeton University Press.
  6. Williamson, T. and Dom, C. (2010). ‘Making sector budget support work for service delivery: an overview’. ODI Project Briefing 36. London: Overseas Development Institute and Oxford: Mokoro.
  7. Chapman, J. (2004). System Failure: Why Governments Must Learn to Think Differently. London: DEMOS.
  8. Bakewell, O. and Garbutt, A. (2004). The Use and Abuse of the Logical Framework Approach. Stockholm: Sida.
  9. Bakewell, O. and Garbutt, A. (2004). The Use and Abuse of the Logical Framework Approach. Stockholm: Sida.
  10. OECD DAC (2000). ‘Results-based Management in the Development Co-operation Agencies: A Review of Experience.’ Paris: OECD DAC.
  11. Hailey, J. and Sorgenfrei, M. (2004). ‘Measuring Success: Issues in Performance Measurement.’ Oxford: INTRAC.
  12. Thomas, P. (2007). ‘Why is Performance-based Accountability So Popular in Theory and Difficult in Practice?’ World Summit on Public Governance: Improving the Performance of the Public Sector. Taipei, 1-3 May.
  13. An evaluation of RBM across the UN system found performance information was ‘of little practical utility to programme managers and operational decision-making’ and ‘achievement or non achievement of programme objectives ultimately has few consequences for resource allocation, work planning or assessment of managerial performance’, OIOS (2008). ‘Review of Results-based Management at the UN.’ Washington, DC: OIOS.
  14. OECD DAC (2000). ‘Results-based Management in the Development Co-operation Agencies: A Review of Experience.’ Paris: OECD DAC.
  15. Thomas, P. (2007). ‘Why is Performance-based Accountability So Popular in Theory and Difficult in Practice?’ World Summit on Public Governance: Improving the Performance of the Public Sector. Taipei, 1-3 May.
  16. OIOS (2008). ‘Review of Results-based Management at the UN.’ Washington, DC: OIOS.
  17. Bakewell, O. and Garbutt, A. (2004). The Use and Abuse of the Logical Framework Approach. Stockholm: Sida.
  18. Smutylo, T. (2001). ‘Crouching Impact, Hidden Attribution: Overcoming Threats to Learning in Development Programs.’ Ottawa: Evaluation Unit, IDRC.
  19. Bakewell, O. and Garbutt, A. (2004). The Use and Abuse of the Logical Framework Approach. Stockholm: Sida.
  20. Jones, N., Jones, H., Steer, L. and Datta, A. (2009c). ‘Improving Impact Evaluation Production and Use.’ Working Paper 300. London: ODI.
  21. Ebrahim, A. (2005). ‘Accountability Myopia: Losing Sight of Organizational Learning.’ Nonprofit and Voluntary Sector Quarterly 34(1): 56-87.
  22. Ordonez, L., Schweitzer, M., Galinsky, A. and Bazerman, M. (2009) ‘Goals Gone Wild: The Systematic Side-effects of Over-prescribing Goal-setting.’ Working Paper 09-083. Cambridge, MA: Harvard Business School.
  23. Earl, S., Carden, F. and Smutylo, T. (2001). Outcome Mapping: Building Learning and Reflection into Development Programs. Ottawa: IDRC.
  24. Jones, N., Jones, H., Steer, L. and Datta, A. (2009c). ‘Improving Impact Evaluation Production and Use.’ Working Paper 300. London: ODI.
  25. ‘A common mistake in complex systems is to assign blame or credit to a small part of the system, when in fact the entire system is responsible; one of the most important elements of any policy discussion is the specific incentives facing individual agents’, Axelrod, R. and Cohen, M. (2000), Harnessing Complexity: Organizational Implications of a Scientific Frontier. New York: Basic Books.
  26. APSC (2009). ‘Delivering Performance and Accountability, Contemporary Government Challenges.’ Canberra: APSC.
  27. Kamarck, E. (2007). The End of Government … As We Know It: Making Public Policy Work. Boulder, CO: Lynne Rienner.

Harry Jones

Author of the Overseas Development Institute (ODI) paper "Taking responsibility for complexity: How implementation can achieve results in the face of complex problems."
