
Taking responsibility for complexity (section 3.3.2): Focusing on how change happens

This article is part of section 3.3 of a series of articles featuring the ODI Working Paper Taking responsibility for complexity: How implementation can achieve results in the face of complex problems.

Where knowledge about cause and effect is limited or incomplete, implementation processes must tie analytical and management efforts to explicit questions about how change happens in their context. Since how change will happen cannot be taken for granted, it is essential to make the ideas and assumptions underlying implementation explicit, and to test and reflect on them purposefully.

One tool for the PME [1] of projects and programmes is developing a ‘theory of change’ (ToC) [2] – a model of how the project or programme activities are envisaged to result in the desired changes [3]. A ToC is an essential tool for the M&E [4] of complex activities, both from the perspective of enhancing decision-making and improving projects in an iterative way, and from the point of view of reporting and accountability to external stakeholders:

  • Improving projects: In complex situations, project and programme managers face ambiguity: the available knowledge and information support several different interpretations at the same time. Teams therefore need to come together to question their models of change, their underlying assumptions and the relevance of their goals. It is important to discuss explicitly how an issue is framed, whether interpretations truly follow from the available data, and what is missing or uncertain.
  • Accountability and reporting: Providing a clear statement of strategy and direction, and analysing a project’s expectations for change, is an important part of evaluating that project [5]. Moreover, a completed ToC lays out a number of dimensions and intermediate outcomes against which the project’s influence can be measured, and a variety of areas that may yield key performance indicators for judging whether it is achieving the intended outcomes.

There are ToCs implicitly embedded in most PME tools, so it is crucial to choose tools whose ToC fits the context and problem at hand. For example, Outcome Mapping (OM) is based on an actor-centred ToC, which is relevant where actors are the driving force for change – for example, where policy-influencing actors and their relationships, networks, perspectives and interests are key factors in shaping outcomes. Gearing the ToC around actors provides a clear and concrete focus for M&E activities, measured by the changes in behaviour, actions and relationships of those individuals, groups or organisations with whom the initiative is working directly and seeking to influence [6].
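
To illustrate what an actor-centred focus can mean in practice, here is a small hypothetical sketch (invented actor names and behaviour markers, loosely inspired by OM rather than reproducing its method) that records observed changes in the behaviour of an actor an initiative seeks to influence:

```python
# Hypothetical sketch of an actor-centred M&E record, loosely inspired by
# Outcome Mapping: progress is tracked as observed changes in the behaviour
# and relationships of the actors an initiative seeks to influence.
# Actor names and desired changes are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Actor:
    name: str
    desired_changes: list                          # behaviours the initiative hopes to see
    observed: list = field(default_factory=list)   # (date, change) records

    def progress(self) -> float:
        """Share of desired behaviour changes observed so far."""
        seen = {change for _, change in self.observed}
        return len(seen & set(self.desired_changes)) / len(self.desired_changes)

ministry = Actor(
    name="Ministry of Water",
    desired_changes=["consults farmer groups", "funds local maintenance"],
)
ministry.observed.append(("2011-03", "consults farmer groups"))
print(f"{ministry.name}: {ministry.progress():.0%} of desired changes observed")
```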

Moreover, it will frequently be wise to develop ToCs explicitly for the particular programme – otherwise causal connections may be left implicit and untested. For example, the ‘causal chain’ approach to planning (e.g. in a log frame) is based on the assumption that change is best understood through a succession of events and outcomes, but the actual theoretical content and hypotheses about causal links generally remain implicit [7]. Rogers [8] provides a wealth of guidance on fitting ToCs to complex challenges, such as incorporating simultaneous causal strands (two or more chains of events that are all required for the intervention to succeed) or alternative causal strands (where a programme could work through one path or another). The emphasis should not be on making things highly intricate, but rather on providing a realistic and intuitive model that clearly sets out a team’s assumptions and ideas about change.
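
To make these strand structures concrete, the sketch below (a minimal illustration with hypothetical outcome names, not drawn from the source) represents a ToC that combines simultaneous (AND) and alternative (OR) causal strands, and checks it against observed intermediate outcomes:

```python
# A minimal sketch (not from the source) of a theory of change with
# simultaneous causal strands (AND: every branch must hold) and
# alternative causal strands (OR: any one path can deliver change).
# All outcome names are hypothetical.

from typing import Union

# A node is either a named intermediate outcome, or an ("AND"/"OR", children) pair.
ToC = Union[str, tuple]

theory_of_change: ToC = (
    "AND",
    [
        "community_groups_formed",      # required strand
        ("OR", [                        # alternative strands
            "policy_uptake_by_ministry",
            "adoption_by_local_networks",
        ]),
    ],
)

def supported(node: ToC, observed: set) -> bool:
    """Is the desired change supported, given the observed intermediate outcomes?"""
    if isinstance(node, str):
        return node in observed
    operator, children = node
    results = [supported(child, observed) for child in children]
    return all(results) if operator == "AND" else any(results)

observed = {"community_groups_formed", "adoption_by_local_networks"}
print(supported(theory_of_change, observed))  # True: one alternative path suffices
```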

Preferred methods for evaluation may also need to shift to ensure a better understanding of complex problems. First, theory-based evaluation and programme theory evaluation (e.g. Funnell and Rogers [9]) are the natural partners of ToC approaches to planning. Moreover, in complex contexts, where change is likely to be produced by the interaction of a variety of forces, tools such as the log frame and econometric impact evaluation, which rest on a ‘successionist’ understanding of causality, are less relevant. Other, equally legitimate, approaches to understanding change and causality in the natural and social sciences, with corresponding methods for impact evaluation, are:

  • ‘Generative’ causality, involving identifying underlying processes that lead to change (e.g. assessing causality by understanding people’s operative reasons for their actions or behaviour change [10]);
  • A ‘configurational’ approach to causality, looking at how outcomes follow from a fruitful combination of attributes [11, 12].

Based on a configurational understanding of causation, where a certain number of factors are seen as conditions required for success, it is valuable to look at instances that represent different combinations of these factors being present and absent, and to analyse which conditions truly are necessary and sufficient. It is particularly crucial to look into instances where all of the factors are present but the ‘success’ criterion is not. This is the approach taken by ‘qualitative comparative analysis’ (QCA), a method pioneered by Charles Ragin [13].
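
As a rough illustration of this logic (a simplified toy version rather than Ragin's full method, with invented cases and condition names), the following compares binary condition profiles across cases to test necessity and sufficiency, and flags the telling instances where every condition is present but success is absent:

```python
# Simplified sketch of the core logic behind qualitative comparative
# analysis (QCA): test which binary conditions are necessary or
# sufficient for 'success' across cases. Cases and names are invented.

cases = [
    {"leadership": 1, "funding": 1, "local_rules": 1, "success": 1},
    {"leadership": 1, "funding": 0, "local_rules": 1, "success": 1},
    {"leadership": 0, "funding": 1, "local_rules": 1, "success": 0},
    {"leadership": 1, "funding": 1, "local_rules": 0, "success": 0},
    {"leadership": 1, "funding": 1, "local_rules": 1, "success": 0},  # the telling case
]
conditions = ["leadership", "funding", "local_rules"]

for c in conditions:
    # Necessary: the condition is present in every successful case.
    necessary = all(case[c] for case in cases if case["success"])
    # Sufficient: success occurs whenever the condition is present.
    sufficient = all(case["success"] for case in cases if case[c])
    print(f"{c}: necessary={necessary}, sufficient={sufficient}")

# The especially informative instances: all conditions present, yet no success.
deviant = [case for case in cases
           if all(case[c] for c in conditions) and not case["success"]]
print(f"{len(deviant)} case(s) with every condition present but no success")
```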

Generative approaches are particularly relevant for understanding how programme mechanisms interact with different contexts: Pawson’s [14] ‘realist evaluation’ considers how a programme may function through several different causal mechanisms, each interacting with potential contexts to produce an outcome. For example, the literature shows that the influence of research on policy plays out in very different ways depending on whether the government happens to have an interest in the issue or the capacity to respond [15]. The corresponding method for systematic review, ‘realist synthesis’, may be a highly useful approach for bringing together the relevant evidence to address complex problems.
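
One simple way to organise evidence in this spirit (a hypothetical sketch, not a tool from Pawson's work) is to record each observation as a context-mechanism-outcome (CMO) configuration and group outcomes by context, making visible where a given mechanism fires and where it does not:

```python
# Hypothetical sketch of organising evidence as context-mechanism-outcome
# (CMO) configurations, in the spirit of realist evaluation. The records
# below are invented for illustration.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class CMO:
    context: str    # e.g. whether government is interested and has capacity
    mechanism: str  # how the programme is thought to produce change
    outcome: str    # what was actually observed

evidence = [
    CMO("govt interested, has capacity", "research briefs to officials", "policy uptake"),
    CMO("govt interested, low capacity", "research briefs to officials", "interest, no action"),
    CMO("govt uninterested", "research briefs to officials", "no uptake"),
]

# Group outcomes by context to see where the same mechanism fires and where it fails.
by_context = defaultdict(list)
for record in evidence:
    by_context[record.context].append(record.outcome)

for context, outcomes in by_context.items():
    print(f"{context}: {outcomes}")
```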

Box 10: Improving irrigation in Nepal

An innovative programme in the central hills of Nepal shows how implementation can take into account the need for self-organisation, and how tools for learning can be embedded in implementation in ways that respect complexity.

Efforts to improve irrigation systems in South Asia have tended to take a technocratic approach, hiring external water engineers to construct modern systems to replace those that farmers have built. However, despite substantial investments, there has been limited long-term success. An intervention programme covering 19 irrigation systems in the central hills of Nepal, run by the Water and Energy Commission Secretariat and the International Irrigation Management Institute, took a different approach. Rather than attempting to impose solutions on farmers, the programme was sensitive to the need for self-organisation in the following ways:

  • Mapping existing efforts: Before implementation, an inventory of all existing farmer-managed irrigation systems was prepared and analysed for each system’s potential to expand and the likely impact of expansion.
  • Farmer-led planning: Farmers provided full (ranked) priorities for irrigation system improvements, and had a veto over any engineering plans inconsistent with their preferences. They also drew up their own rules for managing the systems and works.
  • Working on the basis of commitment: Farmers’ willingness to be involved was a prerequisite of the programme, as were a local management body or group and the identification of local leaders. Funding for improvements was deliberately not provided in full.

In line with this, tools for linking knowledge with implementation were embedded in the following ways:

  • Multi-skilled implementation teams: Teams of engineers, overseers, agriculturalists, social scientists and people with construction skills worked in tandem with local decision-making groups to manage works.
  • Peer-to-peer learning: Farmers’ skills were improved through farmer-to-farmer training tours, designed to transfer experience from well-managed systems through site visits and informal exchanges; farmers from well-managed systems acted as consultants.
  • Facilitated deliberation: The integration of new ideas was facilitated through guided discussions between farmers and by coinciding training tours with meetings of local decision-making bodies.

In addition, the evaluation of the programme was designed to take into account the complex configuration of forces shaping its impact, rather than viewing change as a simple additive process. It focused on understanding how unfolding patterns of irrigation performance were shaped by a number of key variables in different contexts. Using qualitative comparative analysis, the evaluation found that investments in infrastructure improved the technical efficiency of systems only in the short term, with efficiency gains withering away in harsh physical environments. However, the infrastructure works did catalyse sustained improvements in water adequacy, by providing incentives for collective action and opportunities to build functioning working relationships, and these persisted long after the programme ended.

Source: Lam and Ostrom [16].

Next part (section 3.3.3): Realistic foresight.

Article source: Jones, H. (2011). Taking responsibility for complexity: How implementation can achieve results in the face of complex problems. Overseas Development Institute (ODI) Working Paper 330. London: ODI. (https://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/6485.pdf). Republished under CC BY-NC-ND 4.0 in accordance with the Terms and conditions of the ODI website.

References and notes:

  1. Planning, Monitoring and Evaluation.
  2. This is referred to in various ways, for example as a ‘logic model,’ ‘programme theory’ or ‘roadmap.’
  3. Whelan, J. (2008). Advocacy Evaluation: Review and Opportunities. Brisbane: The Change Agency.
  4. Monitoring & Evaluation.
  5. Evaluators often have to construct the ToC from the assumptions and ideas implicit in a project’s conception and implementation if there has not been one constructed explicitly already, but this may often be ‘too late’ for it to provide the strategic benefits it could have.
  6. Smutylo, T. (2001). ‘Crouching Impact, Hidden Attribution: Overcoming Threats to Learning in Development Programs.’ Ottawa: Evaluation Unit, IDRC.
  7. Sridharan, S., and Nakaima, A. (2010). ‘Ten Steps to Making Evaluations Matter.’ Evaluation and Program Planning 34(2):135-146.
  8. Rogers, P. (2008). ‘Using Programme Theory to Evaluate Complicated and Complex Aspects of Interventions.’ Evaluation 14(1): 29-48.
  9. Funnell, S. and Rogers, P. (2011). Purposeful Program Theory. San Francisco: John Wiley and Sons.
  10. Bhola, H. (2000). ‘A Discourse on Impact Evaluation: A Model and Its Application to a Literacy Intervention in Ghana.’ Evaluation 6(2): 161-178.
  11. Pawson, R. (2002). ‘Evidence-based Policy: The Promise of “Realist Synthesis.”’ Evaluation 8(3): 340-358.
  12. It is also possible to assess the counterfactual using non-experimental theory-driven methods, such as ‘process tracing,’ which examines causation as part of a theory focusing on a sequence of causal steps.
  13. Ragin, C. (1989). The Comparative Method: Moving Beyond Qualitative and Quantitative Strategies. Berkeley, CA: University of California Press.
  14. Pawson, R. (2002). ‘Evidence-based Policy: The Promise of “Realist Synthesis.”’ Evaluation 8(3): 340-358.
  15. Carden, F. (2009). Knowledge to Policy: Making the Most of Development Research. Ottawa: IDRC.
  16. Lam, W. and Ostrom, E. (2010). ‘Analyzing the Dynamic Complexity of Development Interventions: Lessons from an Irrigation Experiment in Nepal.’ Policy Sciences 43(1): 1-25.

Harry Jones

Author of the Overseas Development Institute (ODI) paper "Taking responsibility for complexity: How implementation can achieve results in the face of complex problems."
