How measuring impact gets in the way of real world change

By Toby Lowe


Why is the idea that we can measure our impact to understand how well we are performing fundamentally flawed? Why is it impossible to “demonstrate your impact” in complex environments?

Although the idea of measuring impact is seductive, almost all useful social change is achieved as part of a complex system. In other words, your work is a small part of a much larger web of entangled and interdependent activity and social forces.

The systems map of the outcome of obesity, shown in the figure below, illustrates this perfectly – it shows all the factors contributing to people being obese (or not), and all the relationships between those factors.

This is the reality of trying to make impact in the world – your actions are part of a web of relationships – most of which are beyond your control, many of which are beyond your influence, quite a few of which will be completely invisible to you.

All of these things combine with your actions to create impact in the world. Let’s work this example through using the obesity systems map. Say that you’re one of the people operating in the bottom right corner of this system – you’re providing “healthcare and treatment options” to address obesity. Let’s say you’re delivering weight loss programmes in neighbourhoods. How would you distinguish the impact of your weight loss programme from the influence of all the other factors in this system?

Short answer – you can’t. Someone on your programme sees a film that changes their perspective on the meals they cook. Someone on your programme changes jobs, to a place with a canteen where they only serve healthy options. Someone is made redundant, so they can’t afford to buy organic food. What was the impact of your programme in these situations?

Systems map of the outcomes of obesity (Source: UK Government’s Foresight Programme, 2007, p. 129)

This reveals a fundamental truth about the nature of complex systems. In a complex system, it is impossible to distinguish the effect of particular actors on the overall pattern. This is because complex systems produce emergent, nonlinear behaviour. The tiniest change in input variables creates potentially huge changes in results. Consequently, you can’t produce a reliable counterfactual in a complex system. (You can’t say what would have happened if X wasn’t present). And if you can’t produce a reliable counterfactual, then you cannot reliably identify the impact of your activity.
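The sensitivity described above can be illustrated with a minimal sketch (this is not from the article; the logistic map is a standard textbook example of chaotic dynamics, used here purely as an analogy for nonlinear systems):

```python
def logistic_trajectory(x0, r=3.9, steps=50):
    """Return the orbit of x -> r * x * (1 - x) starting from x0.

    With r = 3.9 the map is in its chaotic regime: nearby starting
    points separate exponentially fast.
    """
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting conditions differing by one part in a million...
a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)

# ...diverge dramatically within a few dozen steps, so the "effect"
# of the tiny perturbation cannot be isolated after the fact.
divergence = max(abs(x - y) for x, y in zip(a, b))
print(f"initial gap: 1e-06, largest later gap: {divergence:.3f}")
```

This is why a counterfactual is unreliable here: rerunning the "same" system with one input nudged produces a trajectory that soon bears no resemblance to the original, so no stable baseline exists against which to measure your contribution.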

Impact isn’t “delivered”

If we want to achieve impact in the world, the crucial uncomfortable truth that must be faced (from the perspective of traditional management thinking) is that impact isn’t something that can be “delivered.” In fact, the whole “delivery” mindset is damaging to creating impact in the real world.

We have been encouraged to believe a fantasy – that we can “deliver” impact through a linear planning process. This fantasy appeals because it makes us feel more in control of the world than we actually are. But when we look at the actual evidence on how outcomes are made (like the systems map of obesity), we can see that this kind of programme logic model is not an accurate or robust portrayal of how outcomes are really made.

Thanks to Business Illustrator for permission to use the figure

When we think of impact as something we can ‘deliver’, we are pretending to ourselves in order to make the task of managing social change feel easier. But the purpose of good management is not to make the task of management easier; it is to confront the uncomfortable messiness of how the world actually works. If we care about making impact in the real world, we need to stop pretending.

This kind of pretending matters because it makes the work of achieving real change in the world harder to do.

At best, it wastes everybody’s time attempting the impossible – time spent “demonstrating your impact” or creating linear programme logic models is essentially time spent inventing a fantasy.

But the truly pernicious aspect of “demonstrating your impact” or linear programme planning is when it is used for accountability or governance purposes. When people, teams or organisations are rewarded for demonstrating impact – when funding is given to those who can ‘prove’ their impact, or contracts awarded on this basis, or promotions/pay rises are secured in this way – it corrupts the information we need to improve how things are working.

We know this is the case because this is what the evidence overwhelmingly and unarguably tells us happens when we seek to use “impact” (or outcomes) for accountability, governance and/or performance management purposes. The key point is summed up in Campbell’s Law: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor” (Campbell, 1979, p.85).

What can we do instead?

If we’re not asking organisations to demonstrate their impact, how can we create accountability for spending resources well?

This is a great question. Fortunately, it also has a couple of straightforward answers.

  1. Remember – asking people and organisations to demonstrate their impact doesn’t create accountability – it creates a bunch of fantasy data.
  2. Ask people and organisations to be accountable for experimenting and learning together – collaboratively. Create accountability for enabling the healthy systems, which are how positive outcomes are actually achieved in the real world, as highlighted in my previous i2Insights contribution, Managing complexity with human learning systems.

The key shift is to move from funding for “demonstrable” impact (which, paradoxically, makes real impact harder to achieve) to funding for collaborative learning and adaptation. This is how real impact is made.

What has your experience been? Do these ideas resonate with you?

To find out more:

This i2Insights contribution is an extract from Lowe, T. (2023). Explode on impact. Medium, 23rd June. (Online): https://toby-89881.medium.com/explode-on-impact-cba283b908cb

References:

Campbell, D. T. (1979). Assessing the impact of planned social change. Evaluation and Program Planning, 2: 67-90.

UK Government’s Foresight Programme. (2007). Tackling obesities: Future choices – Project report. United Kingdom Government Office for Science: London, United Kingdom. (Online): https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/287937/07-1184x-tackling-obesities-future-choices-report.pdf (PDF 10.3MB).

Biography: Toby Lowe PhD (https://www.linkedin.com/in/toby-lowe/) is a visiting professor of Public Management, Centre for Public Impact Europe, on secondment from Newcastle Business School, Northumbria University, Newcastle, UK. He is interested in public management/administration, complexity and systems thinking.

15 thoughts on “How measuring impact gets in the way of real world change”

  1. This was a great post with direct relevance to my work, where I hold people accountable mainly by using linear logic models to assess the performance of interventions that happen in complex settings. You are right in pointing out the severe limitations of these linear logic models when it comes to impact accounting. But there are some advances in quantitative methods that can be applied to complex dynamics: 10.1073/pnas.2215676120 is one recent example. I believe that we will have better modelling and empirical tools in the near future. This will help us improve our understanding of, let’s say, obesity and use quantitative measures for decision making.

  2. Thanks Toby, great to see and good prompt for further thinking. Given the various global challenges at play – climate and biodiversity crises, etc. – there is also an urgency, which may require us to be more mission-driven. Though critical, I’m not sure if experimenting and learning together is a sufficient mission on its own to address these urgent challenges.

      • Thanks Toby – so in terms of funders evaluating value of investment, there does, presumably, need to be some form of evaluation of purpose (and how much progress teams make toward that purpose) in your view?

        • The key point from a complexity perspective is that it is impossible to disentangle the effect of the funded intervention from other causal forces. So any ‘progress’ that is observed cannot be attributed to the funding, or to the intervention being funded. So – whilst it is useful to understand what is happening with the overall system of interest, that information isn’t particularly useful to help projects/organisations to answer the question: how could we be doing our work better?

          The governance questions that we have seen funders start to ask, and which are useful in enabling the work to work better, are: how are you experimenting collaboratively (with other actors in this system)? What have you learnt? How have you learnt it? What adaptations to your practices are you making from this learning?

          Other useful governance questions concern the ‘health’ of the system – and so can be asked of whoever is playing a system stewardship role – what is the quality of relationship between actors in this system? how well are they able to collaborate and learn together? Are all relevant voices/perspectives being included? (are the boundaries of the system drawn appropriately?) Are power differentials being tackled?

          • Thanks Toby, that’s useful. In case of interest, we’re just in the process of finalising evaluation plans for our seven interventions and programme-wide – link to TRUUD project below – so these are questions we’re asking ourselves right now. We agree that mapping complex causal pathways in such complex systems is impossible, and I like your focus on collaboration* and the ‘health’ of the system.

            Our main focus on intervention and programme assessment is on stakeholder engagement and what they think will happen, e.g.: If we do that, what do you think is likely to happen, or do you think that will be effective? Exact questions yet TBC, and we also have issues of effective representation, not least given inaccessibility of senior decision-makers to workshops etc, and time needed for interviews.

            We’re actually presenting on our group’s evaluation approach at the Systems Evaluation Network (SEN) event in March. The SEN network is mainly health-world in origin and on physical activity interventions and obesity etc., but they’re keen to shift ‘upstream’. I think it’s open to anyone. It’s mainly local government public health officers (c.400 members) at the moment. All grappling with the same issues on complexity. I sometimes wonder if we’re over-complicating…we may need more children to tell us adults what to do!

            https://truud.ac.uk/
            @systemsevalnet

            *On collaboration, we have a meta-research/research-on-research work package that’s been tracking this over the last four years, we wrote a paper recently on research operationalisation that drew on Gabriele’s framework, and we’re just about to complete a paper titled: “What is “good” co-production in the context of planetary health research?”.

  3. What an insight! Thanks for Campbell’s Law!
    If we talk about impacts in terms of performance it makes full sense, but not quite when we talk about environmental impacts, for instance (or other similar impacts that are not about the performance of a project or program).

    • Hi Daniel. Yes – I am specifically talking about “impact” as it’s used in the public/performance management sense. (And yes, isn’t Campbell’s Law great? If I had my way, it would be pinned to the desk of every public servant in the world….)

  4. I agree with many of the points made, but I also think that there are meaningful ways to steer projects so that they are more likely to have impact. This is addressed in the paper:
    Hering, J. G., Hoffmann, S., Meierhofer, R., Schmid, M., & Peter, A. J. (2012). Assessing the societal benefits of applied research and expert consulting in water science and technology. GAIA: Ecological Perspectives for Science and Society, 21(2), 95-101. https://doi.org/10.14512/gaia.21.2.6
    available open access at: https://www.dora.lib4ri.ch/eawag/islandora/object/eawag:8854

  5. Finally, somebody who comprehends the impossibility of predicting outcomes in complex systems. And more, who recognizes the impossibility of measuring the impact of individuals on the overall system outcome.

    Churchill once said, “Man will occasionally stumble over the truth, but usually he just picks himself up and continues on.”

    In the 1990s, mainstream management stumbled over the truth in Ed Deming – i.e., the counterproductivity of merit performance systems in complex (or even just complicated) systems.

    But predictably, they just picked themselves up and didn’t just continue on — they actually reverted and regressed to the way they had always done business. At least in regard to the way they assessed individual performance.

    Good article! 😊

