This article is part of a series critiquing nudge theory.
The 2008 publication of Richard Thaler and Cass Sunstein’s book Nudge: Improving Decisions About Health, Wealth, and Happiness1 saw the rapid emergence of the idea of using nudge theory2 to change human behaviour:
Nudge theory is a concept in behavioral economics, decision making, behavioral policy, social psychology, consumer behavior, and related behavioral sciences that proposes adaptive designs of the decision environment (choice architecture) as ways to influence the behavior and decision-making of groups or individuals. Nudging contrasts with other ways to achieve compliance, such as education, legislation or enforcement.
In 2010, David Cameron, then Prime Minister of the United Kingdom, established3 a “nudge unit” in the Cabinet Office with the aim of using nudge theory to improve government policy and services and deliver cost savings. Known formally as the Behavioural Insights Team4, the unit took advice from Richard Thaler, co-author of the Nudge book. Stimulated by the UK nudge unit, other national governments and international organisations have similarly adopted nudge theory.
However, as documented in a RealKM Magazine article series, nudge theory began to be criticised almost as soon as it emerged. In the first article in that series, I summarised three key initial criticisms of the UK government’s application of nudge theory that had been put forward in the academic literature. These criticisms, as put forward by their authors, are:
- nudge theory ignores the full range of determinants of behaviour5
- the policy recommendations of the UK nudge unit are not actually in accordance with nudge theory6
- nudging is paternalistic, manipulative, and sometimes deceitful7.
I subsequently provided a case study related to two of these criticisms in the article “Nudge initiative creates confusion and undermines trust.”
Despite the academic criticism, nudge theory remained popular. But in 2018, a significant and startling series of events started to turn the tide: high-profile Cornell University food researcher Brian Wansink’s spectacular fall from grace. Wansink was well-known for his landmark research showing that altering food environments could nudge people into healthier eating, for example through reduced serving sizes – the idea that “small plates lose weight.” Only Wansink’s research didn’t actually show this – he is responsible for one of the worst cases of academic misconduct in recent years, as I document in the article series “Simplistic solutions to complex problems turns behavioural science into a dangerous pseudoscience.”
In the time since, further critiques of the application of nudge theory have been published in the academic literature, for example as reported in the nudge theory series articles “Does nudging always result in better decisions?” and “The high failure rate of behavioral nudges.”
UK COVID-19 nudge groupthink
A further and even more significant crisis of confidence for nudge theory came in 2020 and 2021. In early 2020, as the COVID-19 pandemic erupted globally, the UK Government initially sought to control the spread of COVID-19 through a behavioural nudge approach rather than the lockdowns and movement restrictions being imposed in other countries. Decision-making in this regard was informed by advice from the UK Government’s Scientific Advisory Group for Emergencies (SAGE).
As reported in the media article “Keep Calm and Wash Your Hands: Britain’s Strategy to Beat Virus”8, the SAGE approach used nudge unit mathematical behavioural models. The article quotes SAGE member David Halpern as saying that the reason for taking this approach was the potential for restrictions to have unintended consequences, in particular the risk of “behavioural fatigue.” Professor David Halpern is Chief Executive of the UK Behavioural Insights Team, and is also the What Works National Adviser appointed by the Prime Minister to help the UK Government apply evidence to public policy. As discussed above, the Behavioural Insights Team was the world’s very first nudge unit, and David Halpern has been its leader since its inception in 2010.
However, as the media article “Nudge theory is a poor substitute for hard science in matters of life or death”9 reports, the SAGE approach immediately triggered serious concern, with more than 600 behavioural scientists signing an open letter10 questioning the evidence base for the notion of behavioural fatigue. Endorsing this concern, article author Sonia Sodha writes:
Rightly so: a rapid evidence review of behavioural science as it relates to pandemics only fleetingly refers to evidence that extending a lockdown might increase non-compliance, but this turns out to be a study about extending deployment in the armed forces. “Behavioural fatigue is a nebulous concept,” the review’s authors later concluded in the Irish Times.
This is a common critique of behavioural economics: some (not all) members of the discipline have a tendency to overclaim and overgeneralise, based on small studies carried out in a very different context, often on university students in academic settings …
The problem with all forms of expertise in public policy is that it is often the most formidable salespeople who claim greater certainty than the evidence allows who are invited to jet around the world advising governments. But the irony for behavioural scientists is that this is a product of them trading off, and falling prey to, the very biases they have made their names calling out.
I can only imagine how easy it might have been for [Prime Minister Boris] Johnson to succumb to confirmation bias in looking for reasons to delay a lockdown: what prime minister wants to shut down the economy? And it is the optimism bias of the behavioural tsars that has led them to place too much stock in their own judgment in a world of limited evidence.
Sodha’s criticisms have since been validated by the UK House of Commons Health and Social Care and Science and Technology Committees’ report “Coronavirus: lessons learned to date.”11 The issues explored by the committees included the reasons for the initial delay in the UK Government implementing a full lockdown, as discussed in items 96-114 of the report. The committees found that:
- the assumption that the public would have limited tolerance of lockdown restrictions, that is, they would experience behavioural fatigue, “turned out to be wrong”
- taking a more precautionary approach in the first weeks of the pandemic, despite the SAGE advice, could have led to better overall outcomes
- the initial scientific advice from SAGE was not sufficiently challenged by elected decision-makers
- scientific advice was not sufficiently internationally diverse, with all but one of the 87 people listed as having participated in at least one meeting of SAGE being from UK institutions
- mathematical modelling played an influential role in UK scientific advice, despite academic skepticism of this modelling.
In summary, the committees state that, in regard to the delayed lockdown:
In the first three months the [no lockdown] strategy reflected official scientific advice to the Government which was accepted and implemented … The fact that the UK approach reflected a consensus between official scientific advisers and the Government indicates a degree of groupthink that was present at the time which meant we were not as open to approaches being taken elsewhere as we should have been.
Hooray! A good news meta-analysis! But is it?
The mounting academic, public, and political criticism of the application of nudge theory hasn’t resulted in the abandonment of nudge theory. Rather, the view has emerged that while nudge theory may not work in some circumstances, it can and does still succeed in plenty of others. This view appeared to be vindicated by a 2022 meta-analysis12 published in the Proceedings of the National Academy of Sciences (PNAS), promoted through an enthusiastic news alert13. The meta-analysis authors state that:
Changing individuals’ behavior is key to tackling some of today’s most pressing societal challenges such as the COVID-19 pandemic or climate change. Choice architecture interventions aim to nudge people toward personally and socially desirable behavior through the design of choice environments … Here we quantitatively review over a decade of research, showing that choice architecture interventions successfully promote behavior change across key behavioral domains, populations, and locations.
However, as Stuart Ritchie contends in the article “Nudged off a cliff” in his Science Fictions newsletter, a series of rebuttal letters to the meta-analysis puts this hype in doubt. The newsletter follows on from Ritchie’s 2020 book of the same name14, which exposed fraud, bias, negligence, and hype in science.
In addition to discussing the key issue raised in the rebuttal letters, Ritchie notes that the meta-analysis included papers on which disgraced food science researcher Brian Wansink, whose work was discussed above, was lead author or co-author. Although the analysed papers were not among those that had been retracted by journals, the inclusion of work by a researcher found to have committed academic misconduct is concerning, and could have biased the very strong meta-analysis results for food outcomes.
The critical problem raised in the rebuttals and discussed by Ritchie is the issue of “publication bias.” An examination of the data used in the meta-analysis shows that its results have been significantly influenced by publication bias. Ritchie advises that:
[publication bias occurs] because scientists don’t bother publishing – or are substantially less likely to publish – nudge studies that don’t find positive effects.
The authors of the meta-analysis had identified this possibility, but their publication bias analysis wasn’t integrated into the meta-analysis as a whole. In response, the writers of one of the rebuttal letters carried out their own meta-analysis using a newly developed statistical technique called “Robust Bayesian Meta-Analysis” (RoBMA). Ritchie reports that:
Using RoBMA, the letter-writers found that not only was there strong evidence for really bad publication bias, and not only did correcting for this reduce the effects to near-zero in almost every case, but that there was, for at least some of the categories, strong evidence that there was no overall effect of nudges in the literature. In other words, their correction pushed the effect of nudges entirely off a cliff.
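The publication bias mechanism at the heart of these rebuttals can be illustrated with a minimal simulation sketch. This is an illustrative example of my own, not code from the rebuttal letters, and it is far cruder than RoBMA: it simply shows that if only the small studies that happen to reach a significant positive result get published, a naive average of the published literature looks strongly positive even when the true effect is zero. All numbers here are hypothetical.

```python
# Illustrative simulation of publication bias (hypothetical numbers).
# True effect is zero, but only "significant" positive studies are published.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.0   # assume nudges have no real effect at all
STUDY_SE = 0.15     # sampling error of each small study's estimate
N_STUDIES = 1000    # number of studies actually conducted

# Each study's estimate is the true effect plus sampling noise.
estimates = [random.gauss(TRUE_EFFECT, STUDY_SE) for _ in range(N_STUDIES)]

# Publication filter: only estimates more than ~1.96 standard errors
# above zero (i.e. "statistically significant" and positive) appear in print.
published = [e for e in estimates if e / STUDY_SE > 1.96]

all_mean = statistics.mean(estimates)          # close to the true effect (zero)
published_mean = statistics.mean(published)    # substantially positive

print(f"Studies conducted: {N_STUDIES}, published: {len(published)}")
print(f"Mean effect, all studies:       {all_mean:+.3f}")
print(f"Mean effect, published studies: {published_mean:+.3f}")
```

In this sketch, a naive average over the published subset recovers a solidly positive effect from a literature whose true effect is exactly zero; detecting and correcting this pattern is what techniques such as RoBMA are designed to do.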
Another of the rebuttal letters makes a point about the variability of nudges. The letter writers argue that because of this variability, the news alert finding reported above that nudges “successfully promote behavior change across key behavioral domains, populations, and locations” is highly implausible. How can there be an “average effect of nudges” across all the different nudge types, behavioural outcomes, contexts, countries, and so on?
But Ritchie cautions that this variability can also mean that close replication studies of similar nudges could reveal that some nudges actually do work. He reports on another recent review study15 that offers potential hope in this regard. The study findings indicate that some nudges do seem to have a real impact, albeit a much smaller one than we’ve been led to believe.
So what should we do in regard to nudge theory until further such research is carried out? In concluding his article, Ritchie provides the following advice:
There’s a massive set of published studies on nudges, and there’s often considerable excitement in the media (egged on by the researchers’ press releases) when a new paper comes out. Inadvertently, by hinting at how serious the effect of publication bias could be, the PNAS meta-analysis puts that whole literature under a cloud. Until academic nudge researchers dramatically up their game – we’re talking Registered Reports, replication studies, open data, open code, the whole thing – we should be extremely sceptical of anyone who leans on this literature to suggest any policy changes.
But even if strong evidence emerges that allows policymakers to move beyond the criticisms of the effectiveness of nudge theory, this evidence base won’t address an equally important aspect of the criticism of nudge theory. As stated at the beginning of this article, this criticism is that nudging is paternalistic, manipulative, and sometimes deceitful16. I’ve already documented a case study where an Australian government was accused of using nudge theory in this way.
Just as we should be extremely sceptical of anyone who leans on the current limited research base to suggest any policy changes, as Ritchie has advised, we should also be extremely sceptical of anyone who uses nudge theory without an open public examination of the ethics of doing so and a thorough consideration of alternative options such as community engagement.
- Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. Connecticut: Yale University Press. ↩
- Wikipedia, CC BY-SA 3.0, Retrieved 1 October 2022. ↩
- Wintour, P. (2010, September 10). David Cameron’s ‘nudge unit’ aims to improve economic behaviour. The Guardian. ↩
- Wikipedia, CC BY-SA 3.0, Retrieved 1 October 2022. ↩
- Bonell, C., McKee, M., Fletcher, A., Haines, A., & Wilkinson, P. (2011). Nudge smudge: UK Government misrepresents “nudge”. The Lancet, 377(9784), 2158. ↩
- Ryan, J. D. (2017). To what extent have the policy recommendations of the Behavioural Insights Team been in accordance with nudge theory? (Master’s thesis, University of Twente). ↩
- Campbell, D. (2017). Cleverer Than Command? Social & Legal Studies, 26, 111-126. ↩
- Hutton, R. (2020, March 11). Keep Calm and Wash Your Hands: Britain’s Strategy to Beat Virus. Bloomberg. ↩
- Sodha, S. (2020, April 26). Nudge theory is a poor substitute for hard science in matters of life or death. The Guardian. ↩
- Hahn, U., Chater, N., Lagnado, D., Osman, M., & Raihani, N. (2020, March 16). Why a group of behavioural scientists penned an open letter to the UK Government questioning its coronavirus response. Behavioral Scientist. ↩
- House of Commons Health and Social Care, and Science and Technology Committees. (2021). Coronavirus: lessons learned to date. Sixth report of the Health and Social Care Committee and third report of the Science and Technology Committee of session 2021-22. ↩
- Mertens, S., Herberz, M., Hahnel, U. J., & Brosch, T. (2022). The effectiveness of nudging: A meta-analysis of choice architecture interventions across behavioral domains. Proceedings of the National Academy of Sciences (PNAS), 119(1), e2107346118. ↩
- Université de Genève. (2022, January 17). Inciting instead of coercing, ‘nudges’ prove their effectiveness. EurekAlert! American Association for the Advancement of Science (AAAS). ↩
- Ritchie, S. (2020). Science Fictions: Exposing Fraud, Bias, Negligence and Hype in Science. Random House. ↩
- DellaVigna, S., & Linos, E. (2022). RCTs to scale: Comprehensive evidence from two nudge units. Econometrica, 90(1), 81-116. ↩
- Campbell, D. (2017). Cleverer Than Command? Social & Legal Studies, 26, 111-126. ↩