1 Introduction

Three days before the EU referendum, an interview with Richard Thaler was posted on the financial information website MarketWatch.com. He acknowledged the Leave campaign was appealing to people’s “guts”, and thought few voters were engaged in cost–benefit analyses, but on balance he expected a win for Remain. “I am not a prognosticator,” he said, “but I would bet on them staying. And I think that there is a tendency, when push comes to shove, to stick with the status quo” (Thaler 2016).

The aim of this study, carried out in September 2017, was to understand what behavioural lessons communicators could learn from the campaign. Harold Clarke, Matthew Goodwin and Paul Whiteley in their 2017 book ‘Brexit’ said that the outcome surprised journalists, academics and pollsters alike. The margin of victory was narrow. If only a small fraction of those who turned out (approximately 635,000 people) had cast their votes the other way, the result would have been different.

In their 2008 book ‘Nudge’, Richard Thaler and Cass Sunstein say that when it comes to politics: “voters… seem to rely primarily on their Automatic System.” The assumption behind this study is not that behavioural science can fully explain the result, but rather that in a close race even marginal gains can make a difference. It asks what happened to the ‘status quo bias’. It considers the importance of messengers, the use of loss aversion in the campaign, and whether positive immigration arguments might have affected the outcome. It also notes how behavioural science was successfully used to increase turnout.

The research outlined here is taken from a broader study based on the MINDSPACE framework (Table 1) that was developed in 2010 by the Cabinet Office and the Institute for Government, and co-authored by Paul Dolan, Michael Hallsworth, David Halpern, Dominic King and Ivo Vlaev. The mnemonic was intended to raise awareness among civil servants of “nine of the most robust (non-coercive) influences on our behaviour” and to encourage the take-up of behavioural science by practitioners. Each of the MINDSPACE influences was considered, and five of the lessons learned are presented here.

Table 1 The MINDSPACE framework.

2 Methodology

The methodology for the study was based on a model trialled by Todd Rogers and Brigitte Madrian with a 2016 Harvard class. A randomised controlled trial (RCT) followed a lengthy survey designed to tire participants out, so that they would fall back on System 1 processing in the final test.

A limitation is that the study was carried out on 14 September 2017, over a year after the vote. Given the passage of time, the survey results are taken as illustrative rather than conclusive, and are discussed in conjunction with contemporaneous data. The RCT provides higher quality evidence and is discussed in more detail below.

Prolific Academic provided 500 people who described themselves as ‘Leave voters’. Those who said in the survey that they had not voted or had voted Remain were excluded, and six dropped out before the RCT, which was not compulsory, leaving 459.

A further limitation is that the sample was not weighted to reflect the Leave vote overall. Participants were younger (average age 39.5), more female (71%) and more pro-Labour (29%) than the average Leave voter. However, the geographical range reflected the overall Leave vote, and the sample had supporters of all major parties in proportions that reflected the results of the 2015 general election.

A final limitation is that, for budget reasons, only Leave voters were considered. No suggestion is made here that this group are unique in their levels of knowledge or their behavioural susceptibility. This study was exploratory, and it would be highly desirable to include Remain voters in any further research for completeness and comparison. The choice to focus on Leave voters reflects commentary (Evans and Menon 2017) that UK politics had become detached from some of the people it was meant to represent.

3 Results

3.1 Whatever happened to ‘status quo bias’?

Richard Thaler, in making his prediction, was referring to ‘status quo bias’, the general tendency for people to disproportionately favour the status quo over alternatives (Samuelson and Zeckhauser 1988). In the MINDSPACE framework, this might be considered as the default. Ahead of the EU referendum, commentators (Whiteley and Clarke 2016) cited the work of Lawrence LeDuc (2003) to suggest that voters might swing towards Remain, the status quo option, as the vote drew nearer. Responding after the vote, LeDuc (2016) said that in previous referenda undecided voters had indeed tended “to gravitate towards the status quo or the less risky option near the end of the campaign”. However, he said the research had also established the existence of “second order effects” which can be “as important as risk aversion”. These include a “desire to chastise the government or the individuals or groups behind the proposal”, misinformation, or a lack of understanding of the issue. For those who pinned their hopes on ‘status quo bias’, did they overlook the small print?

This study did not test whether voters sought to punish the government, nor whether they were fed misinformation. These points have been explored by others (Clarke et al. 2017; Farrell and Goldsmith 2017; Oliver 2016; Shipman 2016), who argue that sections of the UK public had been angered by austerity and a series of financial and political scandals. They also discuss the extent to which claims made during the campaign were distortions, deliberate or otherwise. Nor does this study test the extent to which voters understood the risks. This point has been addressed by Stephen Fisher and Alan Renwick (2018), who said the economic arguments put forward by the Remain campaign were outweighed by a clear expectation that a Leave vote would mean immigration would fall (as most wanted). What this study did address was LeDuc’s third point on the level of understanding. Did this sample possess relevant knowledge, and did they understand the basic terms that were being used in the debate?

Participants were presented with five true/false questions (Footnote 1) about EU membership. Some 61% of participants knew that the EU was a market of over 500 million consumers with which UK companies could trade freely. On three other questions (on Norway and Switzerland, the euro and border controls), the percentage of correct answers was statistically indistinguishable from a guess. On the fifth, which asked whether the UK could stop Turkey joining the EU if other EU members wanted it to join (a reference to the UK’s veto on new member states), only 16% gave the correct answer; statistically, a guess would have been more accurate.
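The phrase “statistically indistinguishable from a guess” can be checked with a simple two-sided test of an observed proportion against chance. The sketch below assumes, purely for illustration, that all 459 participants answered each true/false question; the per-question response counts are not reproduced here.

```python
import math

def guess_test(correct, n, chance=0.5):
    """Two-sided z-test of an observed correct-answer count against chance.

    Returns (observed proportion, z statistic, p value). The normal
    approximation to the binomial is adequate at samples of this size.
    """
    p_hat = correct / n
    se = math.sqrt(chance * (1 - chance) / n)
    z = (p_hat - chance) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))
    return p_hat, z, p

# Illustrative counts only: 16% correct (the Turkey/veto question) clearly
# differs from a coin flip, while 52% correct does not.
_, _, p_veto = guess_test(int(0.16 * 459), 459)
_, _, p_near_chance = guess_test(int(0.52 * 459), 459)
```

On these assumed counts, 16% correct rejects the “guessing” hypothesis decisively, while anything close to 50% does not, which is the sense in which three of the five questions were indistinguishable from a guess.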

Presented with multiple-choice questions, 97% over-estimated the percentage of the UK’s budget spent on EU membership; 63% over-estimated the number of migrants in the country; and less than a third (29%) gave the correct answer for the proportion of the UK’s trade that went to other EU countries (Footnote 2).

What surprised commentators (Roberts 2016) at the time of the referendum was a Google Trends report on 24 June 2016, which showed that “What is the EU?” was the second most searched-for phrase about the European Union in the hours after the vote. This study asked Leave voters how well they understood the basic terms being used in the debate (Fig. 1). It found that while a majority (61%) would feel at least reasonably confident explaining “the European Union” to someone else, fewer than half would feel the same way about explaining “the single market” (42%) or “exports of goods and services” (46%). Only a quarter (26%) would feel at least reasonably confident explaining “tariffs” and less than a fifth (19%) would feel confident explaining “the Norway option”.

Fig. 1 Knowledge of terms used in the EU debate

3.2 Which messengers matter?

What lessons can be learned about the messengers used in the campaign, and the media used to communicate with voters? The MINDSPACE framework advises that “we are heavily influenced by who communicates information”. It says the most effective messengers are likeable, similar and authoritative.

Decisions about messengers can be taken very quickly. Charles Ballew and Alexander Todorov (2007) found that voters can take just a tenth of a second to form a view about a candidate’s competence from looking at their face alone. According to Gabriel Lenz and Chappell Lawson (2011), “appealing-looking politicians benefit disproportionately from television exposure, primarily among less knowledgeable individuals.”

In this study, to test System 1 responses, participants were primed to provide answers quickly, then shown photos, against identical backgrounds and with no name captions, of four leading politicians: David Cameron (Prime Minister at the time of the vote, and Remain campaign leader); Boris Johnson (who led the Leave campaign); Labour leader Jeremy Corbyn; and UK Independence Party (UKIP) leader Nigel Farage. They were asked for their views on these people’s likeability, their similarity to the participant and their perceived authority on EU issues. A notable caveat is the time that had elapsed between the survey and the vote. This period saw Cameron and Farage resign as party leaders, Johnson appointed Foreign Secretary, and Corbyn lead the Labour Party into the 2017 general election. For this reason, the study also asked how opinion had changed since the vote. The net scores are shown in Table 2.

Table 2 Net scores for leading politicians

The question is whether these results, taken together, provide a more nuanced picture than standalone questions about competence. Johnson beat Cameron on every measure tested, but his likeability comes out as the dominant characteristic. This was reflected by Clarke et al. (2017) who found that “feelings about Johnson had very strong effects on the probability of casting a Leave ballot”. Corbyn was criticised by Labour Party members for his perceived lack of enthusiasm for EU membership during the campaign (Farrell and Goldsmith 2017; Shipman 2016). These results suggest that similarity was his strong suit for this audience. Farage was regarded as authoritative on the subject to which he has devoted much of his political life. While his net likeability score was negative, 14% said he was the politician they liked the most. According to Clarke et al. (2017), Farage appealed to different Eurosceptic voters than Johnson. Rather than being a “toxic asset”, Farage helped Leave to maximise its support, they said.

The next set of questions addressed a point raised by Leave campaigner Michael Gove (Mance 2017) when he said: “People in this country have had enough of experts.” This study looked at how authoritative Remain’s messengers were, testing them alongside pro-Leave messengers such as the entrepreneur James Dyson and the TaxPayers’ Alliance. Again, only faces or logos were shown to test System 1 responses, and participants were primed to respond quickly, on the assumption that voters would pay only fleeting attention to campaign material and would not necessarily read small-print captions.

The study found that out of the nine Remain messengers tested, only three were recognised by more than half of this audience. Two-thirds did not recognise organisations such as the Confederation of British Industry (CBI) and the International Monetary Fund (IMF) (Fig. 2). Many Leave voters felt that none of Remain’s messengers had views that mattered—with one notable exception. The National Health Service (NHS) was recognised by 93% and almost two-thirds (65%) felt its opinion counted. However, according to Clarke et al. (2017), many voters thought that a vote for Leave would help the NHS.

Fig. 2 Comparing messengers

A third set of questions sought to establish whether ministers held credibility with these voters. In 2016, the British Election Study team found that a tendency to vote Leave was correlated with a tendency to trust “the wisdom of ordinary people” over the “opinion of experts”. For this study, a photograph of the then Business Secretary Sajid Javid was produced and labelled as “Business Secretary”. Another photo showed an anonymous businessman labelled only as “Local businessman”. These were tested alongside the logos for the Department for Business, Innovation and Skills (BIS) and the CBI. Participants were asked who they trusted most and least on three different measures. In each case, the anonymous “Local businessman” was ranked as the most trusted and the cabinet minister was ranked as the least trusted (Fig. 3).

Fig. 3 Who do you trust the most and least?

So what messengers were effective for Leave voters? In ‘Influence’, Robert Cialdini (2007) describes ‘social proof’ as a powerful weapon: “We view a behavior as more correct in a given situation to the degree that we see others performing it”. In the MINDSPACE framework, this effect falls into the category of norms. The question for communicators is which norms count in any particular context. Fieldhouse (2014) established that the views of those closest to a voter can affect their political loyalties. In this study, participants were asked how those closest to them had voted. It found that Leave voters were more likely to vote like family and friends than work colleagues (Fig. 4).

Fig. 4 Which social norms mattered?

As a final point on messengers, Craig Oliver, Head of Communications at Number 10 during the referendum campaign, wrote in his 2016 book ‘Unleashing Demons’ that some voters were “almost unreachable”. It is fair to say many were disengaged (Curtice 2016). Of this sample, 40% said they did not read the newspapers, and a third (32%) said they did not follow news and current affairs at all. Yet on the basis that the medium is a messenger, the study tested which outlets these Leave voters did pay attention to.

Whitehall press offices often seek to get their ministers on the influential ‘Today’ programme on BBC Radio 4. This study found that only 5% of the Leave voters sampled listened to news and current affairs on Radio 4. Of the newspapers, even if campaigners had succeeded in getting front-page coverage every day in The Times, The Telegraph, The Independent, The Guardian and The Financial Times, looking at their combined readership, such activity would have reached only one in five of this audience (18%). The main newspaper that counted for these Leave voters was the Daily Mail, which was read by 19%, while 53% watched the BBC news. However, 67% of this sample said Facebook was a source for information on news and current affairs. It ranked well ahead of “online news websites” at 38% and Twitter at 30%.

In a 2017 blog, Dominic Cummings, campaign director of the official Vote Leave campaign, said that Vote Leave sent out nearly “a billion targeted digital adverts” at the end of the campaign as “adverts are more effective the closer to the decision moment they hit the brain”, a point that draws on the ‘availability bias’. Given the opacity of social media, he said this activity went largely unnoticed by political journalists. In his words:

It is actually hard even for very competent and determined people to track digital communication accurately, and it is important that the political media is not set up to do this. There was not a single report anywhere (and very little curiosity) on how the official Leave campaign spent 98% of its marketing budget.

The unofficial Leave.EU campaign was also active on social media. In ‘Bad Boys of Brexit’ (2016), its leader Arron Banks claims to have reached nearly 20 million people a week in this way.

Over a year had passed between this survey and the vote, so recollections may have been faulty, but 19% of participants said they remembered being personally targeted on social media or online by Leave.EU and 17% by Vote Leave. Comparable figures were 16% for the Remain campaign and 15% for UK government material.

In his 2017 book ‘What’s Your Bias’, Lee de-Wit discusses the use of social media during the EU referendum; and the activities of the political consultancies Cambridge Analytica and AggregateIQ have come under considerable scrutiny since the vote (Gibney 2018; Information Commissioner’s Office 2018). For behavioural scientists, what is of interest is that Facebook and Twitter tap into System 1 thinking (Sharot 2017). Sunstein describes in ‘Going to Extremes’ (2009) how ‘information cascades’ can develop on social media. Where people lack information, they give more authority to others’ views, particularly when many appear to believe the same thing. There is some evidence that Twitterbots were used during the campaign with a view to creating such effects. Marco Bastos and Dan Mercea (2017) established that 13,493 Twitterbots tweeted with a “clear slant towards the Leave campaign” before disappearing shortly after the vote. What is clear from this study is that messages targeted towards Leave voters via social media were sufficiently effective that they were remembered over a year after the vote. Facebook was a way to reach up to two-thirds of this audience, and if campaign material were shared, then family and friends were likely to be effective messengers.

3.3 Using loss aversion to frame incentives

MINDSPACE advises that our responses to incentives are shaped by “predictable mental shortcuts such as strongly avoiding losses”. Daniel Kahneman and Amos Tversky established that people care twice as much about potential losses as about potential gains (Kahneman 2011). This study looked at how loss aversion was used in two contexts in the referendum campaign: with the slogan “Take Back Control”; and with the framing of the leading economic incentives.
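The loss–gain asymmetry Kahneman and Tversky describe is usually modelled with a prospect-theory value function. The sketch below uses Tversky and Kahneman’s published 1992 parameter estimates (alpha = 0.88, lambda = 2.25) purely as an illustration; these figures are not drawn from this study.

```python
# Prospect-theory value function: losses loom larger than gains.
# Parameters are Tversky and Kahneman's 1992 median estimates
# (illustrative only; not fitted to any data in this study).
ALPHA = 0.88   # diminishing sensitivity to both gains and losses
LAMBDA = 2.25  # loss-aversion coefficient: losses weigh ~2x gains

def value(x):
    """Subjective value of a monetary gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)
```

Under these parameters a £100 loss is felt 2.25 times as strongly as a £100 gain, which is the instinct a loss-framed slogan such as ‘Take Back Control’ plays on.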

Cummings (2017) said that he began with the slogan of ‘Take Control’ but amended it to ‘Take Back Control’ as: “‘back’ plays into a strong evolved instinct—we hate losing things, especially control”. According to a British Election Study report (2016a, b), control was an issue for Leave voters. Those with an ‘external locus of control’ were “much more likely” to vote Leave than those with an ‘internal locus of control’.

This study asked participants which slogan they thought worked better. Four times as many opted for “Take Back Control” as for “Take Control” (67% vs. 16%).

Important economic arguments in the campaign were also loss-framed: the £350 m a week which the Leave campaign said was being sent to the EU, and the £4300 per year which HM Treasury (HMT 2016) said each UK household would lose if voters opted for Leave (albeit after 15 years, in one of three potential scenarios).

Putting to one side questions about the credibility of these figures and any time preference effects, participants were asked to compare the two figures at face value in the present time. Given that there were 27 m UK households in 2016 (ONS), the question being asked can be expressed as follows:

$${\text{Is}}\;\pounds 350{\text{m}} \times 52\;{\text{weeks}} > \pounds 4300 \times 27{\text{m}}\;{\text{households}}?$$

The left-hand side of the equation amounts to £18.2 billion a year, while the right-hand side amounts to £116.1 billion a year, a figure six times larger. Kahneman (2011) gives the example of 17 × 24 as a calculation that requires the type of effortful System 2 thinking that humans like to avoid. Was the relationship between these two figures intuitively self-evident?
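The arithmetic itself is trivial once written out (figures as quoted above, with 27 million UK households per the ONS):

```python
# Annualise both headline figures for a like-for-like comparison.
leave_annual = 350_000_000 * 52       # £350m a week -> £18.2bn a year
remain_annual = 4_300 * 27_000_000    # £4300 x 27m households -> £116.1bn a year

ratio = remain_annual / leave_annual  # roughly 6.4
```

The Treasury figure is over six times the size of the Leave figure, yet nothing about “£4300 per household per year” versus “£350 m a week” makes that ratio intuitively visible.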

When participants in this study were asked if they remembered these figures, £350 m a week was recalled by ten times as many people (72% vs. 7%). This was unsurprising, as it was used prominently and spent longer in the public eye. Participants were then asked which figure was larger for “the UK as a whole”. Only a third (35%) gave the correct answer, as against 39% who thought the Leave figure was greater and 26% who did not know. Participants were then given the information needed to perform the calculation (the number of UK households) and asked to choose which of four graphs showed the figures in the correct proportions. The correct graph was the least popular choice, picked by only 15%. The majority (39% + 18% = 57%) chose options showing £350 m a week as the larger figure (Fig. 5).

Fig. 5 £350 m a week vs. £4300 per household per year

3.4 Are attitudes towards immigration fixed or does framing matter?

Salience and affect are two of the other elements in the MINDSPACE framework. This study considered how salient immigration was and whether affect had the power to make people feel more positive about it.

When it comes to salience, Cialdini (2016) notes that “the amount of news coverage can make a big difference in the perceived [his italics] significance of an issue among observers as they are exposed to the coverage.” The media’s role is twofold: it draws attention to a subject, signalling that it matters, and it provides examples that come easily to mind. In the last month of the campaign, immigration stories were very prominent, carried on the front page of the Daily Mail on an almost daily basis. This was at a time when 40% of voters overall were making up their minds (Ashcroft 2016). Of this sample, 31% said they decided how to cast their ballots “just before voting”.

Evidence suggests that immigration was a major issue for Leave voters. Before the referendum, the British Election Study team (2016a) asked voters what mattered most to them in deciding how to vote. The answer for Leave voters can be seen in Fig. 6. Viewed rationally, the immigration question was a debate about free movement rights for existing EU citizens, but both Vote Leave and Leave.EU capitalised on the highly salient Syrian refugee crisis. Cummings said this was worth “millions in advertising” (Farrell 2017), while Farage used a 2015 photo of refugees at the border between Croatia and Slovenia for the “Breaking Point” poster unveiled shortly before the vote (Chandler 2016). Then, in the final stretch, the Leave side suggested that Turkey, with its large, predominantly Muslim population, would soon join the EU, irrespective of the UK’s veto on new member states. Cummings (2017) said after the vote that, without the immigration argument, Leave wouldn’t have won.

Fig. 6 Leave voters’ word cloud. Source: British Election Study team 2016

So how did the Remain campaign respond? Guardian journalist Raphael Behr (2016) wrote that their media scripts contained a positive line on immigration that was “barely aired”. This study considered whether positive arguments would have made a difference. Was it the case that voters’ views on immigration were fixed or did it matter how the question was framed? This was the focus of the randomised controlled trial that followed the survey in this experiment.

What elements should a positive argument on immigration contain? Cialdini (2017) says: “People don’t counter-argue stories… If you want to be successful in a post-fact world, you do it by presenting accounts, narratives, stories and images and metaphors”. Matthew Feinberg and Rob Willer (2015) have shown that arguments framed in terms of the moral values of the audience can be persuasive. Polling suggested that Leave voters tend to have socially conservative values (Ashcroft 2016). Finally, Paul Slovic (2007, 2016), discussing ‘psychic numbing’, says humans have a “flawed arithmetic of compassion”. In cases of genocide or disaster, people find it hard to empathise when faced with large numbers, but they do feel empathy when faced with smaller groups or individuals.

The challenge then was to construct a narrative around a smaller, more sympathetic sub-group of immigrants using the moral values held by Leave voters. The test for the experiment was an action rather than a self-reported survey response. After completing the survey, the sample was split into two groups. All participants were told that they had earned £2 from taking part. They were then asked to choose if they wanted to donate 50p to a charity—the Alf Dubs Children’s Fund—or to keep it for themselves. The 50p was framed as an extra to avoid triggering loss aversion. Participants were told the charity helped to find legal routes to bring unaccompanied child migrants to safety in the UK and other countries. Logically, therefore, they could expect that a donation was likely to increase overall immigration to the UK. One group was asked to donate after being presented with a fact, and the other group after being presented with a behaviourally-based narrative employing affect. The two texts can be seen in Fig. 7. The null hypothesis was that voters are rational with fixed and known preferences, so there should be no difference between the effectiveness of the two appeals as both groups would be equally likely (or unlikely) to contribute to a pro-migrant charity.

Fig. 7 The narrative and the fact-based texts

The results (Fig. 8) showed that Leave voters were almost twice as likely to donate when asked with the behaviourally-based narrative (50%) as with the fact-based text (28%). Using a chi-squared test, the result is significant at p < 0.01, and the null hypothesis can be rejected. With the amount rounded up, a donation of £100 was made to the Alf Dubs Children’s Fund.
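The reported significance can be reproduced with a hand-computed Pearson chi-squared test on the 2 × 2 table of donations. The counts below are assumptions for illustration: a roughly even split of the 459 participants between the two arms, with only the 50% and 28% donation rates taken from the text.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Assumed arm sizes (~230 and ~229); donation rates 50% vs. 28%.
narrative_yes, narrative_no = 115, 115   # 115/230 = 50% donated
fact_yes, fact_no = 64, 165              # 64/229 ~ 28% donated

chi2 = chi2_2x2(narrative_yes, narrative_no, fact_yes, fact_no)
# chi2 comfortably exceeds 6.635, the 1% critical value at one degree
# of freedom, consistent with the reported p < 0.01.
```

Even with these approximate counts the statistic is far beyond the 1% threshold, so the conclusion is robust to small differences in the actual arm sizes.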

Fig. 8 Leave voters donate to a pro-migrant charity when asked with a narrative

The implication is that positive arguments on immigration might have had an impact on the vote, if they had been framed for this audience using the appropriate behavioural techniques. The results suggest that further research is needed on what works when it comes to changing attitudes on this subject.

3.5 Turning out the vote

The final behavioural lesson relates to turnout. In 2010, David Nickerson and Todd Rogers showed how behavioural science can help get voters to the polls. The Electoral Commission (2016a, b) embraced a range of behavioural techniques to increase voter registration. Online registration made it easy for voters to register, and a behaviourally-based campaign was run, with advertisements using techniques such as social norms and networks (#RegAFriend), loss aversion (“You Can’t Vote” unless you register) and scarcity (“Time is running out”).

The Commission noted the UK Government’s support. It offered “a huge range of channels and [an] extensive partner and stakeholder network to engage with the public”. It recommended that such cooperation continue in all future polls since the government can reach “millions of people” with no extra expenditure. The end result was that the Commission overshot its targets, receiving more than 2.5 million registrations online.

This study did not test whether participants had seen this campaign, but it did ask about voting history. The vast majority (78%) had never voted in a referendum before; some 9% had never voted in a general election; and 11% did not vote in the 2015 general election.

4 Discussion

Less than a month before the EU referendum, Ryan Coetzee, Director of Strategy for the Remain campaign, sent an email to senior campaign staff. Quoted in Geoffrey Evans’ and Anand Menon’s 2017 book ‘Brexit and British Politics’, he wrote: “Voters are very sceptical about our warnings on the economy. They don’t trust these reports. They don’t trust the Treasury. And many don’t like the messengers.”

This article discusses some of the behavioural factors why this might have been the case. It suggests five lessons for future campaigns which draw on the insights from the MINDSPACE framework.

The first is that campaigners should know their audiences. Levels of knowledge were low, which may help to explain why the default option of the status quo did not prevail in this instance. As Dominic Cummings wrote in 2017: “I am not aware of a single MP or political journalist who understands the single market… The number of people who do is tiny”. Communicators should also be aware that members of the public may not see these issues in the same light as politicians and journalists. A YouGov opinion poll in November 2018 reported that 59% of the population found Brexit boring.

The second is that messengers matter. Messengers used by the Remain campaign were not effective for this audience of Leave voters, who preferred the views of “ordinary people”. However, one messenger did count above all others—the National Health Service. As to the medium, this audience are relatively disengaged and hard to reach by conventional channels, but the majority received news via Facebook, and family and friends were likely to be influential messengers.

The third lesson, on incentives, is that voters are reluctant to do the maths. The relationship between the two most prominent figures (Remain’s £4300 per household per year and Leave’s £350 m a week) was not intuitively easy to grasp, with Leave voters assuming the Remain figure was smaller (it is six times larger). However, loss-framing clearly works, as the ‘Take Back Control’ slogan demonstrates.

On immigration, a highly salient topic at the time, the lesson is that views are not fixed; and behaviourally-based arguments employing affect could have an impact. More research is needed in this area as the UK is far from being the only country where concern over levels of immigration can run high.

The fifth lesson, where the experience of the EU referendum confirmed previous findings, is that behavioural science is effective at raising turnout.

What can communicators learn from this campaign? A general lesson is that a ‘Test, Learn, Adapt’ strategy, based on evidence, and as recommended by the UK’s Behavioural Insights Team (Haynes et al. 2012), could be helpful in designing any future campaigns.

5 Conclusion

In conclusion, Thaler did not predict the outcome of the referendum (although he may have intended the interview as a gentle nudge), but he and Sunstein were right that political decisions are affected by System 1. Even with the conservative assumption that any behavioural impacts would have been small, this was a close race where marginal gains would have made a difference. The final word goes to Daniel Kahneman (2011). He said: “when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution”. The politicians thought they were asking the British people about EU membership. What question did the people think they were answering?