1 Introduction: a world in which evidence no longer persuades

We live in a ‘misinformation age’ (O’Connor and Weatherall 2019), witnessing ‘the declining value of truth as society’s reserve currency’ (D’Ancona 2017) and an ‘unprecedented level of political disinformation that threatens to undermine the very possibility of shared agreement on facts’ (Sargent 2018). When a White House correspondent says ‘There’s no such thing, unfortunately, anymore, as facts’ (Fallows 2016), or the US President’s personal lawyer asserts, as if it’s obvious, that ‘Truth isn’t truth’ (Toobin 2018), this is taken, understandably, as evidence of a cultural change. Many scholars are asking to what extent this change is related to the particular attributes of the ‘digital media ecology’ (Tumber and Waisbord 2021). This article contributes to that discussion by suggesting that new models of truth, science and objectivity are needed that take account of the fundamentally persuasive nature of a digitally-mediated social sphere. Social media, network effects, search engine optimisation, recommender algorithms, instantaneity, widespread usage of mobile devices and data-driven micro-targeting have instrumentalised information of all kinds in terms of its persuasive function. So it is in these terms we should approach truth and falsehood—as a social practice, and as phenomena of collective machine-mediated iteration.

Truth, whether transcendental or factual, has been a perennial subject of discussion in multiple languages, cultures and disciplines (e.g. Blackburn and Simmons 1999), and binary thinking has long been opposed in favour of ideas of truth emergent in, and from, practice (for new materialism approaches to which see e.g. Barad 2007; Paxson and Helmreich 2013, p 169). This article addresses public perceptions and uses of truth, an idea of truth that assumes its force neither from the irrefutability of objective facts, nor from the inevitably relative nature of many perspectival ‘truths’, but from the rational basis assumed to underlie public discussion, the precondition of a ‘public sphere’. In 1962, when Jürgen Habermas coined that phrase, he did not imagine a significant portion of that sphere made up of bots impersonating members of a public, or trolls deliberately repeating centrally-disseminated statements impersonating the views of individuals. But perhaps more importantly, the public ideas of truth that are the foundation of such a ‘public sphere’ often tacitly depend on an unexamined collective assumption that truth and falsehood are mutually exclusive. In a digital era, popular understandings of science and objectivity have unfortunately enabled an online weaponizing of this ‘either/or’ binary idea of truth/falsehood as an ongoing battleground: to create confusion and doubt; as an emotional motivator to trigger outrage and engagement; and as a driver of tribalizing negative cohesion. Encouraging end-users to see truth/falsehood as an act of communication attaching to contexts, rather than a universal or absolute quality or condition attaching to contents, is therefore an important step in reversing some of the negative effects of digital technologies.

1.1 Misinformation: repetition, not invention, as the problem

Cailin O’Connor and James Weatherall’s overview of disinformation in science, The Misinformation Age (2019), introduces the socially- and psychologically-driven processes that have historically favoured the spread of false beliefs. They focus on the part played in science denialism by models of ‘science’ itself: by confirmation bias, conformity bias, cognitive bias and other forms of motivated reasoning. All of these have in common what they call ‘selective sharing’. With twentieth-century advances in mass communications this emphasis on only certain facts became ‘industrial selection’ (public influence campaigns pursued at scale by corporations and political interests). Following Oreskes and Conway (Merchants of Doubt, 2010) and what Michaels and Monforton have described as the ‘manufacture of uncertainty’ (Michaels and Monforton 2005; Michaels 2008), O’Connor and Weatherall discuss how the ‘Tobacco Strategy’ developed in the 1950s—that is, using science to fight science by creating doubt, mostly by selectively repeating minority views that obscure majority consensus—has been successfully applied to campaigns to delay the regulation of other industrial products, especially fossil fuels. ‘Doubt is our product’, as a tobacco industry executive famously said in an internal memo in 1969, ‘since it is the best means of competing with the body of fact that exists in the minds of the public’ (O’Connor and Weatherall 2019, p 95; McIntyre 2018, p 24). Since big tobacco first systematically sought to cast doubt on the fact that smoking is harmful, doubt-creation campaigns, from mercury in fish and acid rain to CFCs and climate change, have sometimes involved the same scientists, PR companies, and financial sponsors in networked ‘subcommunities’ (O’Connor and Weatherall 2018; Kim 2019): for doubt about one issue of scientific trust helps create doubt about all. ‘Victory will be achieved,’ as the American Petroleum Institute summed up its campaign to counter the 1997 Kyoto Protocol’s requirement to reduce CO2 emissions, ‘when average citizens “understand” (recognize) uncertainties in climate science; when recognition of uncertainties becomes part of the “conventional wisdom”’ (McIntyre 2018, p 31).

The science denialism strategies O’Connor and Weatherall wrote about in their 2019 book were promptly exemplified by the COVID pandemic. It is not surprising, as many have pointed out (e.g. the Yale Climate Connections Project: Nuccitelli 2020), that the WHO describes the flood of information surrounding COVID as an ‘infodemic’, a worldwide phenomenon that ‘should be treated as a scientific discipline on a par with understanding the spread of the disease itself, since behaviour change is critical to every pandemic response’ (WHO 2020, 2021). Throughout their 2019 book and wider research O’Connor and Weatherall consistently emphasise that creating false information is of secondary importance to how information of all kinds is disseminated. Selective sharing of disproportionate results, the misrepresentation of scientific consensus, the creation of false oppositions, controlling the terms of a debate, and appeals to collective identity and emotion: the goals of ‘propagandists’ have been achieved by manipulating social processes of ‘sharing’ and ‘spread’ more than by the creation of false content.

1.2 Science as a matter of public communication

A definition of ‘science’ limited to those who produce it ignores the fact that the practical agency of scientific facts depends on how they are shared (Ritchie 2020; Beer 2009). The COVID pandemic has been a vivid illustration of this. For historian of science Jim Secord, the assumed ‘shared objectivity’ on which science previously depended has broken down (Secord 2020). Truth requires shared attitudes about what facts are, and something becomes a truth through the act of being shared. Historian of science Steve Shapin similarly emphasises that the agency of facts depends crucially on how and by whom they are repeated (Shapin 2019). Shapin and others have discussed the reciprocal evolution of scientific discoveries and the ‘news’ in the eighteenth and nineteenth centuries, which depended on the same practical infrastructural developments, whether the increased speed and range of travel of people and goods or the evolution of mass communication technologies (usefully outlined in Gregory and Miller’s 1998 Science in Public: Communication, Culture and Credibility and in Hirsch and Silverstone’s (eds) 1994 Consuming Technologies). On this view, a strong relationship between a crisis of truth and the affordances of the digital era might be expected.

Certainly, the political impacts of some pre-digital expansions of communications technologies (e.g. cheap print in the nineteenth century, mass radio broadcasting and cinema-going in the twentieth) offer some basic pointers as to why the global spread of mobile devices and the monetisation of personal data that occurred in the early 2010s (Martinez 2017; Zuboff 2019) might rapidly lead to a disproportionate state of ‘after-objectivity’, or ‘post-factuality’ (McNair 2017; Könneker 2018, 2021). But two things are distinct about the digital era: first, speed—information is shared so fast that virality becomes a key source of power, and a goal in itself, especially for would-be persuaders; second, the ability of receivers of information to instantly influence that information (Secord 2020). Together these facilitate an unprecedented social environment in which ‘instantaneous, uncritical behaviour’ (Armitage and Vaccari 2021) has serious consequences for the ability of scientific knowledge to impact reality (e.g. vaccine resistance and anti-lockdown activism in the COVID pandemic: see e.g. Murphy 2020). But the difference is not only that end-users themselves influence ideas or drive trends: it is also that their existence as a reified imaginary creates new stakes in controlling the perception of trends. This is why bots and trolls—interveners in trends—are receiving such significant investment from national governments and professional influencers. For example, from May 1st to July 31st 2018, 49% of all Russian-language social media messages about NATO in the Baltic States and Poland were generated by bots, with Russian-originated troll accounts on English-language Twitter reaching a spike of 44% in July 2018, immediately prior to and during Trump’s visit to Europe (NATO Strategic Communication Centre of Excellence 2018). Polling also participates in the perception of trends, becoming a live testing mechanism for an influence campaign’s effectiveness, rather than research about existing views. The extraordinary rise of state-promoted bots and trolls (Jankowicz 2020; Pomerantsev 2019) not only stands as proof that power lies in manipulating the iteration of content via ‘computational propaganda’ (Woolley 2018; Tsyrenzhapova and Woolley 2021) but marks this power as a fundamental affordance of the digital public environment. Manipulating apparent consensus allows a minor or evanescent intervention, in terms of cost and scale, to be instrumentalised to disproportionately significant political effect (O’Connor 2019).
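A back-of-envelope sketch can make this disproportion concrete. The figures below are purely hypothetical (they are not drawn from the NATO StratCom report cited above); the point is only that a small number of cheap, high-frequency automated accounts can supply the majority of the visible ‘conversation’ on a topic:

```python
# Hypothetical illustration (not figures from the NATO StratCom report):
# a small, cheap set of automated accounts posting at high frequency can
# out-produce a much larger population of genuine users.

genuine_accounts = 10_000        # ordinary users interested in the topic
genuine_posts_per_day = 0.2      # most of them post about it rarely

bot_accounts = 300               # a comparatively tiny, low-cost intervention
bot_posts_per_day = 8            # each bot reposts centrally supplied content

genuine_volume = genuine_accounts * genuine_posts_per_day    # 2,000 posts/day
bot_volume = bot_accounts * bot_posts_per_day                # 2,400 posts/day

bot_share = bot_volume / (bot_volume + genuine_volume)
print(f"Bot share of daily messages on the topic: {bot_share:.0%}")   # ~55%
```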

In a world uncontrollably flooded with information, and in which the molds of ‘science’ and objectivity have been used since the 1950s to create doubt, we have therefore moved from problems of false content creation to problems of the misleading sharing of content. Through amplification, framing, labelling, and the creation of unrepresentative or false oppositions, individual instances of data are systematically re- and decontextualised to persuasive ends. In terms of the publication of scientific results, this is a move away from ‘research misconduct’ being the problem—i.e. making incorrect or misleading claims, a matter of content—towards what Mario Biagioli calls ‘post-production misconduct’ (Biagioli 2020), i.e. the prejudicial repetition of claims, a matter of context.

1.3 ‘Post-truth’ and ‘fake news’ as interventions in context, not content

Commentators on post-truth from the disciplines of philosophy, political science and social science have often associated it with the affordances of the digital era, in combination with other changes such as neoliberalism and globalisation (e.g. Farrell 2012; Farkas 2019; Zimdars and McLeod 2020; Benkler et al. 2018; Bucher 2018). Those from journalism, history of science and communications backgrounds also stress the roles of increasing income inequality, the deregulation of media and communications, and post-modernism (McIntyre 2018, pp 34, 59; D’Ancona 2017; Levitin 2017; Rabin-Havt 2016). Some outliers take a misanthropic determinist approach (Davis 2017) or even trace post-truth, as so often with a new theory, back to the ancient Greeks (Fuller 2017). These various explanatory factors all address and imply different subject publics and audiences, and many avoid the difficult but central issue of their own cultural specificity. A useful overview is that of historian and philosopher of science Lee McIntyre, who draws from many of the same late twentieth-century psychology and behavioural science studies as O’Connor and Weatherall, but prefers the term ‘post-truth’ over ‘misinformation’ to capture what for him is a systemic and global problem about the relocation of political power: ‘Truth is being challenged…as a mechanism for asserting political dominance’ (McIntyre 2018, p xiv). Journalist and former editor of The Spectator Matthew D’Ancona, while acknowledging he is speaking only about the UK and the US, makes a case for digital technology—that is, algorithmically-driven advertiser platforms and social media—as ‘the all-important, primary, indispensable engine of Post-Truth’ (D’Ancona 2017, p 49).

Studies of ‘fake news’ cover similar discursive territory to those of post-truth, tending to be more empirical, and tied to specific countries and effects. President Trump claimed he popularised the term and the idea of fake news pre-emptively early in his presidency to make sure that when he was criticised by the press the public would not believe the ‘negative stories’ (Remnick 2020). As distinct from post-truth, fake news is understandably associated specifically with partisanship and political polarisation, as both its driving mechanism and desired effect. It works to deliberately manufacture and sustain public conflict, and is thus overtly concerned with the idea of truth as an aspect of the public environment: with the operational agency of the idea of truth, particularly the ability to replace methods of evaluation with emotional identity politics; or as Matthew D’Ancona puts it, fake news is engaged in changing our ‘attitude to truth, rather than truth itself’ (D’Ancona 2017, p 126; Jankowicz 2020). In Facebook’s own research on ‘fake news’ in April 2017, three out of the four kinds of fake news it distinguishes have to do not with content, but with how content is shared. It acknowledges the creation of individual falsehoods, or ‘false news’ (i.e. articles intentionally misstating facts to arouse emotion); but then goes on to stress the importance of ‘influence operations’ (deliberate dissemination by governments or other organisations with intent to distort political sentiment); ‘false amplifiers’ (coordinated activity by inauthentic accounts, i.e. bots and trolls, with the intent of manipulating political discussion); and ‘disinformation’ (the intentional spreading of manipulated information) (Lanchester 2017, p 6). Fake news is spontaneously shared as much as six times more than evidence-based news (Orlowski 2020) and is thus specifically about ‘circulation’ (Bounegru 2017). The Public Data Lab’s (2021) ‘Field Guide to Fake News’ says it is not about truth values as such but about the control of frames: of attention, labels, debates, narratives. As such it grows out of a long tradition of ‘truth regimes’ crafted by states seeking specific population-wide effects (Zeveleva 2019). If fake news is designed specifically to be repeated, and is dependent on social processes of sharing (Sloman and Fernbach 2017; Tandoc 2021), it is worth noting that conflict over the very idea of truth/falsehood itself encourages that spread.

Studies that approach the problem of truth through the lenses of post-truth and fake news thus broadly draw the same conclusion as O’Connor and Weatherall: that what someone believes is a function of how information of all kinds reaches them (Zimdars and McLeod 2020; Farkas 2019). This is indisputably the principal condition that has changed with the digital environment. Commentators from within the technology industry itself (all those interviewed in Jeff Orlowski’s (2020) The Social Dilemma, for example) not only see post-truth and fake news as a product of the digital era, but see their common driver, the ‘manipulation of human behaviour for profit […] coded into these companies’, as a non-trivial and urgent harm. ‘Infinite scrolling and push notifications keep users constantly engaged; personalised recommendations use data not just to predict but also to influence our actions, turning users into easy prey for advertisers and propagandists’ (Girish 2020). Certainly, since the online-organised attack on the US Capitol on January 6th 2021 and ongoing related attempts to discredit the US election, the correlation between the spread of social media use and the increase in belief in conspiracy theories (e.g. Uscinski 2018; Moshakis 2018; O’Connor and Weatherall 2019, Ch. 3) is impossible to ignore.

Yet some political scientists continue to hold that untraceable, socially-networked, inscrutably-financed digital communications can be almost entirely dismissed from contemporary electoral analysis (Sobolewska and Ford 2020). This is problematic not least because the established categories and oppositions that traditionally feature in such studies—labour versus capital, the state versus the people, left versus right, Labour versus Conservatives, Republicans versus Democrats, etc.—not only no longer necessarily apply in a digital era, but their anachronistic manipulation by would-be persuaders is part of the harm. Not only are electorates persuaded to political action by publicly invisible means, but this persuasion requires no relationship to truth value whatsoever, nor even to a coherent practical outcome, as the storming of the US Capitol suggests. As a political event, this digitally-driven physical assault was a public act of communication, an investment in an ongoing public contest over a political narrative (Donovan et al. 2021).

To control narratives was Trump’s approach to politics from the start: to ‘fight cases in the court of public opinion’, as his White House Communications Director put it, rather than in actual legislative contexts, specifically using a post-social media ‘fragmentation of all media…strategy’ newly available via digital methods (Osnos 2018; see also Roberts et al. 2018; Boczkowski and Papacharissi 2018; Fielitz and Thurston 2019). Strategists promoting false statistics or ‘alternative facts’ on Trump’s behalf expect to defend these false claims based on a media system shaped not by gatekeepers but by end-users. Newt Gingrich made this explicit in early 2017, defending Trump’s false claim that crime in the US nationwide had risen by 47%, when in fact it was at a historic low (McIntyre 2018, p 5). First, Gingrich defended the false claim by saying crime was up in a few places, such as Chicago and Washington (disproportionate selection); then that it was up in the nation’s capital (selection again, with emotive reasoning); then that the ‘average American’ does not think it is down (appeal to an imaginary consensus); then that the claim that crime is down is just his interlocutor’s own personal opinion (personalisation); then, when told again that it is a fact, and the FBI’s own data, the retort that everything Gingrich himself has just said is also a ‘fact’ (tit for tat, or false equivalence) and that his opponent’s ‘theory’ is based on statistics that ignore the ‘lived human reality’ of Gingrich’s facts; and finally that people do not feel it is true that crime is down, so the FBI’s statistics do not match people’s feelings (an emotional consensus outweighs always-suspect evidence). This sequence of defensive arguments depends on imagining an implied ‘public’ as a final arbiter, and assumes that widespread belief is its own proof. The existence of widespread belief is the ‘fact’ Gingrich refers to and defends: in other words, the contest here is about the agency of any claim or statistic to persuade.

But Gingrich’s arguments are also an example of the effective ‘mimicking’ by propagandists and conspiracy theorists of models of scientific reasoning (Poole and Giraud 2019). Gingrich’s faux modelling of argument has a fundamental relationship to conspiracist strategies more widely (Lewandowsky and Cook 2020), the appeal of which is rooted in the offer of an alternate community of belief, and of a special identity as simultaneously victim (dismissed outlier) and savant (access to a special truth). Conspiracy theories can be understood as a realignment of a person’s identity precisely with their social context (Leal and Drochon 2018). The typical conspiracist’s belief that evidence has been systematically manipulated, and their typical selective reinterpretation of the random—seeing pattern where there is none—is a distorted mimicking of scientific reasoning specifically potentialised by the digital environment, in which all information is atomised, decontextualised, and rendered as an emotional prompt and badge of identity.

1.4 Truth vs. falsehood as itself the driver of an emotional ecosystem

If the patterns of sharing attached to information are now, in practical terms, more persuasive than informational content, it is important to analyse the role played in this shift of authority by the perversion of popular models of science, truth, expertise and knowledge more widely. The idea of truth versus falsehood as a binary either/or property or condition attaching to individual instances of content, for example, plays a central role in facilitating the conflict, division and doubt that misinformation, post-truth and fake news are designed to create. Doubt-creation depends on a population taking for granted old categories and concepts and reacting emotionally, without time to notice the wider frame and game. In this newly live environment, correction of piecemeal instances of truth/falsehood, in isolation, without reference to the frame and game, can play into influencers’ distractive agendas (Szalai 2019). The ‘noise’ of emotionally driven veridical claim and counterclaim makes it harder to notice the oppositions or ‘sides’ being implied, the narrative being created, the misrepresentations of apparent consensus being performed, or the norms of fairness or balance being misapplied: in other words, the prejudicial repetition of truth claims. It delays identification of who or what is choosing to repeat something, and why.

It is not only, therefore, that ‘emotions trump facts’ (Goh and Soon 2021), which has always been a potential problem; it is that in a digital era public and personal emotions about truth versus falsity—an opposition most scientists would expect to contextualise in a discussion that carefully distinguishes knowledge from (mere) information—have been harnessed to mechanisms of sharing via technologies whose business model relies precisely on a feeling-driven, iterative ‘either/or’ yes or no: the click, the like, the share. The ‘engine’ that drives post-truth and fake news is self-reflexive: the idea of truth itself is the driver, taking advantage of the in-built emotional and self-generating properties of digitally mediated public communications.

In offering atomised decontextualised information, the digital environment specifically enables a double standard for truth/falsehood, where—depending on the narratives spun around them—a big lie can flourish without political cost, but at the same time an entire political class, issue, institution or movement can have doubt cast on it by a single minor correction, or atypical fact (O’Connor 2019). In such conditions, binary ideas of truth and falsehood as a mutually exclusive ‘either/or’ opposition, and as an immanent property of individual instances of content, specifically enable the problem. They distract attention from asking questions about in whose interests it might be to believe or not believe any particular truth claim. This is the principle used by Trump to question the results of the US 2020 election, for example, focussing on decontextualised, individual instances of disputed vote tallies as pseudo ‘evidence’ while avoiding consideration of wider aggregates of data. This persuasion depends on the capacity for these isolated, atomised instances of information to be taken out of context and be disproportionately read. What is needed is contextualised, multipolar, plural thinking (Mouffe 2013).

Truth is therefore now, in a practical sense, a social rather than an epistemological problem; a social problem caused by a dynamic system whose affordances and business model specifically sustain it as a problem. So a systematic approach to the problem of truth is needed, one that offers the public an overall new way of seeing, or general principle: in the case of this article, the suggestion that in a digitally-mediated culture, power lies in controlling what gets repeated: shared, disseminated, spread, amplified, liked, clicked on. Users need to consider the use-value of truth as an emotive trigger, and to ‘read’ how truth values are being used in context. The content is the context, in an algorithmically-driven information environment.

2 Digital public space as machine-learned behaviourism at scale

The digital environment renders public space, but this is not a ‘public sphere’ (Boyd 2010; Papacharissi 2013; Poletti and Rack 2014). The monetisation of harvested user data (Zuboff 2019) and the sudden spread of mobile phone use since c. 2012 (Martinez 2017) have enabled the microtargeting of individual users based on reciprocal feedback from recommendation engines to create an ongoing, self-learning, interactive system. As Carl Miller describes it, in the new ‘global power grab’, ‘you do not buy space in a particular publication; you buy space in front of a particular kind of person, wherever they happen to go on the Internet’ (Miller 2018). Wikipedia, according to Jaron Lanier, is one of the few places on the internet that is still not tailored to the individual eyeballs looking at it, but is genuinely public information, where ‘we’ all see the same thing. Lanier’s way of dramatizing users’ current relationship with, for example, their Facebook and YouTube feeds is to invite users to imagine dictionary or encyclopaedia search results tailored to them individually, with the goal of shaping their actions to commercial or political ends (Orlowski 2020). For AI pioneer Geoffrey Hinton, the critical change that has occurred in the past two decades is that data sets rather than humans have become the programmers. Data sets are neither responsible nor accountable. As DeepMind Professor of Machine Learning at Cambridge University Neil Lawrence puts it, a ‘bunch of interacting software components’ working in a way ‘nobody fully understands’ are affecting society in ‘dramatic ways’ (Jolin 2020).

Most people still think of the internet and Google as a kind of public library, rather than (as has been the case since c. 2012) a platform whose customers are its advertisers, not its users. Users and their attention are the product; and platforms compete to be the best at delivering this product to advertisers. They are, therefore, designed to compute and then reinforce a user’s assumed preferences, interests and allegiances with attention-grabbing programming that exploits and consequently amplifies societal prejudice (Noble 2018). There is almost no information online that is not, in some way, recommended by an algorithm: the internet is already a highly-regulated environment, just with profit for advertiser platforms as its goal, rather than accountability, traceability, or the public good. Vocabularies of ecosystems, networks, trends, or economies and ecologies of knowledge and attention (Wu 2016; Williams 2018) point to the difficulty of intervening in what technology sells as ‘public’ space but is in fact a new kind of curated privacy (Mahlouly 2013; Sousa et al. 2013; Hrudka 2020, in this volume).
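To make the feedback loop concrete, the sketch below is a deliberately minimal caricature, not any platform’s actual system: the category names, click probabilities and the simple ‘weight by past clicks’ rule are all invented for illustration. It shows how an engagement-maximising recommender that feeds back on users’ clicks can amplify a mild preference into a heavily skewed feed:

```python
import random
from collections import Counter

# Minimal caricature of an engagement-maximising recommender (illustrative only):
# what is shown next is weighted by what was clicked before, so a mild real
# preference is progressively amplified rather than balanced out.

CATEGORIES = ["politics_A", "politics_B", "sport", "science"]
CLICK_PROB = {"politics_A": 0.55, "politics_B": 0.45, "sport": 0.45, "science": 0.45}

clicks = Counter({c: 1 for c in CATEGORIES})   # start near-neutral (smoothing)
shown = Counter()

for _ in range(5000):
    item = random.choices(CATEGORIES, weights=[clicks[c] for c in CATEGORIES])[0]
    shown[item] += 1
    if random.random() < CLICK_PROB[item]:      # simulated user with a mild preference
        clicks[item] += 1

total = sum(shown.values())
for c in CATEGORIES:
    print(f"{c}: {shown[c] / total:.0%} of the feed")
# The mildly preferred category ends up with a disproportionately large share of
# what is shown: the feedback loop amplifies the preference instead of merely
# reflecting it.
```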

As this fracturing of apparent publics into silos and echo-chambers has become more pronounced, so has the invisibility of this very process. Companies that sell digital public relations or electoral influence operations using AI-mediated principles of psychology and behavioural science advertise as a selling point the fact that the public are ignorant of these processes of manipulation (Geoghegan 2020; Levitin 2017). Such fractured publics are no longer seeing a whole picture but typically believe they are. They exist in an attention economy in which previous choice architectures based on comparative evidence, diverse facts or reasoned argument—what Sander van der Linden calls the ‘valley of open-mindedness’ (Van der Linden 2017)—are significantly reduced. There is no longer a single, overarching ‘public sphere’ in which a contest between opposing views can be observed, in which evidence can be fairly balanced, or from which individuals can form independent views. The idea of a public as a unity of diverse views became an anachronism around 2012, with the advent of ongoing individual data harvesting, profiling, and microtargeting. Individuals often have no knowledge of the information being received by those who think differently from them. Yet the persistence of the imaginary of a shared public sphere as a mix of ever-changing independent views itself enhances the ability of formerly extreme or minority beliefs to appear to dominate as consensual. Deliberate influence operations exploit the deniability, appearance of consensus, and false spontaneity (the sense of something collectively ‘happening’) this new medium offers. For Cathy O’Neil, algorithms are social opinions expressed in numerical terms, ‘instrumentalised’ to direct individual preferences as part of a panoply of persuasive agents. Their interactive processes involving reciprocity, feedback, hidden networked effects, liveness and continuity need to be made more transparent (O’Neil 2017).

Key to the power of this system of persuasive agents is the exploitation of their inbuilt tendency to self-propagation. In this, the offer of identity and community in a fractured, globalised world, and the triggering of emotion, especially outrage, each reinforce the other (Lewandowsky and Cook 2020; Martin and McIntyre 1994). As Renée DiResta says, ‘It’s not that highly motivated propagandists haven’t existed before; it’s that the platforms make it possible to spread manipulative narratives with phenomenal ease—and without very much money’ (Orlowski 2020; DiResta 2020). This interactive system is a mechanised, algorithmic expression of the principles of psychology and behavioural science discussed by O’Connor and Weatherall: it is, as Ron Deibert says, ‘behaviourism applied on a mass scale’ (Deibert and Runciman 2021).

Bespoke algorithms can now pre-harvest data on targeted groups before a social media (and therefore untraceable, undeclared, invisible) persuasion campaign begins, using not just falsehoods, but falsehoods micro-tailored to specific target communities. AI-analysed polling is used to live-test and refine persuasion campaigns as they proceed. The ‘split-testing’ built into the design of the UK Leave campaign’s micro-targeted Facebook ads in 2016, for example, followed extensive data-gathering (Geoghegan 2020) that included operations to gain access to desirable but hard-to-reach target populations such as young male non-voters, whose personal contact details were obtained via a football competition with a £50 million prize that was never won (Cadwalladr 2018). The services of international digital PR strategy firms were purchased by the Conservative party to sway the UK 2019 general election and to control negative public perceptions of the government’s handling of the COVID pandemic as the UK suffered one of the worst death rates and sharpest economic contractions in the world (e.g. Topham Guerin, whose slogan is ‘Digital, differently’). The ‘persuasive technology’ of 1997 is now, in 2021, a routine professionalised aspect of electoral politics around the world (Bossetta 2018). But as computers calculate persuasive effects in real time and review them on an ongoing basis, their targets are largely unaware of this, and those who pay for and who stand to benefit from this digitally-driven persuasion remain invisible. These new political advantages accrue specifically from the AI-enhanced control of the iteration of information.
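As a purely illustrative sketch of the live-testing logic described above (not the Leave campaign’s or any firm’s actual tooling; the variant names and click-through rates are invented), a split test can be as simple as serving several message variants evenly at first and then reallocating impressions towards whichever variant the engagement data shows to be ‘winning’:

```python
import random

# Illustrative split-testing sketch (hypothetical variants and response rates):
# serve message variants evenly to begin with, then shift impressions towards
# the best-performing variant while the campaign is still running.

TRUE_RATES = {"fear_frame": 0.030, "identity_frame": 0.050, "economy_frame": 0.020}
# the 'true' click-through rates above are unknown to the campaign; it only
# ever sees the clicks coming back in

impressions = {v: 0 for v in TRUE_RATES}
clicks = {v: 0 for v in TRUE_RATES}

def observed_rate(variant):
    return clicks[variant] / impressions[variant] if impressions[variant] else 0.0

for step in range(20_000):
    if step < 3_000 or random.random() < 0.1:      # even split, then 10% exploration
        variant = random.choice(list(TRUE_RATES))
    else:                                          # exploit the current leader
        variant = max(TRUE_RATES, key=observed_rate)
    impressions[variant] += 1
    if random.random() < TRUE_RATES[variant]:      # simulated audience response
        clicks[variant] += 1

print(impressions)   # the bulk of the budget ends up behind the 'winning' frame
```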

Regulating the technology giants who monopolise this transnational interactive system is thus important, but neither an easy nor a sufficient solution. This live, ongoing interactive digital public space is not necessarily a unitary or geographically-located entity. It varies from country to country, group to group, persuasive purpose to persuasive purpose and time to time. In China, Russia, Spain and some other countries, its affordances have arguably enabled counter-hegemonic movements (Zhang and Garcia Martinez in this volume; Sampedro Blanco and Avidad 2018). But since 2012 remarkably similar negative socio-political effects have emerged across an extraordinary variety of languages, cultures, and nations: notably increased social division, doubt about facts, and the amplification of unrepresentative minority views. What digital publics particularly subject to disinformation campaigns have in common, whether in the Philippines, India, Brazil, the US or the UK, is that each part does not hear or see other parts, and all parts believe themselves to be larger than they are. As John Naughton has said, it is of little practical use to apply universal or shared ideas of truth to a fractured, siloed, unseen network of networks, constantly reshaping itself at speed and in which (for example) around half of all Twitter accounts are bots or trolls: ‘we have to let go of the idea that if people had access to more information, or to correct information, they would think differently’ (Naughton 2018). In a world where facts no longer persuade, the systematic organisation of attention becomes the primary persuasive determinant. So an approach is needed that brings into visibility these common patterns, and understands the interactive human–machine, algorithmically-generated public environment as one above all of persuasion: one that, therefore, places user education and resistance at its centre.

3 Repetition as an angle of approach

Bots and trolls—by definition, paid for by somebody—point to repetition as the significant difference between how information exists in a digital public space and how it existed in earlier analogue persuasive environments. Looking at the interventions bots and trolls make as repeaters designed to suppress or enhance trends underscores that what creates these new persuasive potentials is the interaction between seeders and repeaters of content. Focussing on the effects of digital technologies in isolation can miss the core problem, which is the new potentials offered by the symbiotic relationship between mainstream media and social media. A newspaper, today, is less its print content or its subscriber base than its ability to reciprocally influence and be influenced by vast like-minded networks of digital communities, in processes that are both deliberate and not, and whose effects are typically invisible to their persuasive targets and deracinated from their progenitors. A public shift of attention onto how information is repeated must include information of all kinds, across all media, revealing a complex information environment in which centuries-old methods of influence, protest and opinion-manipulation combine with unprecedented potentials of disproportionate and rapid amplification.

In Danah Boyd’s opening keynote speech at the re:publica conference in Berlin in 2018, ‘How an Algorithmic World Can Be Undermined’, she said it was crucial that journalists, as ‘first stage amplifiers’, and social media users, as ‘second stage amplifiers’ (a casual like, share, post or retweet), become aware of the importance of ‘strategic silence’ (Boyd 2018). It was essential, she said, that the ‘reporting ecosystem’ stop unthinkingly reporting things simply because they have been reported by others. Eric Boehlert’s blogsite ‘Press Run’, founded in February 2020, seeks to educate news producers and news amplifiers about ‘feedback loops’, seed stories, ‘both sides’ journalism, and clickbait.

An example of the two-way interdependence of seeders and first and second stage repeaters is the gun-owner who travelled to Washington, D.C. to open fire on the employees of the Comet ‘Ping Pong’ pizza parlour in 2016. He had heard about the paedophile ring allegedly being run from that location by presidential candidate Hillary Clinton not from the conspiracist social media channels that made this claim, but from the Washington Post’s reporting of the story to debunk it. He googled the pizza parlour, and search engine recommendations based on his other interests (e.g. in guns) took him to the conspiracist websites (O’Connor and Weatherall 2019, pp 149, 168; see also Zimdars and McLeod 2020). Bad actors understand how to seed this reciprocal environment to create news events that can originate in either mainstream or social media, in a responsive cycle that is self-creating, effectively ‘mediatising’ politics (Hallin 2021; see also Tandoc 2021, and Baker and Chadwick 2021, in the same volume). This combination of psychological and structural elements generates unprecedented conditions for the production of false beliefs. Dramatic stories like the Pizzagate shooter are atypical distractions: more important is the promulgation of false beliefs that act as mild disincentives, causing just enough doubt and demoralisation to confuse and suppress opposition. In the latter, the false appearance of widespread consensus (numbers of hits, clicks, likes; the algorithmic recommendation of similar content) is key. Nothing persuades as powerfully as the belief that many others think the same.
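A back-of-envelope calculation, using entirely hypothetical figures, shows why first and second stage amplifiers matter so much more than the original seed, and why ‘strategic silence’ at the first stage is consequential:

```python
# Hypothetical figures for illustration only: how reach is produced by first
# and second stage amplification rather than by the seed itself.

seed_audience = 2_000             # the conspiracist channel's own followers

first_stage_outlets = 5           # mainstream outlets that report the story (even to debunk it)
outlet_audience = 500_000         # average audience per outlet

second_stage_share_rate = 0.02    # fraction of each outlet's audience who reshare
followers_per_sharer = 300        # average further reach per casual share

first_stage_reach = first_stage_outlets * outlet_audience
second_stage_reach = first_stage_reach * second_stage_share_rate * followers_per_sharer

print(f"seed audience:               {seed_audience:,}")
print(f"after first stage reporting: {first_stage_reach:,}")
print(f"after second stage sharing:  {second_stage_reach:,.0f}")
# With these made-up numbers (and ignoring audience overlap), the story reaches
# thousands of times more people than the seed alone; if the first stage outlets
# exercise strategic silence, the cascade largely never starts.
```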

3.1 Iteration as the erasure of perspective: ‘sideism’

Users who believe themselves to be participating in a growing consensus are in fact often being siloed with the like-minded. If they show interest in any one particular issue or opinion, unseen recommendations send them deeper into a networked set of algorithmically-related issues and opinions, without them necessarily knowing this, nor what information others outside their own constantly refined and curated categories are receiving. This creation of overall ‘sides’ around associated political issues or views is a particular feature of the ‘right-wing media ecosystem’ in the US since the 1980s (Roberts et al. 2018), in which every view can be taken as representing a ‘side’. This partly follows from silos being networked into systems of silos to extend what is essentially an offer of collective identity, one that, in order to sustain itself across such difference, becomes a generalised cultural identity defined against an opponent. These are not teams, allegiances or sides chosen periodically from a visibly mixed starting group: they are teams, allegiances or sides constantly being performed—like gender—via the ongoing identifying and labelling of a cultural opponent, opposition, or ‘other’. This offer of identity in a polarised, blame-driven information environment can easily become a totalising proposition (‘if you are not with us, you are against us’), like the psychological dynamics of any cult, or the overt informer/conformer dynamics explored in Brecht’s Fear and Misery of the Third Reich (Brecht [1938–1945] trs. Willett 2012). A specific affordance of an algorithmically-driven information environment is the capacity to numerically evaluate all information, whether by human or machine calculation, as representing one ‘side’ or another (Oates 2021). Such associative ‘sideism’, anchored to emotion, can be a more important persuasive valence than whether or not an individual statement is recognised as true or false (Stillman 2017). Anthropologists, historians and social scientists have discussed how an established collective narrative can trump personal experience that contradicts it. The internet delivers this human tendency at scale. Further, the digital environment erases visible spaces of gathered contrasting arguments and evidence, from whose comparison a contextualised, reasoned belief might be drawn (or indeed, a false belief changed). This causes a ‘collapse of trust’ (D’Ancona 2017, p 36) that is not just of any trust, but specifically the collapse of a wider public trust: trust precisely in a mixed consensus despite and including difference; trust in a larger, diverse, broad-church ‘we’, as John Lanchester insightfully argues (Lanchester 2017, p 5)—and with it, faith in governments and electoral processes (Miller 2018).

The disruption that results from the production of doubt thus not only serves the interests of tobacco, fossil fuels and other industries eager to delay regulation: in a global digital world it clears the playing field for transnational financial interests who would like to avoid or control the irksome regulation, taxation and oversight of nation state legislatures. Electoral democracy as a political process is therefore less at risk of gradual degradation (Ahlstrom-Vij 2021; Silverman et al. 2020) than of the unprecedented opportunity that digitally mediated cultures offer to transnational strategic influencers, especially in two-party first-past-the-post systems like the US and the UK, where marginal election results, easily gamable by cynical or external forces, can have disproportionate impacts (Davis 2017; Jamieson 2018; Nelson 2019; Lepore 2020). Regulation and anti-trust legislation against the monopoly tech platforms, even changes to their business model, will not necessarily change this fundamental conflict of interest between transnational financial stakes and their potential regulation by national governments; nor the underlying problem, which is that many well-funded networked international interests stand to benefit from intranational societal chaos, division, and doubt about matters of fact. By definition, no billionaire can have a relationship to only one nation.

One effect of Reagan’s deregulation of the media in the 1980s was to deliberately encourage ‘sideism’, i.e. the idea that everything favours either an ‘us’ or a ‘them’. With the consequent creation of overtly partisan news production (e.g. Rush Limbaugh, Fox News, Sinclair Media) funded by individuals not necessarily invested in national stability (e.g. the Koch and Barclay brothers, the Mercer and Murdoch families), a new space of permission emerged for asserting bias and thus claiming bias in others. This put traditional news organisations in the position of defending themselves as non-partisan, and under consequent intense pressure to appear objective, often by unwisely attempting to deny their editorial role and constantly demonstrate ‘balance’. With the advent of digital communications the stage was thus already set for a re-orientation of fact and objectivity as effectively a form of opinion. One consequence of the now pervasive use of false balance, false equivalence, and misused norms of fairness—where, for example, the views of climate science denialists, representing less than 1% of all scientists, are given equal air time with those of the other 99%—has been, over time, to render facts, evidence and knowledge themselves as a ‘side’. This is not an easily reversible change. It is tied in with coincident investments in denigrating expertise and demonising the educated as an ‘elite’, all three together creating fertile conditions for conspiracy theories to flourish. Publics can now be persuaded to view scientific consensus as itself politically partisan: a challenge not only to truth but also to basic public safety and security.

As historian Jill Lepore (Lepore 2018, Ch. 11) and others point out, from the invention of the printing press, to newspapers, to the telephone and radio (Sreberny and Torfeh 2014), technologies of reproduction have all had their particular social and political impacts. The cassette tape has been linked to the spread of fundamentalist Islam (Sreberny and Mohammadi 1994) and the mobile phone to the Arab Spring (Aouragh and Alexander 2011), for example. All technological change contributes to socio-economic transformations, after which previous power relations and social dynamics tend to re-establish order (Castells 2000). But none of these earlier technologies existed in the unprecedented conditions of an unregulated, mathematically-modelled machine-learning system based on instantaneous, ongoing, networked psychological profiling. In this environment, in order to seed misinformation it is no longer necessary to marshal US troops to orchestrate a Gulf of Tonkin incident for the news media, or to send CIA agents to physically fake arms shipments from the Sandinistas to El Salvador (Omang and Neier 1985, p 18). All that is needed is a strategic tweet, repeated across worldview-congruent social media networks, with a data sprint on someone’s laptop to make sure the repetition is moving, in real time, as desired.

3.2 When facts lose their power, it is repetition that persuades

The persuasive effects of repetition itself have long been understood. Repetition was one of the three core principles of twentieth-century fascist propaganda, along with confusing the distinction between truth and falsehood, and demonising an ‘other’ to build community identity. All three work powerfully together to persuade; all three are also built-in affordances of the business model of tech platforms. It is not surprising that the works of Orwell, Arendt, and Benjamin are being found newly relevant by commentators on post-truth, fake news and misinformation (Snyder 2017). But key to notice is also that repetition is self-generating: spread begets spread, whether agreeing or disagreeing. A digital environment expands this advantage with unprecedented potentials of scale, number, and speed: hence the need for the new term ‘virality’. It is not iteration as persuasion that is new, but its specific potentialisation by a digital environment.

Repetition has also always had an essential relationship to truth. Like its digital result—numbers of followers, likes or views—it works to reify, to bring something into existence. ‘Repetition is what makes fake news work,’ as Emily Dreyfuss puts it, citing a 2012 Central Washington University study:

‘When people attempt to assess truth they rely on two things: whether the information jibes with their understanding, and whether it feels familiar. But researchers have found that familiarity can trump rationality—so much so, that hearing over and over again that a certain fact is wrong, can have a paradoxical effect. It's so familiar that it starts to feel right.’ (Dreyfuss 2017).

Behavioural scientist Maya Shankar agrees: ‘Debunking a myth often does little more than reinforce it’ (Stillman 2017). Shankar, who did her postdoctoral research at Stanford’s Decision Neuroscience Lab, says that when people hear something multiple times, ‘a listener may not remember if it is true or false….it just feels recognizable’ (Stillman 2017). This means that in a digital environment the bigger the lie the better, as outrage stimulates correction, which stimulates spread. These psychological principles—what some call the ‘illusory truth effect’ (Stafford 2016), or ‘cognitive fluency’ (Fazio 2019)—are built into a digital environment. Shankar considers that Trump’s understanding of the cognitive impacts of repetition lies behind his success: not only the ‘sticky’ slogans (Lock Her Up, Make America Great Again) but the insight that flagrant, patent falsehoods are effective. In a context in which ‘we all live algorithmic lives’ (Bucher 2018), to refute a lie on the internet is, specifically, to boost it; to counter or correct a false claim might paradoxically increase its belief potential. For example, when Barack Obama produced his birth certificate to counter Trump’s lie that he was not born in the United States, the number of people who believed Trump’s lie went up, not down (D’Ancona 2017, p 68). As Stillman puts it: ‘repetition works’.
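A deliberately crude toy model can make this dynamic visible. It is a caricature of the ‘illusory truth effect’ described above, not a fitted psychological model, and all of its numbers are invented: it simply assumes that felt familiarity rises a little with every exposure to a claim, and that a correction is itself a further exposure:

```python
# Toy caricature of the 'illusory truth effect' (invented parameters, not a
# fitted model): every exposure to a claim nudges its felt familiarity upward,
# and a correction, because it restates the claim, is itself an exposure.

familiarity = 0.0

def expose(f, weight=1.0):
    # each exposure increases familiarity, with diminishing returns towards 1.0
    return f + weight * (1.0 - f) * 0.15

events = ["lie", "lie", "correction", "lie", "correction", "correction"]
for event in events:
    # assume a correction re-exposes the claim, with somewhat less force
    familiarity = expose(familiarity, weight=0.6 if event == "correction" else 1.0)
    print(f"{event:>10}: felt familiarity of the claim = {familiarity:.2f}")

# familiarity only ever rises: repetition accumulates, whether the repetition
# asserts the claim or attempts to debunk it
```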

3.3 The digital public sphere as a continuous performance

The offer to favour a ‘side’ made by digital environments and the tendency to self-propagation are ongoing propositions. Unlike previous physical public spaces—the town hall, village green, theatre, town square—which people might choose to gather in at certain moments to either celebrate or protest, digitally-mediated public space is continuously produced. It is constantly being performed, and always evolving. So it matters, collectively, what individuals look at, and for how long; it matters collectively what individuals both do and do not see, read or do. To click or not to click, to like or not to like, are collectively existential decisions. It is sometimes users’ failure to understand the ongoing, live, performative nature of digital public space—in which not only strategic silence but timing is crucial—that allows governments and other persuasive interests to game it professionally and successfully.

Virality, in this sense, has something in common with tradition. For however much cultural traditions consist in or engage with specific works of art or calendar-based ritual, they are above all collective practices, and depend on live, ongoing public engagement to exist. ‘Like applause in an auditorium, a tradition, narrative or label can be started by a single agent or event, but crucially depends on the complicity of others to exist…the starting point of a collusive tradition, story or trend can be located by anyone, anywhere, at any time: what is key is to keep it being repeated, being shared, being spread through collective capacities to recognise in the present: in other words, traditions only exist as performed.’ (Foster 2018). Viral processes on the internet operate in a similar way but with vastly increased speed, scale, and immediacy: there is no moment of rest. Narratives are never not actively and continuously shaping collective identity and vice versa; values are never not in contest. Moments of cusp, when there is the potential for change in public knowledge or attitudes, are critical, as are the immediate after-cusps as that potential ebbs and preferred manipulated versions of events crystallise, or counter-narratives are skilfully defused. Manipulators work assiduously to make sure their desired narrative gains critical mass and predominates, whether by repeated denial, distraction, attrition or centralised messaging at scale. A digital public space persuades one way or another all the time, as a performative condition of digital sociality. Manipulators understand, as McIntyre puts it, describing the transactional nature of ‘Trumpspeak’, that ‘the value of speech is measured exclusively in terms of its effects’ (McIntyre 2018, p 168). That these effects are time-sensitive, temporally-located, and exist in durational series over time is insufficiently understood by publics still being encouraged to imagine an earlier analogue public sphere of town-hall debate and classroom argument.

4 Solutions: public awareness of iterative persuasion effects

End-users are thus simultaneously both the targets and the vectors of iterative persuasion. So it is not surprising that solutions increasingly focus on end-user education. For Roozenbeek and Van der Linden, the focus should be ‘on the common tactics used in the production of misinformation, rather than just the content of a specific persuasion attempt’ (Roozenbeek and van der Linden 2019). In his 2021 book What Do We Know and What Should We Do About Fake News?, Nick Anstead, who argues that fake news is not about individual instances of information but a crisis of institutional authority, legitimacy and trust, calls for a ‘discursive solution’ within whole populations. For Steve Shapin, publics in a digital era need not just to ‘know science’ but to know where science lives: who to recognize as knowledgeable and reliable; who to trust; which institutions to consider as the homes of genuine knowledge. He calls this ‘social knowledge’ (Shapin 2019). Ron Deibert, in his 2020 Reset: Reclaiming the Internet for Civil Society, advocates introducing ‘friction’ to counter the apparent erasure of spatial and temporal social distance caused by digital technologies. Deibert cites a group of Italian scientists who were able to moderate the encouragement of genocidal and other violent IRL events associated with WhatsApp by simply limiting the size of WhatsApp groups and the ease and rapidity with which groups can message other groups.

Because ‘repeated misinformation is more likely to be judged as true’ (Ecker et al. 2010), and because once information becomes viral, or is in the process of becoming viral, it ‘tends to stick’, as those behind the ‘Debunking Handbook’ have argued (Lewandowsky et al. 2020), Cambridge University’s Social Decision-Making Lab has been developing ‘Prebunking’ techniques, designed to help people pre-emptively resist persuasion by misinformation, using the analogy of biological immunization: specifically, exploring methods to develop ‘attitudinal resistance through inoculation’ (Roozenbeek et al. 2021). To this end they have developed an inoculating ‘Fake News Game’ in which players impersonate fake news producers, using six documented techniques commonly used in the production of misinformation: polarisation, invoking emotions, spreading conspiracy theories, trolling people online, deflecting blame by discrediting an opponent, and impersonating others via fake accounts (Fig. 1).

Fig. 1 The Social Decision-Making Lab’s fake news game, shown to ‘inoculate’ players against misinformation

Trialling the game showed that exposure to the methods of persuasion used by manipulators in the digital environment improved participants’ ability to identify and resist misinformation and disinformation, irrespective of education, age, political ideology, and cognitive style (Roozenbeek and van der Linden 2019).

A growing body of scholarship about media literacy and digital literacy is expanding to include data literacy, information literacy, and critical literacy, among other ‘literacies’: a vocabulary that points to the fact that the skills of critical ‘reading’, in the widest sense, are front and centre of such public education initiatives. The 2021 Routledge Companion to Media Disinformation and Populism includes articles on ‘news literacy’, ‘critical information literacy’, and a chapter co-authored by Farida Vis, a pioneer in the field of ‘visual literacy’ (founder of the Visual Social Media Lab in 2014), who advises internationally on social media policy (Faulkner et al. 2021). Such articles all see changing public awareness as a key component of multiple coordinated solutions, not least because psychological tendencies have been built into the business model of a system driven by the financial and political interests it serves, rather than end-user benefits (Hrudka in this volume), especially since the monetisation of user data began in 2012. This article follows that trend in advocating for an epistemological reset that is public-facing, one that addresses the digital environment not as such, but as it is experienced by publics: i.e. the systemic interaction of old and new institutions, methods, ways of seeing, and behaviours. What is new is less change in the mainstream media or other institutions, or the advent of social media, than the way these interact to make a new live, collective, performative system that is more than the sum of its parts and, like any other medium, has its own structural tendencies to misrepresent.

Thinking again about repetition as a whole offers a simple, easy-to-grasp image or model to support public awareness of this new complex public medium in which all our communicative actions are now embedded. ‘Repetition’ encompasses all types of information, media, and sources, in interaction with digital media. It encourages recipients of information to ask why they are receiving it, from whom, what other agendas or values it is related to, whether it is the expression of a person or a system, whether it is pre-emptive or reactive, who else is receiving it, who is likely not to receive it, who else knows they are receiving it, and what it might mean if they choose to click, like, or share it. In other words, seeing the internet through the lens of repetition encourages users to see the frame, and name the game. It encourages users to consider how to actively counter the way mainstream media in interaction with social media complexly persuades (Giraud and Poole 2021). Thinking about iteration as persuasion reminds users of, for example, the need to recontextualise atomised information; to be alert to blame, or the side information is inviting us to ‘pick’ (McIntyre 2018, p 113); to challenge assumed binaries with plural or polysemantic alternatives; to identify the consensus being implied, or targeted; to pay attention to timing and duration, and the wider networks with which an individual statement or position might be associated; and to reconsider or re-introduce the found, and the random. As in a library when we find a book, then pick another at random from a shelf nearby, putting our selected book in context, online users need to consider not only the other metaphorical books nearby, but the shelf, the stack, the wing, the building, and the library itself as a point of view on a world, not as the world.

Above all, thinking about iteration as persuasion prompts users to ask in whose interests it might be to believe one position over another, especially when those funding digital influence campaigns are not patent (their deliberate secrecy is a current matter before the US Supreme Court: Durkee 2021). As philosopher Julian Baggini points out in his 2017 A Short History of Truth, a truth should be evaluated as an aspect of power, via the lens of whose interests it serves. This is not to discount or discredit objectivity, nor to suggest that everything is relative and that there is no such thing as objective truth: it is to say that for the objectivity of any evidence to have power in a digital age it has to be reviewed, always, in the context of the networked financial and political interests who would benefit from persuasion (or not) to that belief. In other words, truth cannot be separated from its social consumption, nor can its stakes be held to depend on purely immanent, independent bases. Truth is, now, a form of agency, and should be evaluated in terms of its use-values, or as Baggini puts it, in terms of ‘who benefits from this version of “truth”: cui bono’ (Baggini 2017, p 81).

5 Conclusion: a culture of iteration

Creating an enemy, controlling the terms of a debate, modelling conflicts and problems as binaries, tit-for-tatting, personalising to discredit or blame, appealing to emotions and to imaginary consensus, the deliberate revival of old news, histories, labels and narratives (‘retraditionalising’, a term some have recently coined to describe this), the decontextualised quotation of speech and statistics, and distraction: such influence strategies are all accomplished by, and through, conditions of volume, reach, penetration, spread, sharing, dissemination, and attention: i.e. repetition. Manipulating those conditions is the pre-eminent opportunity created by a digital world. Iteration takes myriad forms, but its huge power as a blanket tool comes from four common principles. First, reification: it has the inherent psychological impact of creating the impression that something has happened, is happening, or is true. Second, self-propagation: once started by someone or something, others will continue its effects, offering deniability to its originators and the ability to create a sustained and wide impact with minimal intervention and cost. Third, apparent consensus: quantified volumes or numbers of likes, views, followers or clicks are useful not only as monetizable evidence of successful persuasion (since individual purchasing decisions or polling responses cannot be tied causally to exposure to any single piece of information) but because the impression of consensus itself powerfully persuades. Lastly, and perhaps most useful of all to today’s would-be hidden persuaders (Packard 1957), is the fact that when repeated truths or falsehoods are contested, this spreads their reach (big lies work best because they stimulate denial; the main goal of disinformation is to create and sustain conflict and doubt).

Reframing truth not as an independent reference point or standard, but in terms of whose interests are being served by making a true or false claim, however variously believed, is important because there are many truths, such as climate change, that are inconvenient: truths, in other words, whose widespread acceptance would entail huge cost. No one stands to make billions overnight by taking steps to preserve an animal or plant from extinction; but many stand to lose billions overnight if national legislatures restrict their activity to that end, or if changing public opinion reduces their profits (Rabin-Havt 2016). Information is typically indirectly tied to such financial interests; the invisibility of these ties is easily maintained in an iterative digital ecosystem that depends on processes of spread deracinated from their origins (Deibert 2020). Because the parties funding, motivating or promoting a certain world-view cannot always be known or traced, evaluating truth claims in terms of the financial interests that might benefit from their iteration is key. Reconsidering truth as a kind of social practice repositions that consideration as a priority.