Over the past three years, science has received greater attention than usual, due to the global health emergency caused by the spread of the novel coronavirus. From the outset, scientific knowledge offered the main point of reference for policymakers called to define measures for containing the virus and appropriately managing its impact on society. Accordingly, the scientific community came under pressure to rapidly provide reliable answers in a situation characterized by three crucial features: a fundamental lack of knowledge about the new phenomenon, the consequent necessity to base scientific choices on multiple assumptions whose reliability was controversial even among experts, and the urgent need for timely public health measures with potentially enormous socio-economic consequences (Benzi et al., 2021, p. 11).

In this scenario, some observers have noted that phrases such as ‘following the best science’ or ‘according to the available evidence’ have been used as a refrain by political leaders seeking to justify their political decisions (Devlin & Boseley, 2020). This has raised understandable concerns about transparency surrounding the use of scientific outcomes to inform community policies (Abdool-Karim, 2022; Rhodes et al., 2020). Issues of transparency may include, but are not confined to, concerns about what scientific evidence or theories (among those available) are implicated in policy design processes, how politics intertwine with science in informing effective policies, and the role and relative influence of scientists in complex political decision-making processes. Transparency is related to public engagement practices in science, a cornerstone of contemporary scientific endeavor that is key to attaining both epistemic desiderata (in terms of acquiring new knowledge) and social desiderata (in terms of enhancing the governance and fruition of science) (Ivani & Novaes, 2022). Such considerations are important, in that making political decisions purely based on scientific outputs—controversial as these may be–may promote attitudes of either under- or over-confidence in science (see Gaj & Lo Dico, 2021), both of which are detrimental to maintaining a reasonable equilibrium among potentially divergent views and the respective interests of the public, policymakers, and scientists.

At the intersection between these diverse stakeholders, models have played a leading role, entering public life as a palatable, widely discussed, and ‘domesticated’ scientific device (Biggeri & Saltelli, 2021): so much so that the phrase ‘flattening the curve’ has gone viral, becoming a sort of mantra (Montgomery & Engelmann, 2020). The evidence produced by mathematical models has been crucial to promptly designing control measures against the disease, to the extent that national responses have been described as “contingent upon fast-evolving modeling assumptions” (Rhodes & Lancaster, 2020, p. 178). Indeed, in the relative absence of strong converging evidence, models and their projections may afford a sense of control over that which is not completely under control: in this case, the unpredictable evolution of the pandemic (Rhodes et al., 2020). Thanks to COVID-19, therefore, modeling has gone public.

However, the availability of different models producing different—even conflicting—outcomes may have raised issues that are both epistemological (what is the best model to account for the phenomenon at stake?) and practical (what model can best inform safety interventions in specific contexts?) in kind. Such issues have been top of mind for policymakers as well as for the lay public, as both groups have striven to make sense of a novel and disruptive situation mainly based on scientific output. Given that science is commonly expected to offer definite and neat answers (Shanteau, 1987; Hodson et al., 2023), the multiplicity of models may have been seen as a flaw, rather than as a resource, appearing to undermine the credibility of science. To offer a concrete example, the most prominent modelsFootnote 1 used in the early phases of the pandemic to estimate the true number of daily new COVID-19 infections (both to date and projected) in the United States openly differed in terms of the number of infections computed and the pattern of change in the number of infections over time (Giattino, 2020). While this considerable disparity of output is evidence of scientific pluralism, it may nevertheless have generated understandable confusion on the part of the lay public. Indeed, on the one hand, the coexistence of multiple accounts of the same phenomenon may raise the suspicion that epistemic accuracy in science is compromised by the inappropriate intrusion of extra-scientific values (i.e., political, economic, etc.); on the other hand, it might be viewed as hindering the implementation of decisions of public import (Carrier, 2017). In sum, the pluralistic nature of modelling may undermine the epistemic authority of science, thereby calling its trustworthiness into question (Intemann, 2023). Hence, the notion of a model deserves scrutiny, given that it may shed light on how recent, unusually intensive, and sustained exposure to science’s inner workings may have influenced public attitudes towards science.

In this paper, I introduce my line of reasoning by illustrating selected features of models from a philosophical viewpoint, offering some conceptual insights into the nature of models in general. Then, based on my analysis of two instances of modeling, I go on to argue that science produces both inconsistent and perspectival knowledge. I propose that the intertwining of these two aspects is inherent to the scientific endeavor and underpins its pluralistic character. Nonetheless, in the extraordinary context of the recent pandemic, the unusually high level of exposure to scientific disagreement—as conveyed by the coexistence of inconsistent and perspectival models in an environment characterized by uncertainty—has led to pluralism being misunderstood, with the result of disorienting the public. Specifically, the pluralism of models, especially in highly uncertain environments, can easily be mistaken for disunity or fragmentation, which in turn is taken to denote unreliability, thus undermining representations of science among the lay community. In the final section of the paper, I suggest possible general approaches to counteracting distorted perceptions of science, thereby enhancing scientific literacy.

1 Models as Key Scientific Tools During the Pandemic

Since the onset of the pandemic, the usefulness of models for understanding and managing the spread of the disease has been hotly debated within the scientific community, as well as among the general public and in the media. Of course, epidemiology makes use of different kinds of models, each type suited to addressing research questions of a different sort. Arguably, the types most frequently used in the discipline are regression and risk factor modelsFootnote 2 (Benzi et al., 2021). The former rely strongly on a priori assumptions by researchers and focus on the association between independent variables (such as gender, age, previous illnesses) and one or more dependent variables (such as the risk of contracting a target disease). The latter revolve around the concept of risk factor, defined as a cause that is neither necessary nor sufficient but that increases the probability that an event will occur (Benzi et al., 2021, p. 10). However, the lion’s share of the instruments deployed during the pandemic has consisted of mathematical models (Biggeri & Saltelli, 2021; Buchwald et al., 2020), whose main purpose is to offer an account of the transmission dynamics of infectious agents among individuals or given populations.
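
To make the notion of a risk factor more concrete, the following minimal sketch (purely illustrative, with invented prevalence and risk figures) simulates a binary exposure that is neither necessary nor sufficient for a disease but nonetheless raises its probability, which is the kind of association a regression or risk factor model is designed to capture.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical exposure (e.g., a pre-existing illness) present in 30% of people.
exposed = rng.random(n) < 0.30

# Disease probabilities chosen only for illustration: the exposure is neither
# necessary (unexposed people can still fall ill) nor sufficient (most exposed
# people stay healthy), but it raises the risk of the event.
p_disease = np.where(exposed, 0.10, 0.04)
diseased = rng.random(n) < p_disease

risk_exposed = diseased[exposed].mean()
risk_unexposed = diseased[~exposed].mean()

print(f"risk if exposed:   {risk_exposed:.3f}")
print(f"risk if unexposed: {risk_unexposed:.3f}")
print(f"relative risk:     {risk_exposed / risk_unexposed:.2f}")
```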

To simplify somewhat, three kinds of mathematical models have been deployed in the attempt to make sense of the pandemic from different perspectives. First, explanatory models are designed to test causal claims about what has already happened; thus, their purpose is to understand past events. Although they have not been widely drawn on to analyze the dynamics of COVID-19, the few but key examples include models used to assess differences in complications among those who were infected (Adams, 2020). Far more commonly employed are projection models. These offer an account of what we may expect to happen under potential future scenarios, based on hypothesized sets of parameters whose values are selected by researchers (Adams, 2020; Rhodes & Lancaster, 2020). Accordingly, they are designed to inform us about future potential developments in the pandemic. This focus on the future is shared by forecasts, a third type of model that “combine[s] expectations about which conditions are likely to occur with estimates from projection scenarios, in order to estimate which outcomes (…) are likely to actually arise” (Adams, 2020). Understandably, projection models and forecasts have been the most popular and relied upon during the current pandemic (Adiga et al., 2020; Giordano et al., 2020), in that their purpose is to explore future scenarios in a global situation dominated by uncertainty.

The leading role played by epidemiological models within scientific and media debates legitimately raises fascinating and pressing questions about their respective characteristics and relations with reality. First and foremost, it should be noted that such models are undoubtedly heuristic in nature, in that they are designed as attempts to obtain knowledge about a portion of reality that is (as yet) unknown. Typically, this use of the concept of model must be informed by a superordinate theory pertaining to a previously known objectual domain. Because this known domain is analogous to the unknown one, selected elements of the latter will be intentionally structured ad hoc to make the analogies between the two domains explicit. Hence, the model that results from this operation is viewed as a tool for learning about the unknown domain, based on the knowledge offered by the superordinate theory about the known domain (Galvan, 2006). Nevertheless, it should be acknowledged that, in the case of contemporary epidemiological modelling, the relationship between models and theories is not as robust as one might expect. Models appear to be relatively independent of explicitly stated theories (see Frigg, 2020), at least when theories are understood as coherent sets of statements about an objectual domain. Indeed, it often seems that the term ‘model’ might be best understood as equivalent to method, in that it conveys the idea that some kind of logic is applied (Galvan, 2006), based on pragmatic and contingent assumptions. Still, the notion of a model as a representation of an objectual domain is preserved: thus, in the context of the current pandemic, models may be generally understood as representations of complex phenomena (linked to the spread of a virus that displays specific features under certain environmental conditions) based on the application of sets of logical, theoretical, methodological, and pragmatic assumptions.

2 The Pandemic Through the Lens of Different Models: A Case of Pluralism

Both scientific and philosophical communities acknowledge that models are strongly dependent on the assumptions and hypotheses—of whatever kind these may be—put forward by those who devise them (Adiga et al., 2020; Buchwald et al., 2020; Fuller, 2021; Martini, 2021; Özmen et al., 2016; Tolles & Luong, 2020). However, as already laid out in the introductory section, the theoretical underpinnings of models are not always clear, explicitly stated, or available for discussion. This raises issues of transparency. Shining a clearer light on the nature and features of models may not only help scientists to fruitfully debate the theoretical grounds for their work, fostering transparency and critical thinking around the appropriateness of different assumptions; it can also be especially helpful to the public and policymakers, by enabling them to “better understand the assumptions built into the structure of these models and their predictions as well as the limited perspective any epidemic model comprises” (Fuller, 2021, p. 47).

In this section, I lay out two instances of how different sets of assumptions influence the design of models. The examples below offer an apt illustration of scientific pluralism in terms of coexisting models that represent reality in different ways. However, in no way do I intend to reduce the variability and pluralism of science to the examples of modeling below: this would be both naïve and misleading. In relation to the COVID-19 pandemic, specifically, there has been substantial disagreement among health experts on a wide range of issues, such as who is most at risk of being infected by the virus, how dangerous infection is, whether there is adequate access to diagnostic testing, how effective certain treatments are, and how effective personal and public health policies are in preventing the spread of the virus, to mention but a few (Nagler et al., 2020). Rather, what follows should be understood as a pair of paradigmatic cases selected to illustrate the hypothesis that scientific pluralism is not only engendered by the concurrence of mutually inconsistent scientific outcomes, as the first example shows, but is also, and crucially, rooted in the inherently perspectival nature of human knowledge, as the second example shows.

The first instance is drawn from Biggeri and Saltelli (2021). In a recent article, these authors reviewed different ways of modelling excess mortality, an indicator defined as the difference between the actual total number of deaths in a population (all-cause mortality) and the expected number of deaths (in this case, the counterfactual number of deaths that would presumably have been observed had the pandemic not occurred). This statistic has been flagged by experts as a reliable measure of the pandemic’s impact, given that—in the absence of univocal coding rules—it is the only indicator that is not affected by reporting bias. Specifically, the authors reviewed five studies whose aim was to estimate excess mortality during the first wave of the COVID-19 outbreak in Italy, focusing on the early months of 2020. Although these studies deployed different methods, their findings were broadly similar, with estimates of the number of deaths attributable to COVID-19 up to May 2020 falling between 49,000 and 53,000 (Biggeri & Saltelli, 2021). The authors compared these outcomes with findings obtained by one of them in a different study with other colleagues (Biggeri et al., 2020), in which excess mortality in the same months of 2020 had been estimated at 25,700 deaths. This is a substantial difference, considering the relatively convergent results of the other studies reviewed. What was responsible for this gap between estimates? According to Biggeri and Saltelli, it was due to the different assumptions underlying the design of the models. They observed that straightforward comparison with the same months in the preceding years—a procedure shared by all the reviewed studies, except the one by Biggeri and colleagues (2020)—was biased, in that the populations considered (the target 2019–2020 population and the previous three- or four-years’ populations) were not comparable. Indeed, in the winter of 2019–2020, the absence of an influenza epidemic initially led to a reduction in mortality compared to previous years. Mortality then rebounded, given that the surviving population was frailer overall when the COVID-19 outbreak began. In this regard, the authors argued that “small variations around the expected value of mortality should be considered natural and not be counted as excess mortality” (Biggeri & Saltelli, 2021, p. 102). Their conclusion was that the other reviewed studies overrated the impact of COVID-19 because they did not take into account the specific context of the 2019–2020 winter season. Therefore, as stated earlier, the authors attributed the large divergence in outcomes to the different assumptions underlying the design of the models adopted in the different studies. More specifically, the assumptions underpinning the studies critically reviewed by Biggeri and Saltelli (2021) may be summarized as follows:

  • The average number of deaths is stable over the years.

  • The COVID-19 pandemic is an extraordinary event that disrupted the (relative) stability of the death rate.

  • The difference between the previous years’ average death rate and the number of deaths that occurred in the target period is a reliable index of excess mortality.


In contrast, the assumptions grounding the study by Biggeri and colleagues (2020) may be summarized as follows (a schematic numerical illustration of the contrast is sketched after this list):

  • The death rate is not stable over the years.

  • Its variability is due to a range of ordinary events (such as the severity of influenza epidemics).

  • Hence, small variations around the expected value of mortality should be viewed as natural and not be counted as excess mortality.
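
To make the contrast between the two sets of assumptions concrete, the following minimal sketch (with invented figures that do not reproduce the studies discussed here) shows how the choice of baseline, a multi-year average versus a season-adjusted expectation, changes the resulting excess-mortality estimate.

```python
# Purely illustrative sketch of how the two sets of assumptions above lead to
# different excess-mortality estimates. All figures are invented and do not
# reproduce the studies discussed in the text.

observed_deaths = 200_000           # deaths observed in the target period

# Assumption set 1: the expected number of deaths is the multi-year average.
previous_years = [172_000, 175_000, 173_000, 176_000]
expected_avg = sum(previous_years) / len(previous_years)
excess_avg = observed_deaths - expected_avg

# Assumption set 2: the expectation is adjusted for ordinary seasonal events,
# e.g. a mild influenza season that left a frailer surviving population (here
# modeled, hypothetically, as a higher expected baseline for the target period).
seasonal_adjustment = 12_000
expected_adjusted = expected_avg + seasonal_adjustment
excess_adjusted = observed_deaths - expected_adjusted

print(f"excess mortality (multi-year average baseline): {excess_avg:,.0f}")
print(f"excess mortality (season-adjusted baseline):    {excess_adjusted:,.0f}")
```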


In Biggeri and Saltelli’s (2021) view, the pessimistic narratives that had dominated among the general populationFootnote 3 had exerted an influence on scientific investigators, who incorporated this pessimistic bias into the hypotheses and methodological choices required by modelling. Here, the point of interest to us is that if extra-scientific narratives can exert an influence on scientists’ methodology, leading to disparate outcomes, this implies that modelling strongly relies on methodological assumptions, which turn out to be an expression of the specific (more or less implicit) hypotheses drawn on to account for the phenomenon under investigation.

Furthermore, Biggeri and Saltelli’s case study also points up the leading role played by experts. First, as seen above, the nature and accuracy of expert assumptions dictate key methodological decisions, such as choice of model, parametrization, and data selection. Second, the outcomes of the chosen models are also interpreted by the experts: in the case under consideration, as reliable indexes of excess mortality. Thus, we may go so far as to argue that “the starting and end points of modelling (…) are subjective expert judgements” (Martini, 2021, p. 155). Here ‘subjective’ is to be understood as ‘personal’: indeed, although the judgments of experts are contingent, they nonetheless represent scientifically-informed evaluations that draw upon epistemological and methodological foundations.

Another instance of how models vary as a function of divergent theory-ladenFootnote 4 outlooks may be pieced together using insights from the work of Pearce (1996), Fuller (2022), Broadbent (2013), and Schaffer (2005). In this case, we shall focus on the influence of the explicit assumptions that lead to the design and use of one or another kind of model, rather than on extra-scientific factors. Before addressing the core of our argument, it is worth going back a step, to briefly introduce the two kinds of models that will inform our discussion: namely, compartmental and agent-based models (Adams, 2020; Benzi et al., 2021; Tolles & Luong, 2020). These were among the most widely used paradigms during the early phases of the COVID-19 pandemic (Adiga et al., 2020), especially for prediction purposes, but they are markedly diverse in nature. What, in short, is the difference between them?

In compartmental models (CMs), the individuals in a population are partitioned into mutually exclusive groups, or compartments, based on their disease status. This means that each individual can only be in one state, or compartment; for example, in SEIR modelsFootnote 5, the compartments contain susceptible, exposed, infectious, and recovered individuals, respectively. Such models track transitions from one state to another and differences in the size of the compartments (Tolles & Luong, 2020). By contrast, agent-based models (ABMs) apply rules to each individual agent, rather than to groups of individuals within uniform compartments. Single agents are assigned probabilities of acting in specific ways, according to their characteristics (Adams, 2020). These models represent the contacts and health status of each member of a given population at the individual level. Therefore, ABM outcomes should be understood as aggregations of individually modeled processes (Iranzo & Pérez-González, 2021). Table 1 summarizes the main features of both models.

Table 1 Compartmental and agent-based models compared
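
To give a concrete (and deliberately simplified) sense of this contrast, the following sketch first iterates a deterministic SEIR compartmental model and then a toy agent-based counterpart in which each individual is tracked and transitions probabilistically; all parameter values are arbitrary and chosen only for illustration, not drawn from any of the studies cited here.

```python
import numpy as np

# --- Compartmental (SEIR) sketch: the population is partitioned into aggregate
# compartments, updated with simple difference equations (daily time step).
N = 10_000                            # population size
beta, sigma, gamma = 0.3, 0.2, 0.1    # transmission, incubation, recovery rates
S, E, I, R = N - 10.0, 0.0, 10.0, 0.0
for day in range(160):
    new_exposed    = beta * S * I / N
    new_infectious = sigma * E
    new_recovered  = gamma * I
    S -= new_exposed
    E += new_exposed - new_infectious
    I += new_infectious - new_recovered
    R += new_recovered
print(f"SEIR: share of the population ever infected ~ {1 - S / N:.2f}")

# --- Agent-based sketch: each individual carries a state and transitions with
# individual probabilities; population-level curves are aggregations of these
# individually modeled processes.
rng = np.random.default_rng(1)
state = np.zeros(N, dtype=int)        # 0 = S, 1 = E, 2 = I, 3 = R
state[rng.choice(N, size=10, replace=False)] = 2
for day in range(160):
    s_mask, e_mask, i_mask = state == 0, state == 1, state == 2
    p_infect = beta * i_mask.sum() / N   # per-day infection risk for a susceptible
    draws = rng.random(N)
    state[i_mask & (draws < gamma)] = 3      # infectious -> recovered
    state[e_mask & (draws < sigma)] = 2      # exposed -> infectious
    state[s_mask & (draws < p_infect)] = 1   # susceptible -> exposed
print(f"ABM:  share of the population ever infected ~ {(state > 0).mean():.2f}")
```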

Following Pearce (1996), these two kinds of models may be viewed as expressions of two distinct levels of analysis adopted by epidemiologists to analyze pandemics and predict their course: one that targets populations, which is typical of epidemiology as a branch of public health, and one that targets individuals, which is typical of a relatively recent epidemiological approach that is closer to the clinical sciences (Pearce, 1996, pp. 678-9). In light of their differences, it might legitimately be asked whether these kinds of models really target the same phenomenon, or rather, different phenomena. In attempting to answer this question, let us briefly consider the etiology of (viral) diseases.

According to Hucklenbroich (2017), the definition of disease entities—comprising processes that exhibit an onset, a temporal duration, and an outcome in the course of an individual life—essentially depends on the identification of their primary causes. The primary cause, or etiological factor, of a disease entity “is a necessary condition that is specific for this disease entity” (Hucklenbroich, 2017, p. 796, emphasis by the author). From this perspective, diseases are understood to have one cause and one only, according to a monocausal model of disease (Broadbent, 2013, p. 151). Significantly, such an approach does not envisage that a given disease actually has just one cause. Rather, it envisages that it is caused by a single factor that meets certain conditions of necessity and sufficiency. The condition of necessity affirms that cause C is a cause of every instance of disease D. On the other hand, the condition of sufficiency affirms that given certain circumstances, which together are not sufficient to cause D, every occurrence of C causes an instance of D.Footnote 6 In keeping with this line of reasoning, in the case of the present pandemic, COVID-19 may be understood as a constellation of symptoms, or syndrome, whose etiological factor (i.e., primary cause) is a specific pathogen, a virus labeled SARS-CoV-2. From an etiological viewpoint, this means that every person diagnosed with COVID-19 syndrome must satisfy the necessary condition of being infected by the virus SARS-CoV-2. Furthermore, under a given set of circumstances, which are not sufficient to cause COVID-19 syndrome, infection with the viral agent SARS-CoV-2 is sufficient to cause COVID-19 syndrome.
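
On the understanding that this is only a rough formalization of the wording just quoted (the predicates are introduced here purely for illustration), the two conditions can be written as follows, where D(x) says that individual x has disease D, C(x) that the candidate cause is present in x, and B(x) that the relevant background circumstances (not themselves sufficient for D) obtain for x:

```latex
\begin{align*}
\text{Necessity:}\quad   & \forall x\,\bigl(D(x)\rightarrow C(x)\bigr)\\
\text{Sufficiency:}\quad & \forall x\,\bigl((C(x)\land B(x))\rightarrow D(x)\bigr),
  \qquad\text{where } B \text{ alone does not suffice: } \neg\forall x\,\bigl(B(x)\rightarrow D(x)\bigr)
\end{align*}
```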

Now, considering that there is a unique etiological factor for COVID-19, would it be meaningful to distinguish between individual and population levels of inquiry? If so, do these levels refer to different causes or to the same set of causes, albeit approached from different points of view? In etiological terms, as suggested by Fuller (2022), inquiry at the level of populations is nothing more than inquiry based on aggregate measures of diagnosed individual cases. Indeed, the etiology responsible for the population incidence of COVID-19 is the same as that responsible for individual cases of COVID-19. More specifically, the population is made up of individuals, both non-infected and infected by COVID-19; for those who are ill, the necessary condition of being infected by the pathogen (SARS-CoV-2) must be met. Thus, the etiopathogenetic cause in play is the same at both levels, namely, in individuals and in populations. This would also be true when it comes to noncommunicable diseases, which are characterized by a more heterogeneous and diverse set of etiological factors than are communicable ones. In this case, etiological factors at the level of individuals may be re-formulated in different terms when it comes to the population, creating an apparent difference between causes affecting populations and causes affecting individuals. For example, when a public health problem such as obesity is studied in individual terms (e.g., food consumption habits), as opposed to in population terms (e.g., quality of commercial food, life conditions in industrialized countries), then the sources of the problem and its solutions may understandably be classified as different. Even so, the balance of argument favors the assumption that population and individual levels of analysis actually target the same real-world phenomenon, namely the etiological factor(s) of a disease, albeit from different standpoints. Indeed, in the more complex case of noncommunicable diseases, it is possible to describe a conjunction of causes responsible for each individual case of disease. Each conjunction may or may not share the same set of causes, but it is possible, at least in principle, to specify a complete list of etiological factors for a specific disease. Here too, the causes of population incidence may be understood as a subset of the list of individual etiological factors.

Given that the etiology of a disease is essentially the same whether we are considering individual cases or populations, we may now ask whether models informed by individual versus population levels of analysis differ. Individual and population explanations both seek a cause that explains the contrast between individuals or populations displaying a given set of symptoms or a given phenomenon and others (individuals or populations) not displaying those same symptoms or that phenomenon. Thus, they both call for contrastive causal explanations (Schaffer, 2005; Fuller, 2022). This means that, when we are dealing with issues concerning individuals as opposed to populations, the contrast classes selected as salient will be different: in the first case, the contrast is among individuals and pertains to selected features ascribable to this level; in the second case, it is among populations and pertains to selected characteristics ascribable to this other level. For example, the questions ‘Why do some individuals contract long COVID-19?’ and ‘Why do some populations contract COVID-19 much more than others?’ are different, in that they require different kinds of answers. More precisely, these questions imply the choice of given contexts of inquiry (i.e., individual or population levels), from which to select the set of relevant alternatives, among which the salient causal factor can be found. In other words, the context of inquiry—pinpointed by a question formulated within a certain interest- and purpose-laden line of inquiry—acts as a meaningful background offering an objective basis for the selection of the cause: “[w]hat is capricious is the context. Speakers in different contexts, employing different contrasts, may disagree about ‘the cause’. What is predictable is selection given the context” (Schaffer, 2005, p. 344, added emphasis). In this sense, the contrastive approach to causality considers the selection of the cause, among many available, “as an inseparable aspect of our causal concept” (Schaffer, 2005, p. 345). Furthermore, the selection of control subjects, that is to say, healthy/non-problematic cases, is as much a part of the definition of the disease as the selection of causes (Broadbent, 2013, p. 159). In other words, defining who counts as ill because affected by certain conditions necessarily implies defining the criteria on the basis of which someone may be classed as not ill, and so as not affected by the condition of interest. Hence, this outlook accounts for the possible coexistence of different causal selections within distinct contexts of inquiry, which in turn are bound up with the epistemic interests of those formulating the questions, namely, the researchers.
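
Schaffer’s contrastive format may be glossed, loosely and only for illustration, as a four-place schema in which both the cause and the effect are specified against foils supplied by the context of inquiry:

```latex
% Loose gloss of the contrastive schema (after Schaffer, 2005):
\[
  \underbrace{c \ \text{rather than}\ c^{*}}_{\text{causal contrast}}
  \ \text{causes}\
  \underbrace{e \ \text{rather than}\ e^{*}}_{\text{effect contrast}}
\]
% The corresponding why-question has the form: why does x (an individual, or a
% population) exhibit P rather than P*? The foils c*, e*, and P* are fixed by
% the context of inquiry, not by the etiology alone.
```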

Accordingly, it may be convincingly argued that contrastive causal explanations for the population incidence of a disease differ from contrastive causal explanations for individual cases (Fuller, 2022, p. 19); in fact, different questions are associated with different and unique classes of appropriate and possible answers (Lloyd, 2015). Coming back to our two sample questions concerning COVID-19, they do indeed require different answers: concerning explanatory differences among individuals (i.e., inquiry into infections and their individual consequences) and explanatory differences among populations (i.e., inquiry into the epidemiological dynamics of population incidence), respectively. The salient contrast classes do indeed differ between the two cases. In the first case, the relevant contrast class includes individuals who did not contract long COVID-19, despite having had the acute form of the disease; in the second case, it includes those populations whose infection rates were significantly lower than those of the target populations of interest. Furthermore, these contrastive explanations seem to appeal to different kinds of characteristics: the former invokes individual characteristics as relevant to explaining the causal contrast, while the latter invokes features of populations. Here again, we should note that these differing types of explanation are underpinned by specific epistemic outlooks that allow us to grasp certain aspects of reality, whose salience is assumed in interest- and purpose-laden research hypotheses. They are not fully constrained by the real-world etiology of the disease, which—all the more so when it comes to communicable diseases—is given and held to be invariant across levels.

In light of our discussion thus far, it seems reasonable to propose that in the context of the COVID-19 pandemic, both individual and population outlooks—as expressed by CMs and ABMs, respectively—are focused on the same virological phenomenon (i.e., SARS-CoV-2 as the etiological factor causing the COVID-19 syndrome) as it is understood from different perspectives (e.g., individual infections vs. population incidence). In other words, we appear to be dealing with a case of different epistemic approaches to the same perspective-independent object (Fuller, 2022): indeed, CMs’ and ABMs’ different degrees of abstraction facilitate the unraveling of different aspects of reality.Footnote 7 In sum, the instance we have just analyzed suggests that the adoption of different perspectives (e.g., individual vs. population)—driven by specific epistemic and practical interests in the same phenomenon (e.g., COVID-19 syndrome)—facilitates the uncovering of different aspects of reality (e.g., the dynamics of individual infection vs. the dynamics of viral spread among populations), ultimately giving rise to different methodological choices and, accordingly, the design and use of different specific tools (e.g., CMs vs. ABMs).

3 Inconsistent or Perspectival Models?

The two examples just outlined may appear to be comparable. However, this is not the case. In relation to the case put forward by Biggeri and Saltelli (2021), the model supported by the authors rests on radically different assumptions about excess mortality compared to the assumptions uniformly shared by all the other critically reviewed models (see Morrison, 2011, p. 347). To put it simply, the model proposed by Biggeri and Saltelli describes the same phenomenon (i.e., excess mortality) in a way that contradicts—that is to say, is discordant with—the description offered by the other, competing models. In other words, supporting this model would mean rejecting all the others, and vice versa, given that they are rooted in reciprocally incompatible visions (i.e., assumptions) about the object of interest. They are conflicting, in that the correctness of one representation excludes the correctness of the other: both cannot be correct at the same time (Hauswald, 2021).

Things are different when it comes to comparing compartmental vs. agent-based models. In this case, it is clear that different aspects of the same phenomenon may be treated differently—for instance, stretched, omitted, or idealized compared to others (Rueger, 2005; see also Dupré, 1993)—depending on the epistemic and practical purposes driving the modelers’ work. For example, in population-based modeling, the behavior of single agents is abstracted and reduced to membership of large homogeneous classes, while in agent-based modeling the approach is more fine-grained. Such processes of abstraction, idealization, omission, or stretching of properties are typical of modeling and are responsible for the reciprocal diversity among models. Hence, in this case, it is evident that the two kinds of models target different levels of the same system (Rueger, 2005), namely, the levels at which individual vs. population factors come into play in the pandemic. More precisely, it seems that the two models target different units of analysis: different epistemic interests direct the modelers’ attention toward diverse research questions, which require distinct classes of answers. The diversity between compartmental and agent-based models turns out to be qualitatively different from that among the models analyzed by Biggeri and Saltelli (2021). More specifically, the former diversity is underpinned by compatible views on the fundamental nature of the pandemic, albeit viewed through lenses that obscure or highlight a range of different factors (Morrison, 2011; Rueger, 2005). In other words, differences between models are due to the fact that they have different targets; thus, at the theoretical level, they concern different ideal systems. Yet, at the level of concrete particulars, the differences between them shed light on different aspects of the same phenomenon, suggesting that they are congruous and integrable (see Mitchell, 2002). They are different, but not in conflict with one another: while one model targets certain aspects of the object, the other remains silent about the aspects targeted by the former, and vice versa (Hauswald, 2021). It follows that, despite the differences between them, compartmental and agent-based models are mutually coherent and compatible, in that they offer different views of the same landscape, namely, the object of interest. On the contrary, the models analyzed by Biggeri and Saltelli (2021) offer different views of different landscapes, turning out to be mutually inconsistent.

It follows that compartmental and agent-based models bear a relationship with reality that is perspectival in nature. They may thus be understood as expressing epistemic perspectives that are focused on certain levels or aspects of the object of interest, yielding knowledge that is similarly perspectival in nature. Accordingly, “models do not deliver incompatible images of the same target system. Rather, they deliver only partial and perspectival images” (Massimi, 2018, p. 168, italics in the original). In this sense, models express partial points of view on reality, in that their underlying theoretical dimension—although it may be implicit—represents the intentional, inherently limited outlook from which the process of knowing unfolds. Notably, the adjective ‘perspectival’ refers here to knowledge, rather than to facts. Specifically, this approach assumes that perspective-independent facts may only be known “within the (epistemic) limits afforded by rival scientific model(s)” (Massimi, 2018, p. 171). Thus, it entails the impossibility of an objective, perspective-independent epistemic vantage point, without denying the existence of a perspective-independent world.Footnote 8

To return briefly to the example in Giattino (2020) mentioned in the Introduction, the disagreement among some of the models used early in the pandemic to estimate the true number of daily new infections may be viewed as a genuine instance of perspectival pluralism, rather than as a threat to the credibility of science. Indeed, these models yielded different estimates because they all diverged from one another to some degree in terms of what they were intended to be used for, how they worked, the data they were based on, and the underlying assumptions. Nonetheless, they targeted the same reality: the adoption of different perspectives could either obscure or draw attention to different aspects of the phenomenon under investigation, but without rejecting its unitary nature in principle.

The fact that these models are perspectival makes it clear that they are also representational in nature; in this regard, the analogy with maps proposed by Giere (2006) is of great help in bringing to light the features of models. Like maps, models are partial, in that they only capture certain features of the object at stake. Their accuracy is necessarily limited, depending on the choices implemented in their design and the underlying assumptions: “the only perfect map of a territory would be the territory itself, which would no longer be a map at all” (Giere, 2006, p. 73). Furthermore, the relations, as well as the degree, of similarity between a model and its target system depend on the interests pursued and the assumptions espoused by its designers. This means that models are strongly interest-relative, insofar as their accuracy and the inclusion/exclusion of features of the target system depend on the epistemic purposes for which they are designed. Finally, models are subject to pressure from social and cultural influences or, more generally, from extra-scientific factors, during both the design and data interpretation processes. As a consequence, the complexity that characterizes the object of interest can only be partially captured, and only by taking on viewpoints to which the resulting knowledge is tightly anchored and bound. Thus, the properties of the target system that are revealed via the adoption of a given model are necessarily accessed and grasped in a relational fashion, which depends on the particular standpoint associated with that model: “what appears as an intrinsic property of the system is actually a perspectival view of the intrinsic property, hence relational” (Rueger, 2005, p. 14).

4 Scientific Pluralism and How It Can Be Misunderstood

Although it is by no means the only instance, the case of models—whether inconsistent or truly perspectival—makes it clear that science is pluralistic in nature. This has been particularly evident in the extraordinary public health scenario that we are still dealing with, wherein the pandemic has set “a new standard for the speed at which new scientific information was being provided publicly (…)” (Abdool Karim, 2022, p. 283). In a sense, the pandemic may be viewed as an interesting laboratory for closely observing some of the dynamics that come into play in the relationship between science and the wider community, given that never has science’s plurality of voices been so visible on such a broad scale.

In light of the above considerations, I suggest that the usual dynamics between science and the broader community have recently suffered an upheaval. Under normal conditions, scientists in a specific domain keep the internal debate within their community alive, given that debate is acknowledged to be the main source of progress and self-correction for science. Indeed, the game of science encourages its players, namely scientists, to hold different positions, and envisages—even welcomes—the coexistence of diverging views (Carrier, 2017; Hauswald, 2021). Science is open, revisable, and dynamic thanks to the incessant dialectics among divergent ideas. Nevertheless, as time goes by, disputes normally tend to become smoothed out in the eyes of the public, via an inevitable process of simplification and stabilization that lends scientific outcomes the appearance of unanimity. “Distance lends enchantment” (Collins & Evans, 2002, p. 246), as the saying goes: the more one contemplates science from a distance, the more unanimous it appears. Despite this perception from the outside, scientific communities continue to dispute issues of interest, though usually far from the eyes of the public and the media (Carrier, 2017). Hence, it might be argued that the scientific community and the general public have differential degrees of awareness concerning the multivocalness of science and that, consequently, the latter can be less tolerant of the partial and provisional nature of scientific outcomes (Kruglanski & Webster, 1996; Hodson et al., 2023).Footnote 9

It might be hypothesized that the standard pathway just illustrated, necessarily in somewhat idealized terms, has been disrupted by the dynamics that came into play during the pandemic. Indeed, disputes among scientists—which are ordinarily confined to academia, at a certain remove from the public debate—have unfolded before the public eye (Carrier, 2017). Specifically, the scientific community engaged in the study of COVID-19Footnote 10 has not been able, understandably, to swiftly deliver relatively stable and coherent outcomes (Evans, 2022), although these have been more sought after than ever given the rapid and unrelenting stream of scientific challenges posed by the pandemic (Nagler et al., 2020). As a novel source of risk, the spread of COVID-19 has been especially strongly associated with feelings of uncertainty, due to public exposure to topics usually addressed within academia, the perception of constantly evolving science in light of new evidence, and the production of disparate narratives surrounding the emergency (Abdool Karim, 2022; Capurro et al., 2021; Cinelli et al., 2020; Michelle et al., 2018; Miller, 2022). Thus, uncertainty represented a major challenge for public communications relating to pandemics (Davis, 2019). Indeed, when the public engages with scientific information, they usually expect experts to be precise and confident (Shanteau, 1987), and typically seek neatly positive or negative answers (Hodson et al., 2023). However, under ever-changing circumstances, news coverage tends to be confusing because it carries a vast array of (sometimes) contradictory expert opinions, with the aim of providing an accurate account of the evolving situation (see Carrier, 2017). This tendency was particularly noticeable with regard to modelling: media accounts fostered extremely polarized representations of epidemiological models, alternately depicted as purveyors of hope or as sources of unreliability and confusion (Capurro et al., 2021). Plausibly, the coexistence of inconsistent and perspectival models as outlined above may have played a role in generating and exacerbating polarized perceptions of scientific outcomes during the pandemic.

In an attempt to break down the components of uncertainty associated with the pandemic, Gustafson and Rice (2020) proposed four types of uncertainty. Each is connected to (positive or negative) effects on the perceived credibility of, or the intention to follow, scientific messaging. According to these authors, the type of uncertainty that takes the form of disaccord among stakeholders (scientists first and foremost) or within a salient body of evidence is that most clearly associated with negative effects (Gustafson & Rice, 2020). In other words, uncertainty driven by perceptions of collective disagreement among scientists is strongly associated with negative attitudes towards science and its recommendations. “Consensus uncertainty”—as the authors labeled this particular brand of uncertainty—describes the public response to exposure to expert disagreement and disputes among scientists (Dieckmann et al., 2017; Dieckmann & Johnson, 2019). From this perspective, we might argue that narratives on the use of divergent models by different groups of scientists may be framed both as an expression of expert disagreement and as a source of consensus uncertainty. Indeed, as we have seen, (a) models are usually based on assumptions made by experts; hence, (b) they are endorsed by scientists who share the same sets of assumptions. Therefore, (c) a plurality of models reflects disagreement among groups of experts endorsing different sets of assumptions and, thus, fosters consensus uncertainty.

My contention here is that the co-occurrence of inconsistent and perspectival models and the consequent widespread uncertainty related to expert disagreement may have generated misunderstandings and misinterpretations, especially under certain conditions. In particular, concurrent models associated with uncertainty may be seen by the public as signs of unreliability, rather than as ordinary scientific efforts to provide the best evidence via diverse (provisional) accounts in competition with one another. More specifically, within an environment dominated by uncertainty, a plurality of models may be envisioned as a symptom of disunity or fragmentation, that is to say, as a sign of flawed science: understandably, a range of disparate or contradictory propositions from the scientific community may not come across positively to a wider, non-specialist audience (Carrier, 2017). This may be even more the case for those with less education and/or lower cognitive ability, who tend to interpret expert disagreement as due to incompetence rather than to the inherent complexity of the world (Dieckmann et al., 2017)Footnote 11, assuming that science is (or should be) “objective and certain” (p. 34). Similarly, those who rely on affect (i.e., emotional response) or tradition (i.e., confidence in past experiences) heuristics when confronted with fast-changing scientific information—which is often the case with modeling approaches—have been found to be more likely to react negatively to evolving science (Hodson et al., 2023). In addition, Rothmund and colleagues (2022) found that individuals who struggle to keep up with evolving science claims were characterized by low levels of cognitive ability and education, a high degree of uncertainty in distinguishing between true and false claims, and high social media intake. According to the authors, this pattern was associated with discrepancies between public and expert beliefs about the pandemic and about the scientifically-informed assessment of health-related risks (Rothmund et al., 2022). Finally, it has been suggested that even a modest amount of scientific dissent can be detrimental to public support for environmental policies (Aklin & Urpelainen, 2014). Given that environmental issues and pandemics are both large-scale phenomena whose manifestations may be (erroneously) attributed to proximal, common, and familiar causes (e.g., normal weather variability for the former, previously known etiopathogenetic factors for the latter), it is plausible to hypothesize that the negative impact of scientific disagreement identified by Aklin and Urpelainen might also apply to the case of pandemics. These remarks invite further inquiry into the complex interrelationship between a plurality of models, uncertainty, and public reliance on scientific outcomes and recommendations. Nevertheless, they suggest that pluralism in modeling can have a detrimental effect when disagreements and disputes are aired in a social arena dominated by uncertainty (see Carrier, 2017).

To prevent fostering distorted attitudes towards science, we should not simply refine our communication of scientific outcomes, as this would do little to reduce uncertainty or clarify disagreements among experts. Rather, we need to disseminate key aspects of science’s inner workings, which are likely almost entirely unknown to most of the lay community (Glick et al., 2021; Braund, 2021). In this regard, I endorse the proposals of Weisberg and colleagues (2021) and Intemann (2023) to address shortfalls in science literacy by disseminating knowledge of general scientific principles, processes, and practices, with the aim of raising epistemological awareness among lay audiences, rather than addressing the problem by teaching the content of specific target theories (e.g., evolution theory, climate change, or COVID-19 epidemiology). This is a task that should be embraced by scientists, popularizers, philosophers, and all of those with a role in public health management. Indeed, familiarity with the workings of science is a strong predictor of science acceptance, although opposition to science is often associated with identity factors such as religious or political affiliation (Weisberg et al., 2021).

It is not easy to pinpoint what aspects of scientific inquiry should be shared, given that the points to be emphasized and the way that they may best be presented will depend on the specific setting, beneficiaries, objectives in play, and on the optimal balance between these potentially competing factors (see Intemann, 2023). While a detailed proposal in this regard falls outside the scope of the present article, we may—in conclusion—sketch out some broad areas of focus with specific attention to modeling.

First, the pluralistic character of science should be brought to light and analyzed as a feature rather than as a flaw. This plurality is not to be equated with arbitrariness, in that science should be presented as inherently pluralistic: it should be clearly communicated that research communities are likely to split up into many competing factions, in order to attack an issue from different angles, thus increasing our chances of understanding it (Carrier, 2017). However, it should also be acknowledged that this is one of the main reasons why many practical problems—especially new ones (such as the outbreak of the pandemic)—cannot be swiftly solved by drawing on the currently available system of knowledge. Rather than a sign of corruption and unreliability, pluralism should be presented as an inescapable feature of science, understood as an endeavor that seeks to unpack the intricacies of the world, but whose outcomes are inherently limited by the restricted epistemic capacities of human beings and by pragmatic constraints. From this outlook, both inconsistent and perspectival outcomes are equally to be expected. Second, the probabilistic and provisional nature of science must be presented as one of the upshots of its fallible and perspectival nature. Again, these characteristics do not undermine the value of the scientific endeavor per se; rather, they represent an inescapable hallmark of human knowledge, of which science is one of the most sophisticated forms. The provisional nature of models is grounded in their representational character, according to which their parameters are designed and selected on the basis of heuristic and pragmatic considerations. This should shed some light on the necessary partiality of any model, understood as an idealized—and consequently incomplete—representation of an aspect of reality. In this regard, it is important to set the expectation from the outset that the model will likely change, as the evidence or practical goals evolve over time. Third, it must be made clear that scientific outcomes reflect the perspectives of the experts. This is not necessarily related to the interference of unwanted extra-scientific factors, such as bias, dishonesty, or incompetence on the part of the scientists. Rather, all human knowledge is influenced by the individual epistemic outlooks of the knowers. The representational nature of models makes this explicit: because they are grounded in the assumptions adopted by their designers, they are inherently partial, interest-relative, and value-laden, just like any other form of human knowledge.

In addition, communication should anticipate—rather than conceal—perceived conflict between divergent stances, explicitly acknowledging uncertainty and shifts as inherent features of science (Nagler et al., 2020, p. 15). There should be open discussion of why scientists may disagree (Dieckmann & Johnson, 2019; Capurro et al., 2021), using a “reasoned transparency” approach. This implies deploying research-informed communication aimed at generating expectancy heuristics and, thus, at priming the public to expect change, on the understanding that “uncertainty is not a limitation, but a strength of the scientific process” (Hodson et al., 2023, p. 437). To help those with a lower level of education and/or cognitive ability cope with uncertainty, communication about key science topics should be as simple and coherent as possible, while not shying away from explaining the inherently ever-changing nature of science. With reference to models, a suitable presentation of their outcomes should specify, for example, their rationales and grounding assumptions, the pragmatic constraints limiting their design, and their potentially changing validity in light of new evidence or use in different contexts.

Finally, pluralism poses another issue that risks undermining the credibility of science (Carrier, 2017): its potential to hinder the solution of practical problems (Intemann, 2023). What should we do to effectively address the situation at stake? What scientific outputs, among the many available, might better inform policy action? Given that this is a serious problem, the following points should be emphasized in public communications. First, the adoption of different angles of inquiry does not necessarily mean that basic assumptions about the phenomenon are in dispute: with a view to tempering the perception of fragmentation and arbitrariness, it should be stressed that different scientific models may still share some key assumptions. Second, it should be made clear that scientific disagreement can be generative, including from a practical perspective: indeed, conflict may put scientific outputs to the test by probing their practical relevance across different contexts. Such competition should ultimately reduce pluralism, which may therefore be viewed as transient, although unavoidable. Furthermore, it is important to distinguish between the ‘technical’ and ‘political’ phases of decision-making: the former is informed by the values and norms of scientific rationality, while the latter is informed by values and norms derived from democratic principles (Evans, 2022). Within this framework, the plural or incomplete character of scientific output should not be viewed as the sole determinant of practical policies; rather, it should be presented as one factor—although a highly significant one—to be considered in combination with others in arriving at complex decisions of public import.

The overall point is to make the process of perspective-taking explicit and, insofar as possible, to reduce opacity surrounding the perspectival (in a broad sense), provisional, and interest-laden character of scientific offerings. This means not only emphasizing the partial character of knowledge. Rather, it especially entails making as explicit as possible the specific—and necessarily incomplete—assumptions underpinning models, so as to clearly link knowledge with its premises and, where relevant, to expose potential contradictions.

5 Concluding Remarks

In this paper, I have noted the prominent use of epidemiological models as both heuristic and practical devices during the COVID-19 pandemic. Models like these are essentially representations of complex phenomena that draw on sets of logical, theoretical, methodological, and pragmatic assumptions. The practice of modeling varies as a function of the epistemic and non-epistemic interests that drive it; in this respect, it reflects a primary feature of science, namely pluralism. Different models may produce both reciprocally inconsistent outcomes (grounded in incompatible assumptions) and perspectival outcomes (grounded in compatible assumptions) that shed light on the same objects from different epistemic angles. In relation to the general public’s recent overexposure to the dynamics of science, I have argued here that the coexistence of different models—whether inconsistent or truly perspectival—in an environment dominated by uncertainty may not be recognized by the public as an inherent feature of science. Rather, especially in pressurized situations such as those that abounded during the pandemic, the co-occurrence of different (inconsistent and perspectival) models may be read as a sign of disunity or fragmentation, that is to say, of unreliability. This may hold especially true under certain conditions, which are worthy of further investigation, such as fast-evolving and/or contradictory communications on the part of the scientific and media communities and, on the part of the public, high social media intake, low levels of education/cognitive ability, heavy reliance on affect and tradition heuristics, and difficulty in distinguishing between true and false claims. Finally, I have suggested general strategies for counteracting distorted attitudes toward science, which entail improving science literacy rather than simply refining science communications.

In conclusion, it may be argued that the pluralistic nature of science contains both a paradox and a potential pitfall. The pitfall of pluralism concerns the possibility—indeed, the likelihood—that science will produce outcomes whose inconsistency may be detected only after a certain (misplaced) trust has been developed. This is evident in the case presented by Biggeri and Saltelli (2021): the divergent outcomes of different models concealed the adoption of inconsistent underlying assumptions about the pandemic. While this is part and parcel of how science unfolds, ‘separating the wheat from the chaff’ is hard work, especially under pressurized conditions such as those that have characterized the COVID-19 public health emergency (Miller, 2022; Abdool-Karim, 2022). The paradox concerns the genuinely perspectival character of scientific knowledge. This quality is paradoxical in that it simultaneously represents both a limit to our capacity to know the world and a source of possibilities for expanding our knowledge horizon (Galvan, 2006; Mitchell, 2002). On the one hand, the partial nature of our knowledge is linked to the requirement to adopt a particular point of view, or perspective, on an object of interest, given that any perspective that we may embrace limits our knowledge to those aspects of the object whose investigation is permitted by this perspective. On the other hand, however, incorporating a new perspective can allow us to acquire information that was missed by our previous perspective. Changing perspectives enables us to access different aspects of the object, thereby increasing our knowledge. In other words, this plurality of perspectives can, so to speak, shed light on our object from different angles, revealing unknown facets of it. The co-occurrence of inconsistent and perspectival models and their inherent representational nature make both the paradoxical and (potentially) misleading features of scientific pluralism particularly apparent.