Recent work in philosophy has seen an explosion of interest in the phenomenon of peer disagreement, i.e. situations in which equally well informed and competent agents have contrary beliefs on a given topic. This common phenomenon gives rise to a number of philosophical conundrums. How is it possible for people with similar levels of training, experience and background knowledge, dealing with the same data set, not to come to an agreed position on a particular question? Wouldn’t the very fact of their disagreement indicate that at least one of them is less knowledgeable, i.e. less of an expert? Going back to Einhorn (1974), it has been argued that consensus is a condition on expertise; yet there is ample evidence that experts in all fields disagree.

The conundrum of expert disagreement is at its most acute when it comes to what Jennifer Lackey calls ‘controversial areas’, e.g. philosophy, politics, ethics, and religion (Lackey 2018, p. 229). It is not uncommon to argue that there is no expertise in these areas. However, fundamental disagreements threaten the natural and social sciences as well, and the conclusion that there are no experts in science flies in the face of our common practices of assigning expertise. In the case of the natural sciences, in contrast to a ‘controversial area’ such as ethics, it may be possible to argue that we can resort to ‘independent checks’ that are not subject to significant controversy (McGrath 2008, pp. 97–98) to sort genuine from merely apparent experts. But disagreement in science is commonplace and the natural sciences are not free of deep controversies, so the question of how to identify the real experts in a particular area of science remains open in many cases. While it may be possible to bypass the issue and settle, in a general way, the question of who the scientific experts are by invoking reputational criteria such as track record, education, experience, publications, etc., the question of the objectivity of knowledge claims in the face of seemingly intractable disagreements remains intact.

One option, at least in Lackey’s ‘controversial areas’, is to agree that disagreements in such areas are potentially faultless, and that the truth of the contested claims can be relativized to their domains of discourse or to the perspectives that the disagreeing peers bring to the issue. However, this option is not readily available for disagreements in the sciences. Scientifically accepted theories, unlike the claims of religion, ethics and even philosophy, are assumed to have universal validity. Regardless of where we stand on the question of relativizing truth in the ‘controversial areas’, the very idea of relative scientific truth remains highly controversial and wholly unacceptable to scientific realists (see, e.g., Baghramian and Coliva 2019; Kusch 2020). So, any attempt to resolve the problem of deep disagreements in science by appealing to some form of relative truth invites even further philosophical controversies.

A related, much discussed, puzzle about disagreement concerns the normative question of how someone should respond when she discovers that a peer disagrees with her. Most contributors to the debate defend some version of the view that one should move closer to one’s peer’s opinion, e.g., by suspending judgment or by adopting an intermediate level of confidence between those of the disagreeing peer and one’s former self (e.g., Christensen 2007; Feldman 2006; Elga 2007). This family of views is known as conciliationism. In contrast, steadfastness holds that one should ‘stand one’s ground’ in the face of peer disagreement, i.e., continue to have the same beliefs and levels of confidence as one did before the disagreement. Although this is certainly a minority view in the literature, it does have its proponents (e.g., Kelly 2005, 2010).
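On the simplest formalization of the conciliationist idea, sometimes associated with the ‘equal weight’ view (the specific rule below is our illustration rather than any particular author’s proposal), a peer’s credence is simply averaged with one’s own:

$$ c_{\text{new}} \;=\; \frac{c_{\text{self}} + c_{\text{peer}}}{2}, \qquad \text{e.g.} \quad \frac{0.9 + 0.3}{2} = 0.6. $$

Steadfastness, by contrast, corresponds to the trivial rule $c_{\text{new}} = c_{\text{self}}$.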

The interest in peer disagreement is in part due to what this phenomenon promises to tell us about general epistemological questions, such as those relating to evidence about the reliability or otherwise of one’s own epistemic evaluations (Feldman 2009; Elga 2007; Christensen 2010; Kelly 2010). While much work has been devoted to various general and often abstract epistemological issues relating to disagreement (see, e.g., Feldman 2006; Christensen 2009; Feldman and Warfield 2010; Lackey and Christensen 2013; Frances 2014; Matheson 2014), there has been surprisingly little discussion of how, if at all, the lessons from these discussions can be applied to disagreement within science. Conversely, although philosophers of science have certainly addressed the issue of disagreement in various ways (e.g., Kuhn 1996; Feyerabend 1975; Kitcher 1993), sometimes under the labels ‘dissent’ and ‘pluralism’ (Solomon 2001; Longino 2002; de Melo-Martín and Intemann 2018), there has been less systematic discussion of this phenomenon than in epistemology proper.

One of the central aims of this special issue is to facilitate discussion on disagreement in science that brings together insights from both epistemology and philosophy of science. Several aspects of the topic go beyond merely applying lessons from epistemology to philosophy of science, or vice versa. For example, scientific disagreement is unlike many ordinary cases of disagreement in that there is often little reason to think that the disagreement is due to a simple mistake by one of the parties of the type often appealed to in the epistemology of disagreement literature (as in Christensen’s (2007, p. 193) case of calculating a 20 percent tip). Rather, if there is disagreement between two or more scientists—or groups thereof—it is most commonly grounded in a more fundamental difference in methods, background assumptions, or even scientific outlook—roughly, in Kuhnian ‘paradigms’. This suggests that scientific disagreements present philosophers with special challenges that haven’t yet been addressed, even in the abstract, in the epistemology of disagreement literature.

More generally, disagreements in science raise a number of important questions. How, if at all, should scientists re-evaluate their theories and models upon realizing that their scientific peers have a contrary opinion? Is there really such a thing as ‘peer disagreement’ in science—i.e. disagreement between equally well informed and equally competent scientists? Or is the notion of ‘peer disagreement’ too much of an idealization from actual scientific practice to tell us anything worthwhile about scientific controversies, as some have argued is true generally (King 2011)? What sorts of things do scientists disagree about—only matters of fact, or also conceptual issues and the proper values used in scientific practice (Rowland 2017), as well as their background methodological stances (Weinberger and Bradley 2020)? Does persistent scientific disagreement support or lend credence to relativism about scientific truths, or about scientific theory evaluation (Kinzel and Kusch 2018)? Is scientific disagreement a desirable feature of scientific communities (de Cruz and de Smedt 2013), or should scientists strive to build consensus on important topics (Dellsén 2020)? What are the consequences of real or perceived disagreements in science for policy decisions (Leuschner 2018)? And what, if anything, can the public learn from facts about disagreement—or its opposite, consensus—on topics such as anthropogenic climate change (Dellsén 2018)?

This special issue deals with questions of this kind—questions that concern, in some way or other, disagreement within the sciences. Some of the papers below look in detail at case studies of contemporary or historical scientific disagreements with the aim of unearthing general lessons from such episodes. Other papers deal directly with more general questions about the different types of disagreement in science, e.g. conceptual and methodological controversies that are not straightforwardly factual or empirical. Yet other contributions look at scientific disagreements ‘from the outside’, i.e. from the point of view of those not involved in the dispute, asking what lessons such outsiders—including notably ordinary laypeople—should draw from the fact that scientists disagree, or fail to disagree, on a given scientific issue. Finally, some of the papers collected here are concerned with modelling disagreements using abstract agent-based simulations to draw general lessons about, for example, how disagreements on multiple unrelated topics can arise within epistemic communities.

Let us now look in more detail at the eight individual papers collected in this special issue.

Two of these are concerned with how two of the most influential philosophers of science in the twentieth century approached scientific disagreements. Markus Seidel’s “Kuhn’s two accounts of rational disagreement in science: an interpretation and a critique” is an in-depth analysis and critique of Kuhn’s views on rational disagreements in science. In Seidel’s view, Kuhn gave us two quite different accounts of rational disagreement, one in his Structure of Scientific Revolutions (Kuhn 1996) and another in his later work, including the influential “Objectivity, Value Judgment and Theory Choice” (Kuhn 1977).

Kuhn’s first account of how rational disagreement arises in science is based on his thesis of methodological incommensurability between different scientific paradigms. The idea here is that scientists who have adopted different paradigms are perfectly rational in choosing a theory that accords with their respective paradigms, but since these paradigms can contain elements that rationalize different, incompatible theories, scientists can disagree on which theories to accept without being guilty of any type of irrationality. As Seidel points out, however, this account covers only cases that occur in the relatively rare episodes of ‘revolutionary science’, i.e. when scientists are debating the merits of entire paradigms, as opposed to the far more common periods of ‘normal science’ in which the paradigm is not up for grabs. Another problem with Kuhn’s account, Seidel notes, is that the very idea that scientists in different paradigms disagree seems in tension with Kuhn’s claim that paradigms are semantically incommensurable, since genuine (as opposed to merely verbal) disagreement requires that the disagreeing scientists adopt opposing attitudes towards the same proposition.

The second Kuhnian account of rational disagreement discussed by Seidel is based on Kuhn’s influential ideas about the role of values in theory choice. Kuhn famously held that any scientific theory choice will be based on a list of five values (accuracy, consistency, scope, simplicity, and fruitfulness), and that there is no single rational way to interpret these values and weigh them against each other. As Seidel interprets this point, Kuhn is essentially arguing that theory choice is underdetermined by this list of values. This argument is quite distinct from—indeed, incompatible with—the earlier argument from methodological incommensurability, since it implies that this list of values is or should be accepted among scientists independently of which paradigm they adopt. In any case, Seidel points out that Kuhn motivates this account of theory choice in part by noting that it effortlessly explains how scientists can come to rationally disagree on theories: their disagreements can arise from scientists interpreting and weighing these values in different—but equally rational—ways. However, Seidel also presents his own competing account of how rational disagreement can arise which, if successful, would undermine Kuhn’s motivation for his account of theory choice.

Another renowned twentieth-century philosopher of science whose thought has important implications for scientific disagreement is Paul Feyerabend. In “Feyerabend and manufactured disagreement: reflections on expertise, consensus, and science policy”, Jamie Shaw considers how Feyerabend’s views on disagreement hold up to scrutiny in an age of increasing specialization and deliberate manufacturing of disagreement, e.g. about anthropogenic climate change. Feyerabend was an uncompromising pluralist about science: on his view, the success of science required the development and fostering of a plurality of competing theories about any given phenomenon. Feyerabend thus welcomed disagreement in any area of science as a necessary requirement for the growth of scientific knowledge in that area.

As Shaw points out, however, this Feyerabendian view of scientific disagreements conflicts with recent scholarship on how scientific disagreements are artificially manufactured so as to stifle scientific disciplines like climate science and undermine public trust in these disciplines and the theories advocated therein (e.g., Oreskes and Conway 2010; Biddle and Leuschner 2015; Leuschner 2018). Shaw argues that the problem with normatively inappropriate disagreements of this ilk is not that they are manufactured—which is something Feyerabend would seem to welcome—but rather that at least one party to the disagreement, viz. the climate ‘skeptics’, does not critically engage with the arguments and positions of the other side. These climate ‘skeptics’ are thus, in Feyerabend’s semi-technical sense of the term, cranks. The type of pluralism advocated by Feyerabend, argues Shaw, requires the kind of open and honest exchange of ideas and debate about the merits of each position that artificially manufactured disagreements rarely, if ever, allow for.

Two of the remaining papers address an important philosophical issue through the lens of a detailed scientific case study. David M. Frank’s “Disagreement or denialism: ‘Invasive species denialism’ and ethical disagreement in science” considers the argument made by several invasion biologists that various points of disagreement with consensus positions in their field constitute science denialism. Frank argues that while this criticism is sometimes legitimate, in other cases the disagreement is grounded in an ethical difference of opinion that should not be classified as science denialism. ‘Science denialism’, according to Frank, should be reserved for challenges to a scientific consensus that (i) violate epistemic norms, e.g. by cherry-picking data or constructing straw man arguments or positions, and (ii) are used in ways that make them likely to cause harm.

Appealing to this definition of ‘science denialism’, Frank argues that some challenges to scientific theories of the biological risks associated with introductions of invasive species to new areas do indeed count as denialist. In other instances, however, the disagreement between those who defend the consensus position in invasion biology and those who criticize it is ultimately grounded in a difference of opinion concerning non-epistemic values of various sorts. For example, whether a given species should be counted as ‘invasive’ is commonly taken to depend on whether its introduction to a new area causes sufficient harm to the relevant ecosystem. Since both the threshold and the definition of ‘harm’ are open to ethical dispute, it is surely legitimate to disagree on that matter even with the consensus position adopted within a specific scientific discipline, such as invasion biology. Frank acknowledges that non-epistemic disagreements of this kind do come with certain risks, but he also suggests that there are ways in which such disagreements can bring important epistemic and non-epistemic benefits.

Michaela Massimi, in her paper “Realism, perspectivism, and disagreement in science”, develops an account of how disagreements about justificatory principles in science can coexist with agreement about the theories that fall under these principles. Here and elsewhere (e.g., Massimi 2018a, b), Massimi advocates ‘perspectivism’ about science, which emphasizes the role of different scientific perspectives in producing scientific knowledge. Roughly, Massimi defines a ‘scientific perspective’ as the practice of a scientific community at a given time, which includes (i) scientific knowledge claims, (ii) the resources to reliably make those claims, and (iii) the epistemic or methodological principles that justify (i) with reference to (ii). Simply put, a scientific perspective thus includes scientific theories, the evidence for these theories, and the epistemic principles on the basis of which the evidence is taken to support the theories.

Massimi puts these ideas to work in analyzing a historical case from theoretical physics at the turn of the twentieth century, viz. investigations of the electric charge within three different theoretical frameworks. Massimi considers how J.J. Thomson (building on Faraday and Maxwell’s framework), Hermann von Helmholtz and Theodor von Grotthus, and Max Planck each brought different scientific perspectives to bear on the existence and nature of the minimal unit of electric charge, e. According to Massimi, these physicists ended up agreeing on how to answer this question even while employing different justificatory principles on the basis of which they reached their shared conclusion. They were able to reach this agreement despite their different perspectives, Massimi suggests, because each perspective latches onto a real lawlike dependency that supports counterfactual inferences, and thus enables the wielder of the perspective to draw correct, albeit fallible, conclusions from other aspects of their perspective.

The remaining four papers in this volume on Disagreement in Science form a natural grouping, as they are all concerned with agent-based models of scientific and lay communities. Dunja Šešelja’s “Some lessons from simulations of scientific disagreements” argues that we should be careful in how we interpret the results provided by existing agent-based models, and that these models do not always support the type of inferences that their proponents have made. In particular, Šešelja considers agent-based models that appear to support the value of adopting a ‘steadfast’ approach to peer disagreement, i.e. maintaining one’s prior belief in response to known disagreement with an epistemic peer. According to some modellers, e.g. Douven (2010) and de Langhe (2013), being steadfast can increase the likelihood of converging on a true theory. However, Šešelja points out that these results do not strictly speaking support maintaining one’s prior belief; rather, if anything, they support continuing to pursue a theory with which one’s peers may disagree, where such pursuit is compatible with disbelieving the theory.

Other results discussed by Šešelja pertain to how scientific communities should be structured so as to make inquiry as efficient as possible. In particular, various agent-based models constructed by, or inspired by, Kevin Zollman appear to show that less communication between scientists can increase the rate at which scientists successfully converge on an objectively better theory (Zollman 2007, 2010). Very roughly, this is because false beliefs spread more quickly in better connected communities. Although this ‘Zollman effect’ is quite robust across a variety of different agent-based models, Šešelja points out that the effect does not appear in certain agent-based models that incorporate the different arguments scientists may have for and against theories (Borg et al. 2018). Thus, the question remains open whether the ‘Zollman effect’ is a real phenomenon of scientific inquiry or instead an artifact of the abstractions and idealizations made in the agent-based models in which the effect appears. Šešelja concludes that we need empirical studies in order to specify which type of model is most appropriate for a given scientific inquiry, which in turn would confirm whether the ‘Zollman effect’ is in fact present in that type of inquiry.
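To make the underlying setup concrete, here is a minimal sketch in the spirit of Zollman’s bandit models. All parameter values, and the particular network structures compared, are our own illustrative choices rather than details of any published model: agents repeatedly test the theory they currently favor, modelled as pulling one of two ‘arms’ with different objective success rates, and update their beliefs on their own and their neighbors’ results.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 10           # number of agents (hypothetical value)
P = (0.5, 0.55)  # objective success rates: theory 1 is in fact better
TRIALS = 10      # tests each agent runs per round
ROUNDS = 300

def run(neighbors):
    """One run of a Zollman-style two-armed bandit community."""
    # beta-distributed beliefs about each theory, randomized per agent
    a = 1 + rng.random((N, 2))   # pseudo-counts of successes
    b = 1 + rng.random((N, 2))   # pseudo-counts of failures
    for _ in range(ROUNDS):
        means = a / (a + b)
        choice = means.argmax(axis=1)  # each agent tests its favored theory
        succ = rng.binomial(TRIALS, [P[c] for c in choice])
        for i in range(N):
            # update on one's own results and those of one's neighbors
            for j in neighbors[i] | {i}:
                a[i, choice[j]] += succ[j]
                b[i, choice[j]] += TRIALS - succ[j]
    means = a / (a + b)
    return np.mean(means.argmax(axis=1) == 1)  # share ending on the better theory

# a maximally connected community versus a sparsely connected one
complete = {i: set(range(N)) - {i} for i in range(N)}
cycle = {i: {(i - 1) % N, (i + 1) % N} for i in range(N)}

for name, net in [("complete", complete), ("cycle", cycle)]:
    share = np.mean([run(net) for _ in range(50)])
    print(f"{name:8s} network: {share:.2f} of agents end on the better theory")
```

Comparing the two networks over many runs illustrates the qualitative worry at issue: in a densely connected community, a misleading early run of data can sweep through the whole group before the better theory has been adequately explored.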

Carlos Santana discusses a related purported effect in “Let’s not agree to disagree: the role of strategic disagreement in science”. Several philosophers have suggested, in different ways, that it can be epistemically beneficial for a scientific community to be composed of scientists who maintain their position on a given theory even if their total evidence suggests that a competing theory is more likely to be correct. In particular, it has been argued that scientists should be moved by considerations such as how widely the theory in question is adopted by other scientists, since this in turn affects the spread of theories that are being explored at a given time. This type of behavior, it is argued, can increase the theoretical diversity in the scientific community, which in turn increases the community’s chances of converging on a correct theory. In cases of this sort, scientists should perhaps not believe the theory, but instead ‘accept’, ‘adopt’ or ‘endorse’ it in some non-doxastic sense of those terms (see, e.g., Elgin 2010; Fleisher 2018; Dellsén 2020).

In his paper, Santana first criticizes this idea but then also presents a novel agent-based model that provides a qualified type of support for it. In brief, Santana argues that exhibiting the kind of ‘stubbornness’ outlined above would come with significant epistemic costs, since scientists will find it hard—if not impossible—to keep their beliefs distinct from their endorsements. For example, one scientist may easily mistake another scientist’s endorsement for their belief, consequently forming their own beliefs on the basis of a (mistaken) perception of another scientist’s belief. In Santana’s view, problems of this sort should motivate us to find a different, non-stubborn, way to ensure diversity in epistemic communities. Accordingly, Santana goes on to construct an agent-based model designed to test whether the epistemic value of stubbornness can indeed be achieved by other means, viz. through implementing a type of social division in the community whereby each agent communicates with a more select group of scientists. Interestingly, Santana’s model did not validate this hypothesis; instead, it confirmed that scientific communities containing ‘stubborn’ scientists are more likely to successfully find the correct theory within a given time limit.
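The ‘stubbornness’ mechanism at issue can be given a toy rendering along the following lines (our illustration, not Santana’s actual model, and with all parameter values hypothetical): stubborn agents abandon their current theory only when a rival theory looks better by some margin, which keeps the less popular option under active investigation for longer.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 10           # agents (hypothetical value)
P = (0.5, 0.55)  # objective success rates: theory 1 is in fact better
TRIALS = 10      # tests per agent per round
ROUNDS = 300
MARGIN = 0.05    # how much better a rival must look before a stubborn agent switches

def run(stubborn_frac):
    """A fully connected community with a given fraction of 'stubborn' agents."""
    a = np.ones((N, 2))                  # pseudo-counts of successes per theory
    b = np.ones((N, 2))                  # pseudo-counts of failures per theory
    choice = rng.integers(0, 2, size=N)  # each agent's initial commitment
    stubborn = rng.random(N) < stubborn_frac
    for _ in range(ROUNDS):
        means = a / (a + b)
        for i in range(N):
            gap = means[i, 1 - choice[i]] - means[i, choice[i]]
            # non-stubborn agents switch whenever the rival looks better at all;
            # stubborn agents demand a margin before abandoning their theory
            if gap > (MARGIN if stubborn[i] else 0.0):
                choice[i] = 1 - choice[i]
        succ = rng.binomial(TRIALS, [P[c] for c in choice])
        for i in range(N):       # everyone sees everyone's results
            for j in range(N):
                a[i, choice[j]] += succ[j]
                b[i, choice[j]] += TRIALS - succ[j]
    means = a / (a + b)
    return np.mean(means.argmax(axis=1) == 1)

for frac in (0.0, 0.5):
    share = np.mean([run(frac) for _ in range(50)])
    print(f"stubborn fraction {frac}: {share:.2f} of agents end on the better theory")
```

Running such a sketch with different stubborn fractions is one way to probe, under these assumptions, whether stubbornness helps or hinders the community’s convergence on the better theory.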

James Owen Weatherall and Cailin O’Connor, in their paper “Endogenous epistemic factionalization”, present another agent-based model of epistemic communities. Their aim is to investigate a type of polarization that occurs when individuals who disagree about one subject are statistically more likely to disagree about an unrelated subject as well. The common occurrence of this phenomenon is often explained as being due to some further or more general epistemic commitment, e.g. an ideology or identity that sanctions both beliefs. However, Weatherall and O’Connor present a quite different type of explanation for this phenomenon by appealing to the intuitive idea that agents will be inclined to distrust the evidence presented by other agents with whom they already disagree.

In Weatherall and O’Connor’s model, individual agents update their beliefs by Jeffrey conditionalization, which allows agents to update on uncertain evidence. Moreover, an agent’s confidence in the evidence shared by another agent is a function of the difference between the credences the two agents assign to two (or more) separate issues, such that the agent is less certain of the evidence the greater the difference between their credences in the relevant claims. Weatherall and O’Connor then measure the amount of the relevant kind of polarization, i.e. what they call ‘factionalization’, with a standard measure of correlation (the Pearson correlation coefficient r). Their results indicate that ‘factionalization’ will occur naturally—i.e., without assuming any other, nefarious, influences on belief. This effect occurs not just when the agents start out disagreeing on one issue and subsequently come to disagree on another issue, but also when the agents start out with randomly assigned credences on both issues and update by the mechanism described above. Furthermore, the same effect occurs when the model is extended to consider three (rather than just two) unrelated claims simultaneously.
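A simplified sketch of this kind of model might look as follows. The linear trust discount, the binary-hypothesis testing setup, and all parameter values are assumptions made for illustration; Weatherall and O’Connor’s own specification should be consulted for the details.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20        # agents (hypothetical value)
TOPICS = 2    # two unrelated binary questions
P_GOOD = 0.55 # success rate if the hypothesis is true (hypothetical value)
TRIALS = 10   # experiments per agent per round
M = 2.0       # trust-discount multiplier (hypothetical value)
ROUNDS = 200

# credence[i, t]: agent i's credence that the hypothesis on topic t is true
credence = rng.uniform(size=(N, TOPICS))

def bayes_posterior(c, successes):
    """Posterior credence after observing `successes` out of TRIALS,
    where the hypothesis predicts rate P_GOOD and its negation rate 0.5."""
    p_h = P_GOOD ** successes * (1 - P_GOOD) ** (TRIALS - successes)
    p_n = 0.5 ** TRIALS
    return p_h * c / (p_h * c + p_n * (1 - c))

for _ in range(ROUNDS):
    for t in range(TOPICS):
        # agents who currently lean towards the hypothesis test it and share results
        reports = [(i, rng.binomial(TRIALS, P_GOOD))
                   for i in range(N) if credence[i, t] > 0.5]
        for i in range(N):
            for j, k in reports:
                post = bayes_posterior(credence[i, t], k)
                if i == j:
                    credence[i, t] = post  # one's own data: full Bayesian update
                else:
                    # Jeffrey-style update: mix posterior and prior, with the
                    # weight on the evidence shrinking linearly in the overall
                    # credence distance between the two agents (assumed form)
                    d = np.mean(np.abs(credence[i] - credence[j]))
                    w = max(0.0, 1.0 - M * d)
                    credence[i, t] = w * post + (1 - w) * credence[i, t]

# factionalization: correlation across agents between the two unrelated topics
r = np.corrcoef(credence[:, 0], credence[:, 1])[0, 1]
print(f"Pearson r between credences on the two topics: {r:.2f}")
```

A high correlation at the end of such a run would be the sketch’s analogue of factionalization: agents’ views on one topic predict their views on the other, even though the topics are evidentially unrelated.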

Finally, David Anzola examines the role of disagreement in the emergence of the new discipline of computational social science, in which agent-based models are used to study social phenomena. As Anzola points out, a distinctive feature of this new discipline is that it can be defined in terms of its use of a particular method, viz. agent-based modelling, rather than any particular set of problems, theories or phenomena. Since this method is not used in other disciplines that fall under social science, this creates a tension—a disagreement of sorts—between it and nearby fields. However, Anzola describes how the method of agent-based modelling has also been thought of as providing a ‘middle ground’—a type of conciliatory position—between qualitative and quantitative methods in social science. Although agent-based modelers have in practice so far aligned themselves more with quantitative methods, and generally steered clear of the theoretical commitments typically involved in qualitative research (e.g. radical constructivism and relativism), there is no in-principle reason why agent-based modelling could not incorporate aspects of qualitative as well as quantitative methods.

Anzola also examines a philosophically interesting disagreement within computational social science over whether agent-based models ought to be abstract and minimal—Keep It Simple, Stupid (KISS)—or instead concrete and empirically calibrated—Keep It Descriptive, Stupid (KIDS). According to received wisdom, KISS favors simple agent-based models that facilitate understanding at the expense of prediction, while KIDS conversely favors empirically accurate models that facilitate reliable prediction. Anzola suggests that computational social scientists do not really try to eliminate this divide between KISS and KIDS, which would presumably involve finding a conciliatory position between these extremes. Rather, the two approaches live happily side by side in computational social science. In other words, the proponents of each approach seem to have adopted a ‘steadfast’ attitude towards the other side, without the discipline breaking down or suffering as a result. This might thus be a practical manifestation of the common philosophical idea that being ‘steadfast’ can be beneficial for epistemic communities. Anzola’s observation also undermines Kuhn’s influential thesis that the normal operation of the sciences requires methodological uniformity and shared values within a discipline, since KISS and KIDS certainly represent different methodologies and theoretical values within what appears to be a single scientific discipline.

The papers collected in this special issue indicate the diversity of philosophical questions arising from the well-known phenomenon of disagreement in science. They also open up further avenues of investigation on scientific disagreement and related topics.

In particular, an intriguing larger question raised by Seidel’s analysis of Kuhn’s ideas is whether, or to what extent, philosophical accounts of scientific rationality should aim to explain the phenomenon of scientific disagreement. There has been little explicit discussion of scientific disagreement thus far in the literature on scientific inference and theory choice, suggesting that this might provide a fruitful perspective on such issues. Similarly, Shaw’s contention that disagreements in science should be welcomed on a Feyerabendian outlook raises important questions about how to balance the importance of open and honest exchange between scientists against the danger of manufactured dissent that undermines public trust in science, e.g. regarding the reality of anthropogenic climate change. Massimi’s paper suggests that perhaps this tension can be alleviated in some cases by analyzing the disagreement as concerning a difference of ‘perspective’ rather than as a straightforward factual dispute about which scientific theory to endorse. And Frank’s contribution points to a larger issue about the role of ethical and political values—or more generally, non-epistemic values—in scientific disagreement. Although there have been some recent studies on what role such values in fact play (e.g., Beebe et al. 2019), there is a related and largely open question about what role such values should play.

The four papers on agent-based models also point to important directions for future research. For example, both Šešelja’s and Santana’s contributions have implications for the relevance of the distinction between belief in a theory, on the one hand, and the pursuit, acceptance, or endorsement of the theory, on the other. In different ways, Šešelja and Santana end up providing support for the idea that not all scientific disagreements should be construed in terms of belief, which in turn raises the important general epistemological question of how to characterize the alternative form of propositional attitude involved in pursuing, accepting or endorsing a theory. Weatherall and O’Connor’s explanation of how epistemic communities can become endogenously factionalized, i.e. divided into groups of agents with remarkably similar views on different topics, raises an important normative question of whether this process could be rational from each agent’s point of view—and if so, under what conditions that would be so. Similarly, Anzola’s contention that the different methodological approaches of KISS and KIDS in fact live happily side by side in computational social science raises the normative question of whether this ‘live and let live’ attitude is beneficial for the discipline all things considered.

These are just some of the questions for further research raised by the papers collected in this volume. There will doubtless be significant disagreement about how these questions should themselves be answered—but that, as we are now acutely aware, is par for the course.