1 Introduction

The value-free ideal (VFI) for science has been a central philosophical ideal for scientific practice since at least the 1960s. Although there are precursors, the particular version of the ideal, “only epistemic values in the practice of scientific inference,” came to predominate in philosophy of science during the Cold War and has been influential on scientific practice (Douglas, 2009, Chap. 3). Although the ideal was occasionally challenged in philosophy of science in the latter part of the 20th century, it was not until the 21st century that a more sustained critique of the ideal, qua ideal, was mounted.

Philosophers of science involved in the debate now predominantly reject the VFI as an appropriate or helpful ideal, but some still support it (e.g., Betz, 2017; Hudson, 2021; Rezaee & Behesht, 2023). Further, replacement ideals have been challenging to formulate, and none has received wide acceptance (see Douglas, 2021a; Holman & Wilholt, 2022 for overviews). The friction around replacement ideals has been exacerbated by the fact that relinquishing the VFI opens up debate about what the place of science in society should be. For example, both arguments for the ideal (e.g., Betz, 2017) and arguments against it (e.g., Douglas, 2009) have emphasized the importance of science for democratic decision-making. Some have argued (going back to Du Bois, as noted in Bright (2018)) that value-freedom in science is crucial for the use of science in democratic decision-making, whereas others have argued that, because science is so influential in public policy-making, values must be part of the responsible conduct of science, and openness about those values is crucial to the democratically responsible utilization of that science (Douglas, 2021a). How science is to play a role in democratic societies, particularly in public policy-making, underlies this debate. Additional concerns about what makes science trustworthy for the general public and what justifies public funding for science are also part of the ongoing debates.

We will argue in this paper that this is no accident. The VFI coalesced at a particular time with a particular view of how science should relate to the broader society that supports it. This view, often referred to as the “social contract for science,” was an implicit understanding of the relationship between science and society (Guston, 2000, Chap. 2). As such, it was not an explicitly signed contract (who would represent the parties in such an agreement?), but rather an understanding of the terms of public support for science, of the nature of the science produced, and of the societal benefits from science. We find evidence of the components of this implicit social contract in the science-policy debates after WWII, and, later, in philosophical discussions in the mid-20th century. Interestingly, the VFI was not a central part of the discussions that solidified the mid-20th century social contract for science (not being part of the science policy discussions), but rather a product of those discussions and the resulting contract.

We will argue that the mid-20th century social contract which underlies the values in science debate has three central conceptual components: (1) a distinction between pure (or basic) and applied science, (2) a conception of scientific freedom that largely removed social responsibility from scientists pursuing pure (or basic) science, and (3) the linear model for public funding of science. We will not argue that any of these conceptual components was ever fully or universally instantiated in practice. The linear model (which relied upon the basic/applied distinction) was only named once it became a target for critique and was only sometimes reflected in practice, but it was nevertheless conceptually influential in many areas (Asner, 2004; Edgerton, 2004). Freedom from societal responsibility when pursuing basic research structured policy and became a baseline framework, but problems soon surfaced because its central ideas were not apt for the actual pursuit and utilization of science in a democratic society (Douglas, 2021b). Nevertheless, the components of the social contract were potent conceptual resources that shaped U.S. science policy and, ultimately through the influence of the U.S., global science policy. We will also show how these three conceptual components supported arguments for the VFI in the 1960s.

Because the VFI in its mid-20th century form is a result of the mid-20th century social contract, holding onto the old social contract and the conceptual structures that generated it makes it very difficult to relinquish the VFI. Only with the contract reformulated will a different ideal seem a good and plausible replacement. And there are good reasons to revise the social contract for the 21st century, as many have noted (Ball, 2019; Gibbons, 1999; Guston, 2000). Revising the social contract will be central to generating an appropriate ideal for values in science. Providing a full alternative to the mid-20th century social contract must await future work.

2 The social contract for science before and after WWII

Although the social contract for science did not coalesce until after WWII, components of it were part of the public debate about science in the decades leading up to it. We describe here how debates about the nature of pure vs. applied science first became entangled with debates about the societal responsibilities of scientists, and then brought in the issue of how science should be funded.

In the second half of the 19th century, the pure vs. applied distinction emerged in its 20th century form, i.e., that pure science was empirical science pursued for the sake of truth alone and applied science was science pursued for the sake of some utilitarian goal (Bud, 2012). A robust debate about the relationship between pure and applied science, amidst appeals for valuing pure science, ensued (Douglas, 2014; Gooday, 2012). By WWI, this debate was far from settled, and the war, with its use of poison gas and the horrific impact of the Haber process on the ability to fix nitrates for explosives, colored subsequent debates on the distinction (Douglas, 2014; Kline, 1995). Some argued (most notably Bertrand Russell) that the purity of science was centrally important, that it was imperative that science be pursued for the sake of truth alone, and that such pursuits be distinguished from applied efforts (and from both the benefits and harms such applications brought with them) (Sargent, 2011). John Dewey and others disagreed, arguing that all science was both pure and applied, requiring both the pursuit of truth and application in empirical testing (Douglas, 2014, p. 59). Marxist-leaning scientists went further and eschewed any distinction between pure and applied science, because all proper science should serve the needs of the people (Nye, 2011, pp. 191–192). These debates about pure and applied science took place in a context of increased concern about the societal impact of science, for example the use of chemistry to produce chemical weapons in WWI (Slotten, 1990). What were the societal responsibilities of scientists, given the potent ability of science to both harm and help?

By the 1930s, these debates began to influence the third aspect of the social contract to come: public funding of science. In the U.S. prior to WWII, there was no general source of scientific funding for academic scientists. Only specific scientific efforts were pursued by the government, within the government (coastal surveys, census taking, standards for weights and measures, etc.) (Dupree, 1986). Public funds were also distributed through the land grant system to universities, but only for those working with agricultural communities on particular problems for those communities. The first large national lab in the U.S. was created by the National Advisory Committee on Aeronautics (NACA) in WWI to build a wind tunnel to help with airplane design (ibid., p. 334).

Against this backdrop, a debate regarding public funding for scientific research crystallized just as WWII began, instigated by leftist proposals to coordinate scientific efforts around public problems (proposals that both rejected the pure vs. applied distinction and embraced a positive account of societal responsibility for science). Other, traditionally liberal scientists, such as Michael Polanyi and Percy Bridgman, insisted on (1) a distinction between pure and applied science, (2) the special value of pure science as embodying the pursuit of truth, and (3) the freedom of scientists from any societal concerns in deciding which pure research projects to pursue (Nye, 2011). Any efforts by government or society to direct the efforts of scientists would only interfere with and damage pure science. It was through this debate that the post-WWII social contract for science took shape.

The 1939 publication of J.D. Bernal’s The Social Function of Science in the U.K. served as a focal point for the debate between the two camps (Bernal, 1939). Bernal argued for scientists working in concert with governments to direct science towards the public good, suggesting that scientists’ efforts should be shaped by public needs. Although he did not claim that scientific research could be strictly planned by bureaucrats, he thought that research agendas should take into account public issues and thus public values. His work was widely discussed and deeply influential, both among those who agreed with him and among those who found his vision of the relationship between science and society objectionable (Nye, 2011, Chap. 6).

As the war proceeded, discussions about the future of science policy began, most notably in the U.S., where Senator Harley Kilgore’s efforts to shape post-war science funding began as early as 1942, influenced by the New Deal belief in the power of coordinated federal action (Kleinman, 1995, p. 77). His first bills on science funding called for research directed towards public needs and for a system of distributing funds that took geographic equity into account, and thus were in line with aspects of Bernal’s approach. Both the idea that research should be directed (even loosely) to public needs and the idea that funds should be distributed across the states were an affront to those who thought the funds should go simply to the “best scientists” (who were concentrated in elite institutions, mostly on the coasts, e.g., Harvard, MIT, Johns Hopkins, Columbia, Berkeley, CalTech).

In response to such calls from Bernal and Kilgore for public needs to direct scientific funds, some scientists (led by John Baker and Michael Polanyi, and joined later by Percy Bridgman) banded together to found the Society for Freedom in Science (SFS). SFS members argued for a view of science on which scientists (particularly those doing basic research, as “pure science” was increasingly called) should have complete freedom to choose their own research agendas, wherever they might lead, driven by the inherent curiosity of scientists. Central to the arguments of the SFS were (1) a distinction between basic and applied science, and (2) the removal of any responsibility for the societal impacts of scientific work when pursuing basic science (Bridgman, 1947; McGucken, 1978).

For the SFS, scientists pursuing basic or pure science were pursuing truth for its own sake, and should be particularly valued for doing so. In a debate that played out in the pages of Science during the final years of WWII, triggered by Bridgman’s introduction of the group to the U.S. (Bridgman, 1944), American scientists argued over the importance of the pure vs. applied distinction. Scientists such as Alexander Stern asserted that any threat to pure science was “a growing danger to intellectual freedom throughout the civilized world,” because focusing on the material gains to be made through the pursuit of science, as Marxists were wont to do, undermined “the pursuit of truth and the passion for understanding [that] give a dignity and nobility to man.” (Stern, 1944, p. 356) Although other scientists objected to Stern’s and Bridgman’s strong distinction between pure/basic and applied science (Alexander, 1945; Pearson, 1944; Robin, 1944), Stern responded with an impassioned defense of the distinction, writing that “science has nothing to do with usefulness.” (Stern, 1945, p. 38) John Baker, co-founder of the SFS, agreed (Baker, 1945).

A crucial aspect of the distinction between pure/basic and applied science, particularly after the horrific nature of nuclear weapons was revealed in August 1945, was that a different set of social responsibilities came with the pursuit of pure/basic research than with applied research. As Bridgman argued in 1947, scientists pursuing knowledge for knowledge’s sake alone should not be considered responsible for the societal impacts of their work (Bridgman, 1947). To burden scientists with such responsibility would not only place on them a responsibility not imposed in other fields of work, but would hamper their pursuit of truth. As Bridgman wrote:

“The challenge to the understanding of nature is a challenge to the utmost capacity in us. In accepting the challenge, man [sic] can dare to accept no handicaps. That is the reason that scientific freedom is essential and that artificial limitations of tools or subject matter are unthinkable.” (ibid., p. 153).

For Bridgman, imposition of societal responsibility for the impact of science on society was just such an “artificial limitation” on science, and thus to be rejected. Those doing the work of applying science (i.e., applied science) in particular areas could instead shoulder the responsibility for the societal impact of science. As Vannevar Bush put it, in the pursuit of basic science, “the free play of free intellects” was essential, with no other constraints (quoted in Sarewitz, 2016; see also Rohe, 2017).

Yet the arguments of the SFS left an open question. If scientists pursuing basic research were to follow their own curiosity wherever it led, without thought for the societal impact of knowledge production, why should the public fund these scientists? What was the public to gain that would justify utilizing the public purse to support such scientific work?

While SFS members like Bridgman and Baker argued in rather abstract terms for the value of the pursuit of truth for its own sake, public funding in substantial amounts required something stronger. Vannevar Bush’s 1945 report, Science: The Endless Frontier, provided a more potent answer to this question: basic research provided the basis for applied research, which in application produced societal good. In addition to the idea of the importance of basic research for eventual application, Bush argued that basic research was what required public support, as this was the research that would not be funded by industry with private money. Industry could support work with a reasonably short probable payoff, but the long-term investment in basic knowledge, free from industry’s accountability to shareholders, required public funds. Further, Bush argued that WWII had depleted the “stocks” of basic research from which society could draw, and thus basic research needed an infusion of public dollars, as well as ongoing long-term support (Bush, 1945). The public would eventually be repaid for their investment in the form of wonderful new consumer products, improved public health, and military security resulting from the eventual application of basic knowledge.

This is the linear model for science funding (as it later came to be known): Public funds are placed at the start of a pipeline of knowledge production, and sent to basic scientific research efforts (Balconi et al., 2010). Scientists themselves decide how those funds should be distributed (to the best projects from a scientific perspective, with no thought to eventual application). Scientists using such funds would remain at their home institutions (e.g., academic scientists at universities, industrial scientists in private industrial labs), and the funds would arrive through the instrument of the contract research grant. Once the basic research was completed and published, scientists and engineers working for industry could apply it for the benefit of their company and of society as a whole.

Although Bush never called this funding model “the linear model,” it provided a justification for public funding of scientific research that was (and still is) central to science policy in the U.S. and in many OECD countries (Balconi et al., 2010). Funding statistics are still reported in terms of basic and applied science. Policies have been generated in an attempt to accelerate the pipeline from basic science to applied science to use. In practice, the contract research grant, through organizations like the National Science Foundation (NSF) and the National Institutes of Health (NIH), funneled unprecedented levels of public funds to academia. Even within the explicitly applied and mission-oriented research of the Department of Defense, the linear model was hugely influential. The military believed in the importance of pursuing basic research (for later applications) to such an extent that it created special programs for supporting basic research by the 1960s. Massive overhead fees on contract work (weapons development) were provided to fund basic research pursued by military contractors; this actually distributed more public funds to basic research than the NSF did for some years in the 1960s (Asner, 2004; Hounshell, 2004).

And even as public funds for basic research expanded, the sense of societal responsibility when pursuing this work did not. Recall that this model for funding also came with a view on the societal responsibilities of scientists when pursuing basic research: beyond doing good science (which was adjudicated within science only), they had none. Bridgman argued forcefully in the post-WWII context that scientists should bear no responsibility or accountability for the societal impacts of their work; the pursuit of scientific truth was so challenging that the work could bear no handicaps, not even consideration of the eventual impact of one’s work. Members of the SFS agreed (McGucken, 1978, p. 48).

This view of the importance of freedom for science—the autonomous nature of the scientific community (from society, from moral concerns, from politics) as a central good to be protected—remained central from the 1940s into the 1950s and 1960s. Such autonomy was argued to be essential if science was to be able to find truth, the key value of science. For example, reporting in the Bulletin of the Atomic Scientists on a Congress for Cultural Freedom conference on “Science and Freedom” held in Hamburg, Germany in 1953, which opened with talks by SFS founders Polanyi and Baker, Edward Shils noted that “the conception of the autonomous scientific community” was the theme of the conference (Shils, 1954). Discussants agreed that scientists needed to decide for themselves which projects to pursue, and that government financial support was required but should not direct the efforts of scientists. Discussions about what kinds of institutional structures afforded funding without political influence were central. Shils noted that some German scientists raised moral concerns regarding science and its methods, but he dismissed such efforts as a distraction from the core issues (ibid.). Even though German scientists tried to insist that “the scientist had to be concerned for the consequences of his work,” Shils portrayed this part of the debate as merely producing “clamor and the pursuit of hares,” i.e., not of central importance (ibid., p. 153). Moral restrictions on science were not discussed by the SFS and were largely set aside by those focused on scientific freedom.

In later statements about the responsibilities of scientists for the social impact of science, applied scientists were thought to be the primary bearers of societal responsibilities, being close to application; but applied scientists were also less free, because of the institutional settings in which they worked (e.g., industrial research labs or defense labs). The position and responsibilities of scientists pursuing basic research were thought to be exactly the opposite: more free, and less societally responsible. Even with the rise of concerns in the 1970s over human subjects research, the development of chemical agents for use in warfare such as Agent Orange, and debates about the use of recombinant DNA, the idea that if one was doing basic research, one had less societal responsibility still held sway. This can be seen in the 1975 AAAS report on “Freedom and Responsibility in Science” (Edsall, 1975). Although motivated in part by societal concerns over the impact of science, the report still divided its discussion of responsibility in science into two general parts: basic science (pp. 6–23) and applied science (pp. 23–30). The discussion of responsibilities for “basic scientists” focused on doing work without fraud or manipulation, properly sharing work, and properly giving credit for work (pp. 6–12). The report discussed several areas where ethical restriction of research could be justified (pp. 12–23), but generally argued for as little restriction as possible, on the grounds of scientific freedom and the benefits of knowledge foregone. Ethical restrictions were seen as an imposition on scientific freedom that was sometimes (rarely) justified, and basic scientists were to obey such restrictions (in limited cases). Beyond complying with such restrictions, however, concern with societal impact was not the job of scientists pursuing basic research. That researchers’ choices might bring with them responsibilities for societal impacts was discussed not with respect to basic science, but with respect to applied science (pp. 26–29).

Thus by the 1970s, the scientific community was beginning to grapple with the complexity of societal responsibility in science, but still doing so within the terms of the mid-20th century social contract. Apart from some limitations on their research choices, scientists pursuing basic science were thought to have no responsibilities to society other than pursuing good science; scientists pursuing applied science (using basic science to pursue particular applications) were the ones who had to think about societal impact and bear the responsibility for it. This view of the relationship between freedom and responsibility for the societal impact of one’s work continued through the end of the 20th century (Douglas, 2021b).

In sum, the mid-20th century social contract for science was built out of three conceptual pieces:

1) a distinction between basic (or pure) and applied research;

2) an idea of scientific freedom with no societal responsibility for the impacts of the science when pursuing basic research; and

3) the linear model for science funding, from basic to applied to public good.

These three components together constituted an implicit social contract for science: if one was pursuing basic research, one could expect public support and funding (or at least access to substantial funding opportunities) in exchange for eventual positive public impact through applied science. But one doing basic science was not responsible for that impact; the responsibility was someone else’s job, usually that of those further down the pipeline of application (following the linear model). This social contract isolated scientists from any sense of societal responsibility for their work. As we will show in the next section, the components of the social contract for science served as key presuppositions in philosophical work on the VFI.

3 The social contract and the value-free ideal

The components of the social contract for science together create the conditions for the VFI, primarily because the social contract, under its conception of freedom in science for basic research, rejects societal responsibility for basic research scientists. This understanding of the relationship between science and society informed debates about values in science by the 1960s.

Consider, for example, Isaac Levi’s “On the Seriousness of Mistakes,” his 1962 response to Rudner’s (1953) paper on the necessity of values in science. In his opening discussion, Levi argues that the statistical procedures of interest to Rudner are important for both “theoretical and practical problems.” (Levi, 1962, p. 47) Levi argues, however, that decisions about what is to be taken as true should not be conflated with decisions about “the technological and policy making aspects of scientific activity.” (ibid.) The idea that some science is solely concerned with discovering truth (basic science) whereas other science is geared towards application and use (applied science) is central to the framing of Levi’s argument. He embraces the distinction between basic and applied science, and eschews societal responsibility in inference for basic science, clearly reflecting the prevalent social contract for science at the time. His discussion focuses solely on the impact of “caution” in inference internal to scientific practice (ibid., p. 63).

Or consider Carl Hempel’s classic essay “Science and Human Values.” When laying out the argument from inductive risk in his “rules of acceptance” for science (and coining the term “inductive risk”), Hempel distinguishes between pure and applied science (Hempel, 1960/1965, pp. 92–93). While applied science clearly did need to address the acceptability of uncertainties in terms of social values, the situation was different for basic or pure research. Hempel wrote that in the case of “pure scientific research, where no practical applications are contemplated, the question of how to assign values … becomes considerably more problematic.” (p. 93) To address this problem, Hempel suggests attending to what would become known as epistemic values, aiming for “an increasingly reliable, extensive, and theoretically systematized body of information about the world.” (ibid.) Thus, in the face of epistemic uncertainty, scientists pursuing basic research were to focus on epistemic considerations only.

Hempel’s emphasis on social and ethical values for inference in the applied sciences but only epistemic values for pure science is a clear expression of the VFI within the social contract. Recall that the VFI requires that no social or ethical values be involved in scientific inference. A complete account of scientific inference includes not just an assessment of the relationships among evidence and theory (how strongly the evidence supports a theory), but also an assessment of whether the available evidence is sufficient for accepting a claim. The VFI holds that both assessments, of evidential strength and of evidential sufficiency, should be made without reference to social and ethical values. The argument from inductive risk (AIR), often discussed as one of the strongest critiques of the VFI, rests in part on the idea that scientists do have some social responsibility to consider during their inference practices, namely that scientists should consider the impact of error (an ever-present risk in empirical science) on foreseeable social concerns. This is why social and ethical values, according to the AIR, are needed in scientific inference: because scientists have basic moral responsibilities to consider the impact of their work on society (Brown, 2020; Douglas, 2003, 2009; Havstad, 2022). The mid-20th century contract rejects precisely this responsibility for basic science, claiming scientists pursuing basic research have no responsibility for the societal impact of their work.

If scientists should not be concerned with the societal impact of their work, it follows that they should not consider social and ethical values when weighing evidential sufficiency. The removal of societal responsibility considerations for basic research (a central component of the social contract for science) means that basic research scientists have no responsibility to consider the societal consequences of error when making scientific inferences. This is a bulwark of the VFI for science. Within the frame of the mid-20th century social contract for science, AIR has no purchase and social values have no relevance for inference in basic research.

Indeed, within the social contract, the role for values in basic science is even more limited than the VFI requires. Because the social contract isolates scientists from societal concerns (so that they can pursue the free play of free intellects without worrying about social impacts), there is no important place for social and ethical values anywhere in basic research practice, including in the direction of scientific attention. The choices of what to study, how to study it, and when a study should be deemed complete are to be made with reference to epistemic considerations only. Social concerns are not relevant to basic research, which is solely concerned with the pursuit of truth about the world, wherever it may lead. It was through the application of basic research (by applied scientists and engineers) that the societal good that justified public expenditure would eventually be realized. Thus the VFI, as formulated by the 1960s and focused as it was on the role of values in scientific inference, was more permissive concerning the influence of social and ethical values than the social contract suggested. Nevertheless, the social contract’s general tendency towards isolating scientists doing basic research from society was central to the VFI.

In sum, within the frame of the mid-20th century social contract for science, social and ethical concerns were not to play a role in scientific inference in basic research, because basic research scientists were not responsible for the societal impacts of their work, and thinking about those impacts would only distract from the pursuit of truth. The VFI reflected this, and arguments for it at the time relied upon the social contract for science for their framing. The only role social and ethical values could play would be a distorting or biasing one, steering scientists away from accurate science. The mid-20th century social contract for science thus set the conditions for the VFI as an obvious corollary. In exchange for public financial support for basic research, scientists were to pursue empirical truths without concern for social impact. Social impact was something that applied scientists were to consider and be responsible for, not basic research scientists. Social and ethical values would be nothing but distorting factors within basic research. Aiming for value-freedom would thus be the most valuable thing a basic research scientist could do.

4 Rejecting the components of the social contract and broader implications

The mid-20th century social contract for science is no longer tenable. Each of the components of the social contract for science has fallen under serious criticism if not outright rejection in the past few decades. We will discuss central problems for each here. Then we will turn to the broader contexts in which the mid-20th century social contract has been vitally important, namely science advising, science education, and science communication (Branch & Douglas, 2023). In each of these areas, the impact of the social contract is also being rejected. A revised contract is thus clearly needed.

For the purposes of the VFI, changes in the understanding of scientific freedom and social responsibility are most central. Bridgman had argued that demanding that scientists be responsible for all impacts of their work was an unfair burden (Bridgman, 1947). Douglas (2003) argued for a more limited set of responsibilities: that scientists should be responsible for the foreseeable impacts of their work (rather than all impacts), and that this responsibility was not a special burden for scientists but rather in line with the general moral responsibilities of all agents. Scientists have received no special dispensation freeing them from this basic general moral responsibility (Douglas, 2003). Recent statements by scientific societies have articulated not just a general moral responsibility for scientists along these lines, but a professional responsibility to consider societal impacts in all their work (AAAS, 2017; ISC, 2021). It has been suggested that wrestling with concerns over dual-use research, for which substantial risk of harm (due to weaponization potential) could arise in any area of research at any time, led to this broadening (Douglas, 2021b). Even before dual-use concerns became a potent issue, however, the need for more societal responsibility in scientific research, even basic research, grew out of demands for protections for human and animal subjects beginning around 1970. The concerns with societal impact that were part of the 1975 Edsall report discussed above have only grown, and the limited but clear restrictions on some aspects of scientific work argued for in that report have not proved sufficient for the burgeoning calls for broader societal responsibility. In the 21st century, the idea that basic research involves freedom from societal responsibility has been completely overturned (ibid.). If the pursuit of all scientific research, including basic research, involves the consideration of societal impact (whether from general responsibilities not to be reckless or from professional responsibilities as articulated by scientific societies), then a key premise of the AIR challenge to the VFI holds, and the VFI must be rejected.

The other components of the mid-20th century social contract for science have also been heavily critiqued, if not as roundly rejected. Edgerton (2004) suggests that the linear model was not properly named until it was being criticized, a trend which began around 1970 (Edgerton, 2004, p. 34). Critique accelerated in the 1980s (ibid.). Defenses of the linear model are still made, but they are modest defenses, claiming that the linear model sometimes tells us something about the direction of research, rather than treating it as the basic blueprint and justification for funding structures (Balconi et al., 2010). Yet which systems for science funding are most effective for which purposes has not been well studied. There have been particular studies of some systems (starting with Project Hindsight and Project Traces in the 1960s), but no systematic study of the impact of public funding of science on public goods (Pirtle & Moore, 2019). As Kitcher noted in 2001, “[Vannevar] Bush had no detailed empirical studies of inquiry under different conditions of organization…. Indeed, the necessary studies are still lacking.” (Kitcher, 2001, p. 140) Despite some work raising concern about the assumption that basic research funding leads to societally valuable breakthroughs (e.g., Nicholson & Ioannidis, 2012), we are not in a much better position more than two decades later. The linear model continues to be both a widely shared presumption and an object of criticism in this context of uncertainty; it has not been as decisively rejected as the conception of scientific freedom from responsibility.

The distinction between basic and applied science that underlies the linear model (ordering the components of the model) has also been subject to criticism. Stokes (1997) argues for a more complex terrain. Douglas (2014) argues for the arbitrariness of the distinction. Schauz (2014) provides an in-depth historical analysis of the term, showing both its importance in structuring 20th century science policy and the problems that it generated. Shaw (2022a) argues that the distinction between basic and applied science is only useful in cases of “urgent science,” when external time pressures demand scientific information quickly. There remain defenders of a general basic vs. applied science distinction (e.g., Roll-Hansen, 2017), and others have noted its political potency, even as it falls out of favor in discourse (Pielke, 2012). Yet the main reason for the continued use of the term seems to be not that it is conceptually clear or incisive, but rather a lack of alternatives (ibid.).

Rejecting the old social contract also has implications for the understanding of science in society beyond the VFI. In particular, the old social contract structured the relationships between science and society beyond the responsibilities of scientists and the funding of science. Ideals and norms for science advice, science education, and science communication (for example) were also shaped by the old social contract.

Consider science advice. In line with the social contract’s separation of science from society, the ideal science advisor was “independent”: expected to eschew political goals (other than protecting scientific integrity), to be unswayed by power, and to be immune to social values. The ideal of the independent science advisor was most clearly articulated by Don K. Price in The Scientific Estate (Price, 1965). He argued that the relationship between science and government was best understood through four “estates” coming together to create the “spectrum from truth to power.” Science, at the far end from power and politics, was focused on truth alone, a clear reflection of the social contract’s norms. This distance generated the sense of independence, which the science advisor was to embrace. This independence included independence from societal values, reflecting the VFI’s rejection of such values in the evaluation of evidence.

By the 1970s, the independence anchoring the science advisor model became unsustainable. The veil of non-partisanship and value-freedom that shielded science advisors was wearing thin, as public disputes around important political issues challenged science advisors’ political neutrality (Jasanoff, 1990). But rather than acknowledge the inherently value-laden nature of science advising, new models focused on the collective management of advice, such as an emphasis on ‘balance’ for advisory panels and on transparency of process and results, as enshrined in the 1972 U.S. Federal Advisory Committee Act (ibid.). This would not eliminate the problems with the ideal of independent, value-free science advising, and would lead to ongoing debates about the role of scientific expertise in governance (e.g., Pamuk, 2021; Turner, 2014). Science advisors still struggle with the ideal of independence, and alternative ideals have only recently come to the fore (Douglas, 2021c). The old social contract makes it difficult to articulate better ideals for science advice.

A similar influence of the social contract, and similar resulting failures, can be seen in science education and science communication, both of which were considered important for generating broad public support for science. Public support for science in general was considered essential for recruiting new scientists into scientific careers, for engaging properly with scientifically based public policy issues in broader democratic debate (generally, trusting scientific expertise), and for ensuring basic science funding. Key to all these goals was for the public to grasp both the value of scientific facts and particular scientific facts themselves. The norms for science education within the mid-20th century social contract emphasized these goals. A year after the successful launch of Sputnik (1957), the U.S. passed the National Defense Education Act (NDEA), making science literacy — a combination of knowledge about scientific facts and a positive attitude towards science — a priority. The postwar growth of standardized testing (Miller, 1983) reinforced the emphasis on scientific facts for literacy goals, reflecting the social contract and the embrace of the VFI at the time (Branch-Smith, 2019; Claxton, 1997).

More recent science education curricula have revised the learning goals and experience in the classroom to move beyond the mere learning of science theories and facts. What should count as essential for science education goals remains contested, however, as debates continue about how to teach the nature of science, how much open-ended inquiry should be part of the curriculum, and how much the accepted findings (facts) of science should be centered (Osborne et al., 2003).

Outside the classroom, science communication during the Cold War also aimed to bolster public understanding of and support for science. The ‘Sender-Message-Receiver’ model structured the process of communication, aiming to transmit information unidirectionally from scientific experts to science communicators and then to the public (Broks, 2014). Any rejection of science was viewed optimistically as something that could be corrected by simply providing more information about science, a view which seemed to be confirmed by low rates of scientific literacy (first measured in the U.S. in 1957). The Deficit Model of science communication came to predominate, reflecting institutional anxiety about a non-expert public damaging science with its ignorance and non-epistemic values, thus reinforcing the attractiveness of insulating science in accord with the social contract and the VFI (Branch-Smith, 2019). If science was pursued in a pure fashion outside of societal concerns and produced unquestionable truths as a result, then the public’s rejection of science would clearly be the public’s fault.

By the 1990s, criticism of the Deficit Model began to mount. More recently, the focus has shifted away from fact-based literacy deficits to other types of deficits (e.g., interest, attitudes, and most recently, trust), where the model’s ability to redefine the deficit is seen as a testament to its resilience (Bauer, 2016). In all these cases, the problem is never with science itself; the deficit is always situated within the public sphere. Seeing the Deficit Model as a product of the social contract for science, and revising the contract, could allow better models of science communication to emerge (see, e.g., Fraser et al., 2021; Hyland-Wood et al., 2021; Irwin, 1995).

In sum, both in the narrower contexts of responsible science and science funding and in the broader contexts of science advising, science education, and science communication, the ideals of the mid-20th century social contract for science have come under substantial pressure, where they are not outright failing. Revising the social contract is crucial work, and doing so will shift our views of which ideals for values in science we should hold.

5 Conclusions

The mid-20th century social contract for science, composed of the distinction between basic and applied science, the presumption of freedom from societal responsibility in the pursuit of basic research, and the linear model justifying public funding of basic research, enabled the VFI to be articulated and broadly accepted. If one was pursuing basic research, even with the support of public funds, one was ideally isolated in a purely epistemic bubble. It was under this conception of scientific research that the VFI took its potent 20th century form and came to dominate philosophy of science.

The mid-20th century social contract is no longer tenable in the 21st century. As noted above, the current conception of scientific freedom is one that comes with societal responsibility for the impacts of one’s research, even basic research. Social and ethical values are thus a crucial part of good scientific practice. The VFI is now a poor ideal. Yet we cannot solve the “new demarcation problem” of what should be a good ideal for values in science without also attending to, and revising or replacing, the old social contract that made the VFI work (Holman & Wilholt, 2022).

Indeed, with the core components of the old social contract undermined or rejected, the contract itself needs to be substantially revised or replaced. We will need an account of what constitutes scientific freedom and responsibility, and some ontology of the kinds of scientific endeavors we might fund (and how to allocate funds and evaluate success for funding projects). As noted above, “freedom from responsibility” has been replaced with “freedom with responsibility,” but the specific terms of societally responsible science remain to be developed fully. What should scientists be responsible for, and just as important, what are they not responsible for? Which structures need to be reconfigured, which dismantled, and which created anew in order to facilitate properly responsible science in the 21st century? Answering these questions is beyond the scope of this paper, but this is the kind of work that needs the attention of philosophers of science and will be central to settling debates about the role of values in science.

The funding systems for science also need to be rethought. Basic (vs. applied) science is no longer a tenable basis either for a shield from responsibility or (through the linear model) as a justification for public funding, but pursuing research for curiosity’s sake is still valuable. In structuring our funding systems, what are the relevant kinds of scientific research, and how should our funding systems support them? Again, the work of philosophers of science who can attend to the epistemic, ethical, and political challenges of funding systems will be crucial.

We must reformulate the social contract for science in order to replace the VFI. While there have been substantial and important critiques of the components of the mid-20th century social contract, a fully developed replacement has been elusive. A new ideal for values in science will depend on how the components of the social contract are revised or replaced, just as the old components made the VFI seem an obvious result. This is an ambitious and challenging project, but one made necessary by the demise of the old social contract and the resulting arguments against the VFI. It is a project to which philosophers of science should be central.