1 Introduction

There is growing consensus that the value-free ideal of science is problematic and that social, ethical, and other contextual values can play necessary and beneficial roles in scientific research (e.g., Anderson, 2004; Brown, 2013; Douglas, 2009; Dupré, 2007; Elliott, 2017; Intemann, 2015; Intemann and de Melo-Martín, 2010; Kourany, 2010; Lacey, 2005; Longino, 1990; Wylie and Nelson, 2007). Yet if contextual values, such as social, ethical, or political values, sometimes play such roles in science, what implications does this have for science communication directed toward non-experts? To what extent (if any) should science communicators be transparent about such values and the roles they play throughout the research process? This question is particularly pressing in cases where there is significant resistance among some non-experts to accepting certain scientific claims or adopting science-based policies or recommendations (de Melo-Martín, 2023; de Melo-Martín & Intemann, 2018; Goldenberg, 2021). There are concerns that making such values explicit is impractical (Schroeder, 2021) and may even decrease public trust in science, particularly among those who are inclined to think that scientists are biased if they are influenced by non-epistemic values (John, 2018) or among those who do not share the values of scientists (Odenbaugh, 2003; Schroeder, 2021). This paper defends a tempered account of the norm of value transparency in science communication on the grounds that it can promote warranted trust in experts better than obscuring such values. Section 2 identifies and explains four central conditions of epistemic trustworthiness supported by the philosophical literature and what they mean for how we understand warranted epistemic trust. Section 3 argues that transparency about values in science communication can help provide non-experts with good reasons to believe that each of those four conditions of epistemic trustworthiness is met. Section 4, however, considers three strong arguments that such transparency is either impractical or likely to undermine (rather than promote) one or more trust conditions. I show that while these concerns point to important ways in which the norm of value transparency needs to be tempered, they fail to provide sufficient reason to think that value transparency ought to be abandoned as a norm for science communication aimed at promoting warranted epistemic trust. Finally, Sect. 5 examines how a qualified norm of transparency about values might be understood in science communication.

2 Understanding warranted epistemic trust in scientific experts

Non-experts depend on experts to produce and share knowledge that can guide decisions about which actions to take, what treatments or products to use, or which science-related policies to support (Intemann, 2023; Rolin, 2021). Dependency on experts is increasingly necessary as scientific knowledge becomes more complex and specialized, with greater reliance on intricate technological devices and cognitive tools such as models (de Melo-Martin and Intemann, 2018). This makes it difficult for those who are not experts in a particular field to fully understand the state of knowledge within that field. As a result, it is important for non-experts to trust experts.

Although trust in experts is often necessary, trust is risky because it makes us vulnerable (Baier, 1986, 235). Ideally, we want to promote trust that is well-placed, or warranted. That is, we want to ensure that laypersons have good reasons for trusting scientists or scientific institutions. As many have argued, trust is more than mere reliability (Baier, 1986; Hawley, 2012, 2014; Jones, 1996). Thus, it is important for non-experts to find experts not just reliable, but trustworthy (Douglas, 2009; Grasswick, 2010; Hardin, 2006; Hardwig, 1991; Longino, 2010; Rolin, 2021; Scheman, 2001; Scheman, 2011).

To be trustworthy about a particular area of knowledge, experts must possess the appropriate expertise in relation to that area of knowledge (Almassi, 2012; Hendriks, Kienhues, and Bromme, 2016; Mayer, Davis, and Schoorman, 1995), or be in a position to know (Goldenberg, 2023). Finding experts to be trustworthy also requires believing that experts, or the institutions governing them, uphold standards and methods that are epistemically reliable, including standards for scientific integrity (Hendriks et al., 2016; Irzik & Kurtulmus, 2019; John, 2018). In addition, experts are trustworthy when they are honest and sincere qua scientists (Almassi, 2012; Baier, 1986; Irzik & Kurtulmus, 2019). Finally, finding experts to be epistemically trustworthy requires that they have a good will or good intentions (Almassi, 2012; Frost-Arnold, 2013; Rolin, 2020; Mayer, Davis, and Schoorman, 1995), such as caring about the wellbeing of the publics they serve (de Melo-Martín & Intemann, 2018; Goldenberg, 2021), or having the right attitude about the possible epistemic consequences of their work for the public (Wilholt, 2013). When, for example, scientists or institutions have financial interests that conflict with public interests or wellbeing, this can lead non-experts to believe they are less trustworthy (de Melo-Martín & Intemann, 2018; Goldenberg, 2016; Goldenberg, 2021). Similarly, when scientists disregard the interests or values of particular groups (by, for example, historically failing to obtain informed consent, or systemically neglecting their needs in developing new innovations), those groups may understandably find scientists or scientific institutions untrustworthy (de Melo-Martín & Intemann, 2018; Grasswick, 2010). There is some disagreement about whether the good will requirement involves a general commitment to the public good (e.g., Goldenberg, 2023) or alignment with the interests of particular non-experts (see, e.g., Irzik & Kurtulmus, 2019). In any case, it clearly requires that experts have a commitment to the interests of others (not just their own).

Epistemic trustworthiness, then, is thought to involve four key featuresFootnote 1:

(i) epistemic competence, i.e., the expert is in a position to know;

(ii) epistemic reliability, i.e., the expert conforms to reliable standards for producing knowledge;

(iii) honesty, i.e., the expert is disposed to tell the truth and be sincere; and

(iv) a good will, i.e., the expert shares a commitment to the public interest or cares about those whom their work impacts or about their wellbeing.

Non-experts have warranted epistemic trust in experts to the extent that they have good reasons to believe that each of these conditions of epistemic trustworthiness is met.Footnote 2

On this account, warranted epistemic trust comes in degrees. To the extent that non-experts have good reasons to think that an expert, group of experts, or expert institution meets all four of these conditions, then epistemic trust is fully warranted. To the extent that one or more of these conditions is lacking, warranted epistemic trust is undermined and trust may be withdrawn.

Of course, a non-expert might be mistaken about whether one or more of these conditions are met, in which case their trust can be misplaced. Such mistakes can happen when, for example, someone wrongly attributes expertise or epistemic credentials to those who do not in fact have them, or when one is unaware that an expert might have conflicts of interest or ulterior motives that conflict with the public wellbeing.

One goal of science communication is to help facilitate warranted trust by giving laypersons good reasons to think the conditions of epistemic trustworthiness are metFootnote 3 (Intemann, 2023). This goal is particularly important because if warranted trust is lost, it can be very difficult to regain (Baier, 1986; Grasswick, 2010). Thus, science communicators will want to avoid communicating in ways that would give non-experts reasons to doubt the commitments experts have (to epistemic reliability, to honesty, or to the public good), which could result not only in withholding trust, but also in distrust. The question is: to what extent does transparency about values in science communication promote or undermine warranted trust in experts? In the next section, I consider this question.

3 The case for value transparency

Many scholars (including many scientists) have urged greater transparency in science and science communication for a variety of reasons. Transparency about scientific decision-making is thought to help facilitate reproducibility of research, hasten knowledge production and innovation through the sharing of resources, manage potential conflicts of interest, ensure research accountability, address the inevitable role of values in science, and promote public trust (de Melo-Martín and Intemann, 2018; Douglas, 2009; Elliott, 2017; NAS, 2018; Nosek et al., 2015). Yet the concept of “transparency” is complex, potentially involving different actors (e.g., individual scientists, scientific institutions, or organizations), audiences (e.g., other scientists, policymakers, the public, those impacted by different kinds of research), or mechanisms for communication (e.g., published articles or reports, expert testimony or recommendations, science journalism, social media) (Elliott, 2022, p. 347). Moreover, there are different things that science communicators might be transparent about, including methodologies, results, deliberations or disagreements in generating reports, uncertainties, values in decision-making, or values in policy recommendations (Elliott, 2022, p. 347). For my purposes, I will focus on the extent to which experts (which might include scientists, scientific organizations, or institutions) ought to be transparent when communicating with non-experts about relevant value judgments in decision-making that occur while conducting research and in making science-based policy recommendations. For brevity, this is what I will refer to as “value transparency.”

There are at least prima facie reasons to think that value transparency in science communication could promote all four of the conditions necessary for warranted epistemic trust. Consider the condition of epistemic competence. As mentioned, warranted epistemic trust in experts requires that experts possess expertise in the knowledge domain that is being communicated. When experts give testimony or communicate about subjects in which they have no relevant expertise, they engage in what has been called “expert trespassing” (Gerken, 2018, 2022; Hardwig, 1994). If non-epistemic values play roles in scientific decision-making, particularly in generating results or recommendations, there is a risk that scientists will go beyond their domain of expertise, since scientists are not experts about such values. Value transparency is a way for experts to acknowledge the limits of their expertise. It can serve to demonstrate that scientists are being open about assumptions at stake that are beyond the scope of their expertise and to allow those assumptions to be scrutinized. In doing so, experts may avoid expert trespassing.

Second, value transparency may enhance the grounds for finding experts to be epistemically reliable. Making value judgments transparent demonstrates an openness to having those assumptions evaluated and potentially corrected or revised in future research. This can give non-experts evidence that scientists are not dogmatic, but rather are open to scrutiny and willing to respond to criticism if their assumptions are shown to be problematic.

Third, and perhaps most obviously, if values play a role in science, then a commitment to honesty would seem to require scientists to be transparent about such values and articulate the ways in which their decision-making may depend on those values. Transparency and openness are sometimes taken to be indicators of honesty and sincerity, while a lack of transparency may raise doubts about what experts are hiding or whether they are being truthful.

Fourth, value transparency also has the potential to demonstrate that experts are indeed being guided by a good will. Much publicly funded research, even basic research, is aimed not just at producing knowledge, but at producing knowledge that is likely (ultimately) to enhance public wellbeing in some way, such as promoting public health, addressing diseases, protecting air or water, sustaining ecosystems, developing renewable energies, improving infrastructure, or addressing national security. Communicating the values involved in scientific decision-making has the potential to justify decisions in relation to such values that are centrally tied to public wellbeing and widely shared, at least in some cases. For example, when researchers communicate that they find a vaccine safe and express the benefits and costs that inform that judgment, they often appeal to public health and protecting vulnerable populations, which could be evidence of a commitment to the public interest. Even when the values involved are more controversial, transparency can demonstrate that experts respect and care about the autonomy of non-experts by allowing them to decide for themselves whether to accept or reject expert claims partially based on those value judgments, or at least show that they respect those who may not share the same values.

Thus, there is a prima facie case that value transparency in science communication can facilitate warranted epistemic trust insofar as it helps communicate reasons to believe experts meet the four conditions necessary for epistemic trustworthiness. Nevertheless, there are several challenges to the prima facie case that value transparency is likely to promote the conditions for warranted epistemic trust. I will explore the arguments that have been offered against value transparency in the next section.

4 The case against value transparency in science communication

In what follows, I will evaluate three central objections to the claim that value transparency is likely to promote warranted epistemic trust in experts. Several have argued that value transparency is (1) impractical and thus ineffective for promoting warranted trust; (2) likely to undermine warranted trust because of widespread false beliefs about how science works; and (3) likely to undermine warranted trust because the values that non-experts hold will often conflict with those held by scientists. I contend that although these arguments reveal important complexities about science communication, they fail to undermine the prima facie case for value transparency.

4.1 The argument that value transparency is impractical

One of the arguments in the previous section was that value transparency might help give non-experts good reasons to think that experts are epistemically reliable because it may reassure non-experts that experts are open to critical scrutiny of their value judgments. Several have argued that transparency about values is only likely to be useful in this way if the non-expert is able to use that information to evaluate how those values influenced (or did not influence) an expert’s conclusions (de Melo-Martín and Intemann, 2016; de Melo-Martín & Intemann, 2018; Schroeder, 2021). Yet such detailed analysis by non-experts is unlikely to be feasible.

Consider disclosures of conflicts of interest. By themselves, such disclosures might erode warranted trust (e.g., in an author or authors of a study). That is, a disclosure alerts non-experts that an author may have interests that might bias their results (or compromise their epistemic reliability). Such disclosures could only be useful in promoting trust among non-experts if they allowed non-experts to evaluate whether the conflicting interests negatively influenced the findings of the study. But disclosures are not very effective in doing this because many of the decisions that could be influenced by conflicts of interest are themselves opaque or would require additional expertise to identify and assess (de Melo-Martín & Intemann, 2009). Similarly, even if values are made transparent, non-experts will not be in a position to evaluate all of the ways that values influenced the results, what alternative value choices could have been made, and how those would have influenced the results (de Melo-Martín and Intemann, 2016; de Melo-Martín & Intemann, 2018; Schroeder, 2021). As Schroeder (2021) argues, this might be possible in cases of very simple science, where values operate in ways that are easy to explain, the alternative values are obvious and few, and how they would have led to different results is rather straightforward. But in more complicated science, where the ways in which values operate may be numerous and related to rather sophisticated expert decision-making (such as complex statistical analysis), it is doubtful that non-experts would be able to make such evaluations. Indeed, in many of these cases it is not clear that scientists are aware of and can articulate all of this, even if they wanted to.

The argument that value transparency is impractical, however, presupposes that we cannot promote value awareness among experts and that value transparency would necessarily require that an expert provide all of the details about every value judgment made throughout the research process. It is not clear that either of these assumptions is correct. First, as Schroeder (2021, 550) acknowledges, there may be many reasons to promote awareness about such value judgments among experts. Thus, value transparency may not be sufficient for promoting epistemic trust but could be done in coordination with some efforts to make scientists more aware of how values may operate.

Second, value transparency, properly understood, need not impose a requirement to communicate in detail every single decision made throughout research and the role that values played. Attempting to communicate such details would be not just an onerous requirement but one that would conflict with other goals of communication. The norm of value transparency would need to be tempered by other goals of science communication, such as communicating in ways that are accessible or promote understanding (Intemann, 2022). This is already the case in the medical profession. While truth-telling is viewed as an ethical obligation for health professionals, it cannot be understood as a requirement to tell a patient everything about their condition or how to treat it (Kirklin, 2007; Sullivan et al., 2001; Tuckett, 2004). This would be overwhelming, confusing, and frustrating, and it would fail to support the patient’s ability to make an autonomous decision about their treatment.

Thus, value transparency in science communication should be understood with similar qualifications. As mentioned earlier, non-experts depend on expert information to make informed decisions about what to believe, how to act, and what science-based policies or recommendations they should support. Preliminarily, then, the norm of value transparency may be qualified as the claim that science communicators ought to be transparent about those value judgments in scientific decision-making that might be significantly relevant to non-experts’ decisions. I will return to what this might involve in Sect. 5. For now, however, it is important to note that while this argument establishes that the norm of transparency may not be sufficient for warranted epistemic trust and may need to be further qualified, it does not establish that transparency is not an important part of promoting such trust. This brings us to a second, related argument against transparency.

4.2 The argument from false folk philosophy of science

A second concern is that value transparency in science communication is not just ineffective but likely to undermine warranted epistemic trust in experts in practice, given that non-experts widely believe that scientists ought to be value-free and that non-epistemic values are an indicator of bias. As John (2018) argues, laypersons tend to trust experts when they believe that expert institutions have standards that ensure the epistemic competence, integrity, and reliability of scientists. However, it is also widely believed that science and its practitioners must be value-free. While science studies scholars may be well aware that science necessarily or appropriately involves values, many non-experts (and even some scientists) are not. These false “folk philosophy of science” views are quite widespread (Gerken, 2022; John, 2018; Kovaka, 2021). It is generally believed, for example, that scientific objectivity requires freedom from contextual values, that all of science uses a step-by-step “scientific method” to empirically test hypotheses (which are also free of contextual values), and that hypotheses should only be accepted when they are “proven true” (about which there would be little uncertainty or disagreement) (Kovaka, 2021). In a context where these false assumptions about the nature of science are widespread, there is a risk that value transparency will lead to a widespread perception that experts lack objectivity and thus are neither epistemically competent nor reliable, undermining warranted epistemic trust (John, 2018, p. 81).

Yet if laypersons believe scientists are not trustworthy because they have false beliefs about what it means for scientists to be epistemically competent or reliable, their judgment is unjustified. Thus, the argument from false folk philosophy of science at most shows that transparency can undermine trust, not that it can undermine warranted trust. Of course, if people withdraw trust when they should not, they would fail to believe scientists when in fact they should. But this does not seem to call for eliminating the norm of value transparency. As before, this suggests that transparency must be accompanied by additional interventions that help correct false beliefs about how science works (Douglas, 2017; Kovaka, 2021). As John acknowledges, correcting such false beliefs, combined with education and outreach programs, may be beneficial on several grounds and may promote warranted trust in the long term (John, 2018, p. 82).

John’s concern, however, is that this would take too long and that the need for public trust in scientists can be urgent and critical, as in the case of climate change or any other area of science where science-based policies or recommendations are time sensitive and there is a need to garner large-scale public support for them quickly. If this is correct, it is important to consider the alternatives to transparency. Is the claim that science communicators ought to obscure or hide value judgments involved in science to secure the trust of non-experts in the short term?

There are several reasons to think that obscuring values in science communication is likely to be worse than promoting transparency (along with interventions to correct false folk beliefs) for promoting warranted trust, particularly in the long term. First, communication that obscures values is likely to reinforce those false beliefs about how science works by allowing them to go unchallenged. Allowing such misperceptions to remain is still likely to lead some non-experts to unwarrantedly withdraw trust from experts, even when values are not involved. For example, when non-experts think that science only involves the testing of hypotheses against empirical evidence (and that any unpredicted observation would be grounds for rejecting a hypothesis), they may fail to trust experts whose work does not conform to that expectation. The construction of climate models, for instance, is not best understood as the development of hypotheses that are empirically confirmed or disconfirmed (Parker, 2009). Rather, models can be seen as useful tools (particularly when they are taken as model ensembles) that can be used to promote particular aims, such as making predictions about a certain scope of phenomena over a specified (usually long-term) timescale (Parker, 2009). Indeed, as Kovaka (2021) argues, the fact that some climate skeptics have the false belief that climate models should be treated as hypotheses may help explain why such skeptics reject some climate science. Some climate skeptics have seized on the failures of General Circulation Models (GCMs) to make highly accurate short-term predictions about sea-level rise, temperature fluctuations, or the weather in particular regions, and take these empirical failures as evidence that the models are unreliable and the experts who promote them are not trustworthy (e.g., Haskins, 2017; see Kovaka, 2021). Yet these are not the kinds of predictions that GCMs are intended to make, and such failures do not disconfirm their long-term predictions about average global temperature. Therefore, reinforcing false views about how science works can still lead non-experts to think that experts are not epistemically reliable (because they are going about science improperly) and to withdraw trust (unwarrantedly), regardless of whether values are involved.

Second, obscuring values to maintain the perception that science is value-free would essentially trick non-experts into thinking that the conditions of trustworthiness are met when in fact they are not. That is, it would provide reasons for thinking experts are epistemically competent and reliable that are in fact false, while also making non-experts believe that experts are honest and sincere when they are not. Insofar as this is the case, what is being promoted is essentially unwarranted trust in science, which is ethically dubious and epistemically precarious. It is ethically dubious because it treats non-experts as a means to an end, failing to respect their autonomy or their value as rational beings. It is epistemically precarious because if non-experts were ever to realize that values play a role in scientific decision-making, they would likely conclude that scientists are untrustworthy on all four grounds. They would think scientists are epistemically incompetent or unreliable for relying on values, but also dishonest and lacking respect for non-experts in trying to hide it. Such breaches of trust, as noted earlier, are difficult to repair once they occur, resulting in a long-term lack of trust.

Moreover, in some cases it is likely that some non-experts would realize that values are in fact playing a role in scientific decision-making and recommendations. Value judgments are always easier to identify when they are not one’s own (Longino, 1990), and it is likely that when value assumptions are playing a role in either scientific decision-making or in recommending policies, some non-experts will notice. For example, when scientific agencies and institutions made recommendations during the COVID-19 pandemic (recommendations for isolation, masking, or vaccinations), many non-experts were all too aware that while these recommendations might promote the value of public health, they conflicted with their individual values or interests (e.g., economic interests, interests related to their children’s education, or personal liberty) (Angeli et al., 2021). Thus, non-experts who continue to have the false belief that science ought to be value-free are still likely to withdraw trust if they detect that values are involved. More concerningly, they might even have good reason to do so if they have good reason to believe that scientists are not honest or are covering up the role of values when communicating with the public. In the case of COVID-19, obscuring the role of values gave non-experts who had some sense that values were involved reason to think public health experts or institutions were being dishonest, dogmatic, or advancing a hidden agenda.

Thus, obscuring or hiding values involved in scientific decision-making is potentially more problematic than value transparency combined with efforts to correct false assumptions about science. Obscuring values risks reinforcing false beliefs about how science works, making it more difficult for the conditions of warranted trust to be achieved or maintained. It also risks promoting unwarranted rather than warranted trust in experts, which is ethically problematic and epistemically precarious. I will now turn to the final argument for the claim that value transparency could undermine warranted trust in experts.

4.3 The argument from value disagreement

There is a further argument, however, that value transparency in science communication might undermine warranted epistemic trust in science, and in particular the belief that scientists have a good will or are committed to the “public interest”. The “public” is not monolithic and can obviously have diverse and often conflicting interests. Given that some non-experts are quite likely to disagree with whatever values influence scientific decision-making, calling attention to those values through transparency may well cause those who disagree with those values to reject scientific conclusions or recommendations. That is, transparency about values may give laypersons good reason to question whether scientists are genuinely committed to their interests in cases where the values of scientists and laypersons differ, or where the values are not widely shared among laypersons (Odenbaugh, 2008, 595). If our goal is to gain public support for scientific claims, policies, or interventions, then value transparency may not be pragmatically effective in gaining such support, particularly when those values are more controversial (Odenbaugh, 2008, 595).

This concern is consistent with some preliminary empirical research by Elliott et al. (2017), which examined whether transparency about scientists’ values influenced laypersons’ perceptions of the credibility of the scientists’ assessment of the state of the science or their policy recommendations. They found that value transparency was most likely to decrease the perceived credibility of scientists when respondents did not share the scientists’ values (Elliott et al., 2017, 13). Cologna et al. (2022) also found that the perceived trustworthiness of scientists depended on laypersons’ own values or whether they were inclined to support the policy in question.

While value disagreements exist that might cause at least some stakeholders to disagree with value judgments made by scientists, we should be cautious about concluding that value transparency would undermine warranted trust. First, the empirical evidence for this is limited and potentially flawed. Few studies have examined whether transparency about values (as opposed to transparency about other things, such as uncertainties or lack of consensus) impacts the perceived trustworthiness of scientists among laypersons. Even among the few studies done, the evidence is contested. For example, Hicks and Lobato (2022) did not find that scientists were perceived to be less credible when they were transparent about their values, even when those values differed from those that respondents held.

Moreover, even if a growing number of studies were to show that value transparency reduced either perceived credibility or perceived trustworthiness among laypersons, this is distinct from warranted trust. What is often being measured are laypersons’ attitudes and perceptions of the credibility or trustworthiness of scientists. However, as we have seen, perceptions of trustworthiness (or untrustworthiness) can occur for a variety of reasons, and not all of them are rational or well-grounded. Thus, even if empirical evidence suggests that value transparency can lead to a lack of perceived credibility, this does not mean that withdrawing trust would be warranted. In addition, how we ought to respond to a perceived lack of credibility would still be an open question.

Finally, suppose we accept that the preliminary empirical evidence that value transparency erodes some public epistemic trust in experts is strong. It does not follow that a norm of value transparency ought to be abandoned, or that experts ought to obscure the role of values in science or science-based policy recommendations. The evidence suggests that the problem arises when the values expressed by scientists are not widely held. The lesson that might be drawn, then, is not that scientists should obscure the values that play a role in their science, but that scientists ought to base their conclusions on value judgments that are widely held or that can be supported by those with diverse values (Odenbaugh, 2008). Schroeder (2021), for example, has argued that scientists should ground trust by appealing to values that have been accepted through democratic processes that non-experts would accept as fair. Or, in making science-based policy recommendations, experts might appeal to the ways in which particular policies have the potential to support multiple different values (Goldenberg, 2021). For example, science communicators might explain how particular policies to mitigate or adapt to climate change might support a variety of values (such as protecting the environment, creating jobs, or securing energy independence). Thus, transparency about values need not undermine warranted trust, so long as experts still demonstrate a good will, or a commitment to the public interest.

Of course, at this point, one might wonder why we should care about warranted epistemic trust at all if what we are really concerned with is getting non-experts to rely on what experts say and recommend in making decisions about what to believe and how to act. A focus on securing reliance, however, is likely misguided. First, if the ethical conditions for trustworthiness are not met, then this is also likely to impact non-experts’ willingness to rely on expert claims or recommendations. If non-experts believe that scientists lie or do not care about anyone’s interests but their own, this is likely to give them good reason not to rely on what they say or recommend. Thus, securing reliance is also likely to require giving non-experts good reasons to think the other conditions of trustworthiness are met. Second, promoting warranted epistemic trust is important not only for getting people to believe or act in accordance with expert testimony. There are additional ways in which experts need the trust of non-experts (Goldenberg, 2023). Experts need the trust of non-experts to generate long-term support for public funding of science and for the pursuit of certain new areas of science or technologies that may carry risks (such as artificial intelligence, gene editing, or nanotechnologies) (Goldenberg, 2023; Resnik, 2009). Experts need non-experts to accept new technologies or interventions even when there are significant uncertainties and the risks and benefits are not fully understood (Frewer, 2001). Non-experts also need to be willing to engage with experts, both to make experts aware of social or material conditions that might be relevant to the scientific problems they are trying to solve and to increase their understanding of science and scientific possibilities. If experts and science communicators focus only on what will get non-experts to rely on experts for belief and action in a particular case, they may neglect the conditions required for the kinds of trusting relationships that facilitate public support of experts.

The argument from value disagreement, then, does not establish that value transparency undermines non-experts’ warranted epistemic trust in experts, but it does show again that value transparency is not sufficient for promoting such trust. It will also require that experts rely either on values that are widely held or on values that are in some way the result of democratic processes, as opposed to their own personal values.

Thus, none of the three arguments undermine the prima facie case that value transparency in science communication can help facilitate laypersons’ warranted trust in scientists or scientific institutions. What they do show, however, is that there are several ways in which the norm of value transparency ought to be qualified. Specifically:

(a) Value transparency can only function as a norm insofar as experts are aware of the values involved in their decision-making, which may require increasing experts’ understanding of how implicit value judgments operate.

(b) Value transparency cannot require science communicators to articulate every single complex decision in which values might be involved, as well as how alternative values might have led to different results.

(c) Value transparency is not sufficient for promoting warranted epistemic trust and must be accompanied by both efforts to correct false beliefs about how science works (particularly the belief that it ought to be value-free to be objective) and efforts to ensure that the values relied upon by science appeal to public wellbeing or demonstrate an acknowledgement of and respect for values that might differ.

In the next section, I will examine the implications these qualifications have for science communicators in practice.

5 Implications for value transparency in science communication

I have argued that there is a prima facie case that a norm of value transparency in science communication can help promote warranted trust in scientists and scientific institutions. I have also argued that a proper understanding of this norm requires the key qualifications listed above. What are the practical implications of a qualified norm of value transparency for science communication?

First, what exactly must be communicated? As stated earlier, value transparency requires experts, when communicating with non-experts, to be transparent about value judgments in decision-making that occurs while conducting research and in making science-based policy recommendations. This is consistent with Gerken’s (2022) recommendation that experts communicate the scientific justification of their expert findings. Insofar as values may be relevant to that justification, they should be communicated. Various philosophers of science, however, have identified several ways in which contextual values might play a role in decision-making during the research process, including how funding decisions are made, how research questions are framed, the choice of conceptual frameworks or ontologies, the selection of methodologies, background assumptions in making evidentiary assessments, calculations of inductive risks, or recommendations for evidence-based policies or actions. It would likely be confusing for laypersons if science communicators detailed every way in which contextual values played a role in a particular case or area of science. This is particularly the case when identifying the values involved might require a detailed or technical understanding of the methodologies, concepts, and uncertainties at every stage of research. After all, other goals of science communication include clear and accessible messaging or guidance, which might conflict with a detailed account of all of the evidence, how it was attained, or what assumptions were made. Therefore, science communicators will need to balance these considerations in particular cases. How much needs to be communicated may depend on the audience, context, and other goals of science communication. Yet it is helpful to keep in mind what it takes to promote warranted trust and, specifically, to demonstrate a commitment to the public interest or non-experts’ wellbeing. Non-experts (regardless of their values) have an interest in understanding the state of the science or science-based recommendations in ways that will allow them to exercise their autonomy and make decisions about how to act or what policies to support or adopt.Footnote 4 Thus, facilitating trust will require experts to be transparent about the values that may be relevant to non-experts’ ability to assess the science and science-based recommendations (Intemann & de Melo-Martín, 2023). In communicating about the toxicity of substances, for example, it may be particularly important to explain that there is a chance of error and to explain the values assumed in decisions about which kinds of error are taken to be most acceptable (e.g., whether risks to human health or risks of overregulation are being given more weight). Communicators need not always go into every detail of decision points that involved inductive risk (although this might be appropriate in some contexts, for example, in communicating with other scientists). In short, science communicators should at least be transparent about those assumptions that might reasonably make a foreseeable difference to laypersons’ assessments of the science or to their decisions about what they ought to do or support. Of course, scientists and scientific institutions can only be transparent about value assumptions they are aware of, and this may not always be the case. But this is also why science studies scholars have an important role to play in helping scientists identify and make more explicit the values that may be operating in their decision-making.

Second, while some value transparency may be important to facilitating warranted trust, it may need to be done in conjunction with other strategies that minimize the risk that it will cause some non-experts to withdraw trust from experts. Given that people do tend to have misconceptions about how science works, these misconceptions also need to be corrected, not only in particular moments of science communication but on an ongoing basis through education and outreach efforts. Public engagement with science has often focused on increasing public understanding of the science, but equally important is helping laypersons understand how science works (Douglas, 2017; Potochnik, 2017). Some scientific outreach and engagement efforts should clearly focus on how science works and on correcting commonly held false assumptions about it, and this may be one area where philosophers of science could play an important role. One challenge is that funding for engagement and outreach activities is often tied to particular research projects and aimed at disseminating understanding or literacy about that particular area of science (rather than how science works in general). Devoting more funding to these general efforts might be desirable, but having science communicators emphasize how research in a particular area reflects the ways in which science is generally pursued would also be helpful. Science communicators should also look for ways to communicate how science works when they are communicating about aspects of their work (including value judgments) that are likely to conflict with false folk philosophy of science views. For example, in communicating about climate models, communicators might be explicit about the purpose of such models (what they do and do not tell us) and why, for example, certain value assumptions have been made in relation to risks or uncertainties.

Third, it is important to communicate about values in science in ways that also acknowledge potential value disagreements. Science communicators will need to anticipate and make clear the value assumptions about which there may be reasonable grounds for disagreement. While this may indeed cause non-experts to reject some expert conclusions or recommendations, it would facilitate warranted trust by communicating respect for non-experts and giving them the opportunity to evaluate how different value judgments might have produced different evidence or conclusions (McKaughan & Elliott, 2013). This will likely not be enough to persuade laypersons to change their values or support a policy that conflicts with their values, but it may be the kind of communication that prevents people from withholding trust from scientists despite their value disagreements. Science communicators can also acknowledge potential value disagreements and facilitate trust by qualifying expert testimony to indicate where the values involved may be contested and go beyond their domain of expertise. This is consistent with Gerken’s expert trespassing guideline (Gerken, 2022, 167). Of course, experts will also be able to help address potential value disagreements by relying on values that are widely held, as opposed to their own personal values.

6 Conclusion

Science communication, like many scientific practices, has the potential to facilitate or undermine warranted epistemic trust in scientists and scientific institutions. I have focused specifically on whether value transparency, or transparency about the role of contextual values in scientific decision-making and science-based policy recommendations, can promote non-experts’ warranted epistemic trust in experts. Promoting such trust requires providing non-experts with good reasons to believe that scientists and scientific institutions are (i) epistemically competent, (ii) epistemically reliable, (iii) honest, and (iv) possessed of a good will or committed to working in the public interest. As we have seen, value transparency by itself is not sufficient to promote warranted trust and requires that we also increase experts’ awareness of how values play a role in their decision-making as well as non-experts’ understanding of how science works. While value transparency may risk causing some to withdraw their trust in experts (unwarrantedly) in the short term (before widespread views about how science works have changed), it is likely to promote warranted epistemic trust in the long run better than obscuring values. Obscuring value judgments essentially promotes unwarranted trust, which, especially if discovered, risks long-term damage to warranted trust and is thus likely to generate problems in securing public support for future policies or actions, no matter how urgently needed. Value transparency also risks alienating laypersons who hold values different from those endorsed or assumed by scientists. This risk can be minimized, however, if scientists aim to appeal to widely accepted values or to a diversity of values. If that is not practically possible, then science communicators should still be transparent about the values at stake. Doing so is likely to demonstrate both moral reliability and a commitment to the public interest and will facilitate warranted epistemic trust, even if this does not translate into the acceptance of particular recommendations that conflict with those non-experts’ values. While more research is needed to examine what value transparency might require in different communication contexts, I have aimed to show that some degree of value transparency, particularly when value judgments are relevant to non-experts’ ability to assess the state of some area of science or to evaluate what policies or actions they should adopt or support, can help promote warranted epistemic trust.