1 Introduction

The value-free ideal of science (VFI) is the view that non-epistemic values, such as moral and social values, are not allowed to influence scientific judgment in the core activities of scientific inquiry. The core activities are often understood as practices where knowledge claims are evaluated, either accepted or rejected, and communicated to others. Philosophers have pursued two interrelated questions examining the legacy of the VFI.Footnote 1 One question is what roles (if any) moral and social value judgments can legitimately play, or should play, in the core activities of scientific inquiry. Another question is which moral and social values are appropriate for these roles, assuming that moral and social value judgments can legitimately play such roles. The former question is sometimes called the Proper Roles Question and the latter the Proper Values Question (Rolin, 2021).Footnote 2

In this contribution to the topical collection on the Legacy of the Value-Free Ideal of Science, I approach the legacy from a different angle. I ask which principles should guide non-experts when they place their epistemic trust in scientific experts, and what scientific experts can do to merit non-experts’ epistemic trust. If by “trustworthiness” is meant someone’s actual trustworthiness and by “credibility” others’ perception of it, then trustworthiness and credibility can come apart. An account of how experts can earn epistemic trustworthiness is useful in three ways. It helps philosophers understand (i) what experts can do to build and maintain epistemic trustworthiness, (ii) what they can do to establish credibility in the eyes of non-experts, and (iii) what non-experts need to know to identify epistemically trustworthy experts. My aim is to understand what roles (if any) moral and social values have in building and maintaining epistemic trustworthiness and demonstrating it to diverse publics.Footnote 3

My approach is motivated by tensions between two competing ways of understanding how trustworthiness, credibility, and the VFI are connected. According to the first view, to be trustworthy and establish credibility in the eyes of citizens, scientific experts should aim to be as neutral as possible with respect to moral and social values. To find out how widespread this view is, Elliott et al. (2017) gathered data with an online survey examining how citizens view scientists who publicly acknowledge values in comparison to those scientists who do not do so. They found that endorsing moral and social values publicly is risky for scientific experts as it is likely to diminish their credibility in the opinion of citizens. As Elliott and his colleagues summarize their findings: “Our results provide at least preliminary evidence that acknowledging values may reduce the perceived credibility of scientists within the general public, but this effect differs depending on whether scientists and citizens share values, whether scientists draw conclusions that run contrary to their values, and whether scientists make policy recommendations” (Elliott et al., 2017, 1).Footnote 4 That explicit value judgments may diminish experts’ credibility from citizens’ point of view is not surprising given the predominance of the VFI in common thinking about science. As Branch (2022) argues, the VFI has informed much of science education and communication until recently, and this explains in part why citizens continue to treat experts’ advocacy of the VFI as a social indicator of epistemic trustworthiness even though the connection between epistemic trustworthiness and the VFI has been questioned in the philosophical literature.Footnote 5

The second view on trustworthiness, credibility, and the VFI points to reasons that have led many philosophers to question the connection between epistemic trustworthiness and the VFI. According to the second view, scientific experts do not need to strive for value neutrality to be epistemically trustworthy and establish credibility in the eyes of non-experts. Trustworthiness and credibility require that appropriate moral and social values constrain and guide scientific activities in proper ways (see e.g., Brown, 2020; Kourany, 2010). The general public is a divided group of people, and different subgroups have different expectations when they consider placing their epistemic trust in scientific experts. While some people may be suspicious of scientific experts without any grounds, others may have good reasons to distrust experts. As Scheman (2001) argues, members of marginalized or subordinate social groups often distrust scientific experts because there is a historical connection between science and social injustice. Distrust in the scientific profession can be rational when, for example, scientific theories have been used to justify systematic racism and non-white people have been subjected to harmful medical experiments without their informed consent (Bowen et al., 2022; Whyte, 2021; Whyte & Crease, 2010). Dark episodes in the history of science give reason to doubt whether scientific experts are benevolent and willing to understand the world from the perspective of marginalized or subordinate social groups. Given such doubts, members of marginalized or subordinate social groups are likely to turn to scientific experts for advice only when the experts share their social experiences and value perspectives. For example, willingness to engage in community-based research practices (Smith, 1999) or to share knowledge with marginalized and subordinate social groups (Grasswick, 2010) can be seen as a sign of trustworthiness. The second view rejects the assumption that scientific experts' attempt to stay value neutral is a social indicator of epistemic trustworthiness. Instead, it suggests that when experts lack epistemic trustworthiness for certain social groups, they can try to earn those groups' epistemic trust by responding to their concerns and knowledge needs.

This study defends and develops the second view further by analyzing the notion of epistemic trustworthiness. Like other philosophers (e.g., Almassi, 2012, 2022), I argue that epistemic trustworthiness is not only domain specific but also a relational property. Experts cannot possess it independently of a trusting party. To be epistemically trustworthy is to be epistemically trustworthy for someone. I also argue that epistemic trustworthiness depends, among other things, on the proper use of appropriate moral and social values in scientific inquiry. While many philosophers stress that epistemic trustworthiness has a moral dimension in addition to an epistemic one (e.g., Frost-Arnold, 2013; Hardwig, 1985, 1991; Wilholt, 2013), philosophy offers little guidance for situations where scientific experts need to earn epistemic trustworthiness in the eyes of diverse publics instead of taking their epistemic trust for granted. To fill this lacuna in the philosophical literature on public trust in science, I introduce an impact assessment model that helps philosophers understand what scientific experts can do to build and maintain epistemic trustworthiness in relation to those social groups who have reasons to distrust scientists. The impact assessment model goes beyond a call for better science communication because it engages a public in the assessment process, and it avoids the pitfall of merely paying lip service to the well-being of the public.

To understand the legacy of the VFI, it is necessary to focus on the moral dimension of epistemic trustworthiness. The legacy of the VFI, as I approach it here, is an understanding of how epistemic trustworthiness depends on moral and social values, given that it does not require scientists to strive for value freedom. I argue that an impact assessment model helps philosophers make progress on this issue. The model specifies what steps scientific experts can take to assess the impact of scientific research from a moral point of view and in collaboration with those who are impacted.

That I focus on the moral dimension of epistemic trustworthiness should not be taken to mean that the epistemic dimension is less important. Throughout the discussion I assume, like many other philosophers (e.g., Anderson, 2011; Goldman, 2006; Hardwig, 1985, 1991), that experts' epistemic trustworthiness depends on their competence and expertise. I also believe that their epistemic trustworthiness depends on how well scientific communities and institutions function epistemically. For example, scientific communities should be governed by norms that aim for the objectivity of scientific knowledge (Longino, 1990, 2002), and they should conduct scientific controversies so that a consensus emerging from these debates meets certain epistemic standards for proper acceptance (Miller, 2013). Ideally, scientific institutions oversee that scientific experts are evaluated in fair and reliable ways. Epistemic trust can hardly be rational under social conditions in which the institutional markers of competence and expertise (e.g., degrees, titles, and positions in formal organizations) fail to track these epistemic qualities (Rolin, 2020).

My discussion is organized in the following way. In Sect. 2, I explain why epistemic trustworthiness depends on moral and social values in addition to epistemic ones. I focus on the values of sincerity and honesty. In Sect. 3, I continue to explore the moral dimension of epistemic trustworthiness by discussing the good will account (Baier, 1986) and the commitment account of trustworthiness (Hawley, 2019). In Sect. 4, I distinguish two types of cases: (i) cases where scientific experts' sincerity, honesty, and commitment to the well-being of others are the default, and (ii) cases where at least one of these values cannot rationally be treated as the default. I focus on the latter type of cases, where the challenge is to understand what scientific experts can do to fulfill a commitment to the well-being of those who are impacted by the application of scientific knowledge. To answer this question, I introduce an impact assessment model, outlined in Akwé: Kon Guidelines, that scientific experts can follow to earn epistemic trustworthiness.

2 Epistemic trustworthiness, sincerity, and honesty

2.1 Trust and reliance

Before moving forward, it is important to clarify what I mean by epistemic trust and trustworthiness. Like other philosophical analyses (e.g., Goldberg, 2020; Rolin, 2020), I distinguish two ways of understanding trust: first, trust as mere reliance on another person to do something; and second, trust as reliance plus an additional factor, for example, an expectation that the trusted person acts out of good will toward the trusting party (Baier, 1986), or a belief that the trusted person has a commitment to do what she is trusted to do (Hawley, 2019). In what follows, I argue that while epistemic trust can be understood either as reliance on a testifier as a source of knowledge or as trust that involves more than mere reliance, the latter notion of trust is appropriate in many non-expert/expert relations.

Trust is epistemic when trusting a person functions as a reason to believe in the person's testimony. In a relation of epistemic trust, one person A trusts another person B to have good reasons to believe that p, and A's (domain specific) trust in B is a reason for A to believe that p (Hardwig, 1991, 697; see also Hardwig, 1985). When A is epistemically dependent on B, epistemic trust is an alternative to A's remaining ignorant of p. Epistemic dependence occurs when A has an interest in B's knowledge, but A is not in a position to access the knowledge on her own. This is the case in many non-expert/expert relations. Assessing the reliability of expert statements is something that only experts can do. Non-experts face the problem of understanding how they can learn from experts when they themselves are not able to evaluate the reliability of expert statements (Anderson, 2011; Goldman, 2006). As epistemic dependence is hard to escape completely and ignorance is not always an attractive option, epistemic trust seems to be a persistent feature of modern society and its relation to scientific experts. Relations between citizens and policymakers, on the one hand, and scientific experts, on the other, often involve a degree of epistemic trust. Relations among scientists can also involve a degree of epistemic trust as no one can be an expert on all topics even within one's own disciplinary domain (Andersen & Wagenknecht, 2013).

Given the ubiquitous nature of epistemic trust, there is much philosophical interest in understanding when epistemic trust in scientific experts is warranted (de Melo-Martín & Intemann, 2018; Goldenberg, 2021; Oreskes, 2019; Rolin, 2020). There seems to be agreement that A's epistemic trust in B is well-placed when B is trustworthy in the domain in which B is relied on as a source of knowledge, A has good reasons to believe that B is trustworthy in the domain, and A places her trust in B because of these reasons. Good reasons are often thought to involve evidence of B's epistemic trustworthiness, or at least the absence of reasons to doubt it.

This view gives rise to the question of what counts as evidence of someone’s epistemic trustworthiness. Many philosophers believe that such evidence should track both epistemic and moral criteria of epistemic trustworthiness (Anderson, 2011; Frost-Arnold, 2013; Goldenberg, 2021; Hardwig, 1991; Intemann, 2023; Rolin, 2020; Wilholt, 2013). Epistemic criteria require, among other things, that an epistemically trustworthy person has a reasonable degree of expertise in a relevant domain. Things are less clear for moral criteria. There is a controversy over the question of whether an epistemically trustworthy person needs to act sincerely and truthfully. In the remaining parts of this section, I review this controversy.

2.2 Sincerity and honesty

Sincerity and honesty are familiar principles of research ethics. They are meant to guide scientists in the core activities of scientific inquiry when they assess knowledge claims, either reject or accept them, and communicate them to others. Sincerity requires them to strive for consistency in what they think and say, and how they act. Honesty forbids them from fabricating, falsifying, or misrepresenting data. Instead, they should report data, reasoning, results, methods, and procedures truthfully, and disclose conflicts of interest publicly. Honesty should guide all scientific communication, including communication to colleagues, research funding agencies, and non-experts (Resnik, 2020).

To see why sincerity and honesty are thought to be relevant to epistemic trustworthiness, it is useful to understand how they relate to experts’ role as a testifier. In the philosophical literature, there are two approaches to understanding what it takes for a testifier to be epistemically trustworthy. Like Frost-Arnold (2013), I label these approaches the moral account of trust and the self-interest account of reliance. According to the moral account of trust, the rationality of epistemic trust depends not only on the competence of the testifier but also on her moral character (Hardwig, 1991, 700). To be trustworthy, a testifier B must be honest in offering testimony; otherwise, the hearer A does not have a reason to believe in what B says. As Hardwig argues, A’s good reasons to believe in the content of B’s testimony depend on whether B is speaking truthfully in the situation (1991, 700). Moreover, a trustworthy testifier is capable of “adequate epistemic self-assessment” (1991, 700). This means that one must be honest with oneself when it comes to the domain and degree of one’s expertise. Overconfidence weakens one’s epistemic trustworthiness as it increases the risk of advancing bold statements without adequate grounds.Footnote 6

Things are different for the self-interest account of reliance. According to this account, trust is a matter of mere reliance on the competence and the self-interest of the testifier (Frost-Arnold, 2013, 301). When A needs to decide whether she can rely on B as a source of knowledge, the question to consider is whether there are incentives for B to behave in a reliable way. A’s reliance on B is thought to be warranted when A has a reason to believe that incentives will guide B’s behavior in the right direction. Scientific communities and institutions are responsible for ensuring that B is rewarded for speaking truthfully and punished for deceiving others. According to the self-interest account of reliance, A does not need to have evidence of B’s moral character because it is sufficient to assume that B, being a self-interested agent, will respond to incentives and disincentives in prudential ways. Epistemic reliability is thought to depend on the institutional structures of science and not merely on the qualities of the testifier (for a similar view, see John, 2018, 78).

By emphasizing the importance of incentives, the self-interest account of reliance has certain advantages. While incentives may not guarantee that scientists are honest and sincere, they can make it easier and more rewarding for scientists to be honest and sincere than it would otherwise be. Also, the self-interest account helps explain why it is often rational to rely on a testifier whom one does not know personally. Reliance on a stranger can be rational insofar as scientific communities and institutions have the right kind of incentives and disincentives in place.

Yet, I argue that the self-interest account of reliance cannot fully replace the moral account of trust. As Hardwig argues, prudential considerations alone are not sufficient to guarantee that someone will be a reliable testifier (1991, 705; see also Frost-Arnold, 2013). This is because dishonesty is difficult to detect, and even when it is detected, the punishment for it is not severe enough to function as an effective deterrent. In Hardwig’s view, “Institutional reforms of science may diminish but cannot obviate the need for reliance upon the character of testifiers” (1991, 707). And he adds that “There are no ‘people-proof’ institutions” (1991, 707).

Another limitation of the self-interest account of reliance is that increased control and monitoring can create distrust among scientists. As Frost-Arnold argues, excessive monitoring of scientists' behavior may be counter-productive because some scientists interpret it as a sign of distrust and disrespect, and they do not try to live up to the expectations of those who do not respect them (2013, 307). This is bad news for the self-interest account as it depends on the ability of institutions to keep scientists on the right track by following their performance closely. For these reasons, it is unlikely that the moral account of trust can be fully replaced by the self-interest account of reliance. Nevertheless, the former can be supplemented by the latter. The self-interest account of reliance can compensate for the weaknesses of the moral account, and vice versa.

Thus far I have argued that both the institutional structures of science and the epistemic and moral qualities of scientific experts matter when it comes to building and maintaining epistemic trustworthiness. Next, I respond to two objections, the first one challenging the view that sincerity is relevant to epistemic trustworthiness, and the second one the view that honesty is the best policy to guide communication to non-expert publics. I label the first objection an argument from the collective nature of scientific knowledge and the second an argument from a false folk philosophy of science.

2.3 Objections and responses

An argument from the collective nature of scientific knowledge. John argues that a relation of epistemic trust between two individuals is not the only way to model how non-experts learn from experts (2018, 75). Non-experts learn from experts via a variety of media, for example, newspaper articles, the internet, TV and radio programs, textbooks, panel reports, and science and technology museums. In many cases, non-experts learn from a consensus view that has been passed to them by an intermediary (76). That there is a consensus view in a scientific community does not necessarily mean that all or most community members believe in the view. A consensus can be formed so that some community members let a certain view stand as the position of the community even when they do not fully agree with it personally (78–79). Such a consensus can be expressed by a group of scientists or by an individual scientist. In either case, it is possible that a consensus has been formed in a collective decision-making process where some disagreements and doubts have been pushed to the background. When there is such a consensus, it is appropriate to say that scientific knowledge is collective knowledge.

The collective nature of scientific knowledge has an interesting implication for the value of sincerity. If non-experts learn from a consensus view, it is not self-evident that the value of sincerity should guide science communication in every situation. When a testifier presents a consensus view, she may decide to suppress her personal view if the latter departs from the former. By doing so, the testifier is insincere, if by sincerity one understands "the virtue of fitting one's public claims to one's private attitudes" (John, 2018, 78). As John defines it, "to be sincere, an agent's public assertions must accurately reflect what she herself believes" (78). Yet, despite the insincerity, there seems to be nothing wrong with offering testimony about a consensus view rather than a personal view. Quite the contrary: when scientists act in the role of an expert, they are often expected to offer a state-of-the-art view on the topic. To do so, it is not necessary for the expert to be sincere.

An argument from a false folk philosophy of science. John (2018) argues that honest communication is not always effective communication when scientific experts reach out to non-experts. Honesty requires experts to report not only research results and methods but also uncertainties in the results as well as limitations of the methods (82). But reporting uncertainties and limitations honestly runs the risk of undermining the attempt to communicate the results of scientific research to the public. The risk is realized when a non-expert public subscribes to a false folk philosophy of science according to which uncertainty is a sign of epistemic failure. A more effective way to communicate research results is to simplify the message and leave uncertainties and limitations unreported. Thus, there is a trade-off between honest and effective communication. Honest communication may not be effective, and effective communication may require dishonesty. Yet, John grants that honesty should be the default in science communication (84). Effectiveness can be given priority over honesty only when the audience's prior understanding of science requires it.

Let me respond to the two objections. In response to the first objection, I argue that the collective nature of scientific knowledge does not require scientists to compromise their sincerity. When scientific experts address non-experts, they can say sincerely whether they intend to convey a consensus view or their personal view, in case the latter differs from the former. Moreover, that science communication is often mediated by third parties, such as science journalists and educators, does not mean that relations of epistemic trust are irrelevant to understanding expert/non-expert relations. For non-experts to learn from a consensus view requires a degree of epistemic trust in the scientific communities that guide the processes whereby consensus views are formed. While it takes some philosophical work to articulate what it means to place one's epistemic trust in scientific communities, it is uncontroversial to claim that epistemic trust is often directed at them (Wilholt, 2016).

In response to the second objection, I argue that dishonest (but effective) communication involves risks that are no less serious than the risks involved in honest communication.Footnote 7 Suppressing uncertainties in science communication can backfire when the risk of error is realized. When non-experts are disappointed in experts, their epistemic trust in experts is likely to be diminished. As Intemann (2023) argues, science communication can serve many goals (e.g., to inform the public, empower policymakers, motivate action, generate interest in science, promote scientific literacy), and its effectiveness depends on the goal it aims to achieve. When the goal of communication is to increase understanding about science, honesty is likely to be a good policy. As John (2018) himself acknowledges, honesty about uncertainties and limitations can amount to effective communication when the goal is to provide non-experts with an opportunity to learn about how science works.

To summarize, I have argued that sincerity and honesty are needed to establish the epistemic trustworthiness of scientific experts. To be warranted, epistemic trust does not always require that one has evidence of scientific experts' sincerity and honesty. Absence of evidence suggesting insincerity or dishonesty is often sufficient for epistemic trust to be warranted.Footnote 8 That individual experts should be sincere and honest does not mean that the institutional structures of science are less important. Scientific communities and institutions are also responsible for upholding these values. In this section, I have emphasized the role of incentives and disincentives in guiding scientists toward sincere and honest behavior. But scientific communities and institutions can do more than impose sanctions for insincere and dishonest behavior. They can influence their members by communicating their values explicitly, for instance, on public websites, in official publications, and in speeches at community events. They can allocate resources for research ethics education, and they can set a policy that requires scientists to apply for ethical review, if not always, then at least in some cases. Research funding agencies can require scientists to assess ethical issues in their proposed research projects, and scientific journals can require them to disclose conflicts of interest.

Insofar as the values of sincerity and honesty are necessary for epistemic trustworthiness, one might ask whether they pose a challenge to the VFI. At first glance, the answer seems to be "yes" because the two values are meant to guide scientists when they assess knowledge claims, either reject or accept them, and communicate them to others. However, the defenders of the VFI could argue that its purpose is not to ban the principles of research ethics from the core activities of scientific inquiry. Instead, the VFI is meant to protect these activities from other moral and social values. In the next section, I argue that epistemic trustworthiness has yet another moral dimension that poses a challenge to the VFI.

3 The good will and the commitment account of trustworthiness

3.1 The good will account of trustworthiness

Sincerity and honesty are not the only moral values that are thought to be relevant to epistemic trustworthiness. In this section, I examine the good will account of trustworthiness, an influential view suggesting that scientific experts' epistemic trustworthiness depends on their having good will toward those who are epistemically dependent on them or whose lives are impacted by the application of scientific knowledge. This view owes a lot to Baier's (1986) analysis of trust. According to Baier, a relation of trust involves more than A's reliance on B to do something or to take care of something. It involves an expectation that B has good will toward A, and that B is moved by good will to act in a trustworthy manner (Almassi, 2012, 46). As Baier argues, if A trusts B and B lets A down, then A is justified in feeling betrayed, and not merely disappointed (1986, 235).

Given Baier's account of trustworthiness, epistemic trustworthiness turns out to be a relational property. A's epistemic trust in B involves not merely reliance on B as a source of knowledge; it also involves an expectation of good will on the part of B toward A. Insofar as one is epistemically trustworthy, one is epistemically trustworthy for someone. As Almassi argues, an expert is trustworthy with respect to citizens only when the expert recognizes the citizens' epistemic dependence on her and takes the fact that they count on her as a compelling reason for striving to be trustworthy (2012, 46). Almassi develops this view further by arguing that when epistemic trustworthiness requires good will, it is reasonable to expect that epistemically trustworthy experts are responsive to non-experts in positive ways (2022, 3). Differently positioned trusting (or distrusting) non-experts are likely to have different expectations toward experts. It is not enough for them to have evidence of general reliability; they also need evidence of relationally responsive trustworthiness (4). This view is also shared by Goldenberg, who claims that "The publics need to determine that the expert is properly motivated, that they have the interests of the publics at heart" (2021, 124).

For my argument, it is important to highlight two implications of the good will account of trustworthiness. One implication is that there is no one-size-fits-all model for earning epistemic trustworthiness. The reason for this is that different trusting (or distrusting) publics have different expectations when they decide which experts to trust. While some publics assume without questioning that experts have good will toward them, others expect to see evidence of good will toward their social group. Such an expectation can be rational, as Scheman (2001) argues. The impact assessment model that I will discuss in Sect. 4 builds on this insight. The model suggests that efforts to build and maintain epistemic trustworthiness must be tailored to each public separately. What it takes for scientific experts to earn epistemic trustworthiness can vary from one social context to another.

Another implication is that the good will account poses a challenge to the VFI, the view that moral and social values are not allowed to influence scientific judgment in the core activities of scientific inquiry. To understand how the good will account undermines the VFI, one needs to turn to the inductive risk argument (IRA). This is what Wilholt does when he defends the relevance of Baier's good will account of trustworthiness to epistemic trust in science (2013, 248). According to the IRA, accepting or rejecting a hypothesis involves uncertainties, and a moral value judgment is necessary for a decision concerning an acceptable level of uncertainty. When a scientist accepts or rejects a hypothesis, she makes a moral value judgment, either implicitly or explicitly, concerning the potential consequences of error for those who are likely to be impacted by the application of knowledge. As Wilholt argues, epistemic trustworthiness depends on an expert's willingness to understand her moral responsibility for inductive risks and to make sound moral value judgments concerning these risks. This means that non-experts' epistemic trust in scientific experts is properly understood as "trust in the moral sense" (250). To make morally sound judgments of inductive risks, an expert must assess them from the perspective of those who bear the risks. This requires an expert to have good will toward them, or at least not to be fully "disinterested" (250). Mere indifference is not sufficient to ground epistemic trustworthiness because an indifferent expert is not concerned with people who might be harmed by errors. Wilholt concludes that non-experts' epistemic trust in scientific experts often involves an expectation that the latter have the right attitude toward the possible consequences of knowledge claims (251).

Wilholt's argument is developed further by Irzik and Kurtulmus (2021), who introduce the distinction between "basic epistemic trust" and "enhanced epistemic trust." In some cases, non-experts' reliance on scientific experts can be adequately described as basic epistemic trust. To be warranted, basic epistemic trust requires that an expert communicate her views sincerely and honestly and that her views be the output of reliable scientific research (4733). But in some other cases, warranted epistemic trust requires an expert to strive to meet a higher standard of epistemic trustworthiness. This is the case when scientific knowledge is put into use and public welfare is at stake. A higher standard of epistemic trustworthiness is called for by enhanced epistemic trust. In addition to what basic epistemic trust requires, enhanced epistemic trust demands that an expert make decisions regarding the distribution of inductive risks in agreement with non-experts' assessments of those risks (4734). As Branch summarizes the idea, "Enhanced epistemic trust is basic epistemic trust plus consideration for public welfare given inductive risk" (2022, 7). And she argues that enhanced epistemic trust is not compatible with the VFI (8). This is because a higher standard of epistemic trustworthiness requires experts to apply moral and social values in the core activities of scientific inquiry, that is, in their assessment of acceptable inductive risk.

Thus far I have discussed the view that sometimes scientific experts should aim for a higher standard of epistemic trustworthiness that involves acting out of good will toward those who are epistemically dependent on them or whose lives are impacted by the application of scientific knowledge. Next, I discuss a criticism of the good will account of trustworthiness.

3.2 The commitment account of trustworthiness

Hawley (2019) argues that the good will criterion sets too high a standard for trustworthiness by requiring the trusted person to act out of the right kind of motive. What makes the good will criterion demanding is that it compels the trusted person to respond to the expectations of those who trust her. But expectations may be inappropriate or unreasonable, and it would be odd to judge someone untrustworthy for being unable or unwilling to respond to such expectations (22). Trustworthiness cannot be a matter of responding to other people's expectations, independently of whether such expectations are appropriate or reasonable. As Hawley argues, "we will be pulled in different directions by the whims of those around us, unable to be trustworthy at all" (74). Therefore, acting out of good will should not be seen as a necessary condition of trustworthiness (20).

Hawley (2019) proposes an alternative account of trustworthiness to specify when someone is trustworthy (or untrustworthy), and when an attitude of trust (or distrust) is appropriate in the first place. Like Baier (1986), Hawley believes that trust involves more than mere reliance on another person to do something (2019, 2). But unlike Baier, Hawley believes that to trust someone is to rely upon that person to fulfil a commitment (9). To be trustworthy is to live up to one’s commitments and to be untrustworthy is to fail to do so (23). Given this view, untrustworthiness can arise not only from ill will or indifference but also from well-intended over-commitment (73). Thus, trustworthiness involves avoiding unfulfilled commitments (74). In Hawley’s view, distrust involves an expectation of unfulfilled commitment (9). In other words, to distrust someone to do something is to believe that she has a commitment to doing it, and yet not rely upon her to meet that commitment (9).

Hawley (2019) argues that the commitment account of trustworthiness avoids problems inherent in the good will account. While the good will account is excessively demanding, the commitment account makes trustworthiness feasible as long as one avoids taking on too many commitments. The commitment account does not require one to respond to other people’s expectations, independently of whether the expectations are appropriate or reasonable. As long as one lives up to one’s commitments, it does not matter what one’s motives are (76). The commitment account suggests that attitudes of trust and distrust are appropriate when there are commitments and inappropriate in the absence of commitments.

Hawley (2019) does not discuss non-expert/expert relations, but her general account of trustworthiness can be applied to the epistemic trustworthiness of experts. Given the commitment account, epistemic trustworthiness does not depend on experts’ motivations. The crucial thing is that they live up to their commitments. While it goes without saying that scientific inquiry involves epistemic commitments, the IRA suggests that it involves a moral commitment to assess inductive risk so that one considers the well-being of those who are impacted by scientific knowledge. Hawley’s account shifts the focus from just having good will to fulfilling a commitment to good will. While this may seem to be a small change, it has implications for what experts can do to earn epistemic trustworthiness. The impact assessment model that I will discuss in Sect. 4 is a tool for demonstrating a commitment to good will toward local communities.

I argue that the commitment account of trustworthiness is consistent with the two implications of Baier’s account. This can be seen by replacing the good will condition with a requirement for a commitment to good will. The first implication is that there is no one-size-fits-all model for earning epistemic trustworthiness because epistemic trustworthiness is a relational property. This is still the case when epistemic trustworthiness is thought to require that experts fulfill a commitment to good will toward those who are impacted by the application of knowledge.

The second implication is that epistemic trustworthiness requires scientific experts to exercise moral judgment in a way that poses a challenge to the VFI. I argue that when epistemic trustworthiness is understood as a matter of fulfilling commitments, it still poses a challenge to the VFI. This is because epistemic trustworthiness requires scientific experts to live up to a moral commitment to the well-being of those who are impacted by the application of scientific knowledge (as the IRA suggests). The moral commitment needs to be demonstrated in the core activities of scientific inquiry, and therefore, it is not compatible with the VFI.

To summarize, both the good will account and the commitment account of trustworthiness highlight a moral dimension of epistemic trustworthiness. An epistemically trustworthy expert should either act out of good will toward others or live up to a commitment to consider their well-being. To use Hawley's (2019) terms, to place epistemic trust in a scientific expert is to believe that she has epistemic and moral commitments, and to rely upon her to meet these commitments. Epistemic trustworthiness also requires that one avoid taking on commitments one cannot fulfill.

4 An impact assessment model

“The word itself, ‘research’, is probably one of the dirtiest words in the indigenous world’s vocabulary. When mentioned in many indigenous contexts, it stirs up silence, it conjures up bad memories, it raises a smile that is knowing and distrustful.” (Smith, 1999, 1).

In many cases, scientific experts' sincerity, honesty, and commitment to the well-being of others are the default. This means that an expert is assumed to live up to these values and commitments unless there is a reason to suspect otherwise. To be warranted, epistemic trust does not always require one to have evidence of experts' moral integrity. What matters is the absence of evidence that would undermine the default.

But things are very different in cases where scientific experts’ commitment to the well-being of others cannot rationally be treated as the default. For example, historical connections between science and social injustice give members of Indigenous communities a reason not to rely on scientific experts to meet the commitment. In such cases, the challenge is to understand what it takes for scientific experts to demonstrate their commitment. How can they go about assessing inductive risks and other risks from the perspective of people whose epistemic trust they cannot take for granted? To answer this question, I discuss an impact assessment model and argue that it offers a way to fulfill a commitment to the well-being of others.

4.1 Akwé: Kon guidelines

The model of impact assessment can be found in Akwé: Kon Guidelines, published by the Secretariat of the Convention on Biological Diversity in 2004.Footnote 9 The 25-page document outlines voluntary guidelines for the conduct of impact assessment regarding policies, developments, and research projects proposed to take place on sacred sites and on lands and waters traditionally occupied or used by Indigenous communities. The guidelines reflect the work of Indigenous scientists and scholars on community-based participatory research methods (e.g., Smith, 1999). While there are many research ethics guidelines for research involving Indigenous people, Akwé: Kon Guidelines is especially well suited to helping philosophers understand what earning epistemic trustworthiness can mean in practical terms. This is because the guidelines focus on identifying and preventing potentially adverse impacts on the culture and livelihoods of Indigenous communities. As the guidelines offer detailed recommendations, I highlight only five major points.

First, the guidelines recognize that there are such things as sacred sites, customary laws, and traditional knowledge. Their aim is to “respect, preserve and maintain traditional knowledge relevant for the conservation and sustainable use of biological diversity, and to promote its wider application” (CBD, 2004, 1). The document aims to provide guidance on “how to take into account traditional knowledge, innovations and practices as part of the impact-assessment processes and promote the use of appropriate technologies” (2). The recognition of sacred sites, customary laws, and traditional knowledge is crucial for impact assessment as it makes it possible for scientific experts to identify harms that could otherwise remain invisible to them. It also signals a departure from extractive or exploitative research which fails to recognize the meanings that places, knowledges, and artifacts have for Indigenous people.

Second, the guidelines aim to integrate the assessment of cultural, environmental, and social impacts into a holistic picture of the consequences of putting knowledge into use (including the consequences of errors). Cultural impact assessment is defined as a process of evaluating the likely impacts of a proposed project or development on the way of life of a community (CBD, 2004, 6). It includes cultural heritage impact assessment which is concerned with the likely impacts on the physical manifestations of a community’s cultural heritage, including sites, structures, and remains of archaeological, architectural, historical, religious, spiritual, cultural, ecological, or aesthetic value or significance (7). Environmental impact assessment is a process of evaluating the likely environmental impacts, considering interrelated socio-economic, cultural, and human health impacts, both beneficial and adverse (7). Social impact assessment aims to evaluate the likely impacts that may affect the rights as well as the well-being of people. Such impacts can be measured in terms of various socio-economic indicators, such as income distribution, physical and social integrity and protection of individuals and communities, employment levels and opportunities, health and welfare, education, and availability and standards of housing and accommodation, infrastructure, and services (7).

Third, the guidelines introduce a collaborative framework for preparing an impact assessment.Footnote 10 According to the guidelines, all those who are likely to be affected by the use of scientific knowledge should be involved. The relevant parties involved in the assessment process may include the proponent of the development, one or more governmental agencies, Indigenous communities, other stakeholders, and the scientific experts conducting the assessment (CBD, 2004, 8). This said, scientific experts face the often-difficult task of reaching out to Indigenous communities. For example, they need to decide which languages and which public means of notification to use (e.g., print, electronic, and personal media, including newspapers, radio and TV, mailings, and village/town meetings) (9). The guidelines advise scientific experts to reach out to organizations representing affected Indigenous communities. Those who represent these communities need to have a mandate to do so. Moreover, "Notification and public consultation of the proposed development should allow for sufficient time to allow the affected Indigenous community to prepare its response" (10). Like any other research project with human participants, the impact assessment requires prior informed consent of Indigenous communities, not only of individual community members but also of communities as collectives (21).Footnote 11

Fourth, the guidelines advise scientific experts to establish effective mechanisms for Indigenous community participation, including the participation of women, youth, the elderly, and other vulnerable groups (CBD, 2004, 8–9). Importantly, the guidelines recommend that local experts and their expertise be recognized and engaged in the process at the earliest opportunity (11). It is also important to establish an agreed process for recording the views and concerns of participants (11). There must be sufficient resources to carry out the process.

Fifth, the guidelines ask scientific experts to establish a process whereby Indigenous communities have the option to accept or oppose a proposed development. Also, there must be a plan detailing how possible adverse cultural, environmental, and social impacts resulting from the use of knowledge are avoided or mitigated, and how this process is managed and monitored (CBD, 2004, 12). The process needs to end with an agreement, and it should be possible to appeal the agreement (13).Footnote 12

I argue that when a process that follows the impact assessment model is conducted successfully, it is likely to help scientific experts earn epistemic trustworthiness in the eyes of the participating community. This is because the process is a concrete way to live up to the commitment to the well-being of the community. In the next section, I discuss two objections to the model.

4.2 Objections and responses

An argument from unreasonable burden on scientists and Indigenous communities. One possible objection is that the impact assessment model imposes an unreasonable burden on both scientific experts and members of Indigenous communities, including Indigenous scientists and scholars who are urged to participate in this capacity. The process is time-consuming and requires financial and other resources. This is a problem especially when scientific experts are under pressure to produce results quickly, for example, due to urgent health concerns (e.g., an epidemic or pandemic). The burden of participation can also lead to research fatigue among Indigenous people. West (2020), a Sámi scholar in Finland, calls attention to the Sámi people's research fatigue:

For centuries, the Sámi have been researched, inspired by ideologies and trends of the time, regardless of whether or not the Sámi themselves have benefited from the findings. In many cases, the studies have not only burdened the Sámi communities, but have caused direct suffering and anxiety. These studies include the Sámi racial hygiene studies and the excavations of Sámi graves as examples of studies that have caused trauma to their subjects or relatives and have increased both mistrust and suspicion toward scholars.

As West argues, the burden of research participation is emotional in addition to being time-consuming and financial. The burden can become a practical obstacle to many research projects, thereby slowing down the progress of science.

An argument from undesirable politicization of science. Another objection is that the impact assessment model leads to the politicization of science. For example, Schroeder (2021) argues that an attempt to earn epistemic trustworthiness by aligning one's values with the values of some members of the public runs the risk that science becomes politicized in ways that are harmful to society. He is worried that "it will be too easy to write off any differences in scientific conclusions as traceable to differing values—too easy for environmentalists to assume that any time pro-environment and pro-industry scientists reach different conclusions, it must be due to different underlying, legitimate value judgements" (553). In his view, it is not reasonable to expect that each member of the public or each social group has access to "'personalized' science" (556).

Let me respond to the two objections. To respond to the first objection, I start with a discussion of the burden on scientific experts and then move on to discuss the burden on Indigenous communities. I argue that Akwé: Kon Guidelines does not impose an unreasonable burden on scientific experts. The guidelines are voluntary and designed with an eye to projects that are likely to intervene in the lives of Indigenous or other local communities. In some countries, the guidelines have actually been used in planning the management and use of conservation and wilderness areas and in natural resource research. This shows that the burden is reasonable if there is a moral and political commitment to carry out the process. Moreover, the impact assessment model directs attention to the likely impacts of the project or proposed development, and as such it is less demanding and risky than some other types of participatory research projects, which invite citizens to participate in various stages of the research process (Koskinen, 2023). Rather than a burden on scientific research, impact assessment can be seen as an opportunity to live up to moral and social value commitments in a way that goes above and beyond merely paying lip service to the interests and well-being of Indigenous communities.

As to the burden that Akwé: Kon Guidelines imposes on members of Indigenous communities, community members retain the right to decide whether they want to participate and when the burden of participation becomes unbearable. If they believe that a proposed project involves more burdens than benefits, they can decline to participate or withdraw from the project.

In response to the second objection, I argue that Akwé: Kon Guidelines offers advice for scientific experts when science is already politicized. The guidelines provide a framework for structured dialogue between scientific experts and members of the local community, and such dialogue is often the best way to identify and critically discuss value judgments implicit in politicized science. It is also important to keep in mind that interactive processes like impact assessment cannot avoid risks. For example, there is the risk that the research project is curbed and the risk that the outcome fails to please some members of the local community. But when these risks are compared to the risks of not conducting any impact assessment at all, they are likely to be less severe. Choosing not to carry out an impact assessment can lead to a long-term social and political conflict. Moreover, scientific experts run the risk of losing the epistemic trust of the community. In many cases, the risks involved in impact assessment are worth taking if scientific experts can establish epistemic trustworthiness in the eyes of the local community.

To summarize, I have argued that the impact assessment model provides an effective method to earn epistemic trustworthiness in the eyes of those social groups whose members associate science with social injustice. While benevolence is a value in all non-expert/expert relations, some non-expert groups can legitimately expect experts to make an extra effort to demonstrate it. The impact assessment model is appropriate for situations where scientific experts need to rebuild epistemic trustworthiness that has been questioned. The guidelines do justice to the view that epistemic trustworthiness depends, among other things, on fulfilling a commitment to the well-being of others and is thereby a relational property. Each process is unique as it will be conducted in collaboration with a community or a group of people who are affected by the project at hand. This is in line with the view that there is no one-size-fits-all solution to building and maintaining epistemic trustworthiness. There are likely to be limitations to the applicability of the impact assessment model. For example, if scientific, public health, or other institutions do not support the process and its different parties, the model may not be feasible. Thus, it is good to keep in mind that the impact assessment model is not the only way to earn epistemic trustworthiness. Other community-based and community-led research practices are also effective ways to build epistemic trustworthiness (Jordan et al., 2005; Wylie, 2022).

Insofar as the legacy of the VFI is an understanding of how epistemic trustworthiness depends on moral and social values, the impact assessment model contributes to this legacy. The model suggests that for scientific experts to earn epistemic trustworthiness, their moral and social value commitments need to be put into action.

5 Conclusions

I have argued that epistemic trustworthiness has a moral dimension in addition to an epistemic one. Scientific experts' epistemic trustworthiness depends on their sincerity, honesty, and commitment to the well-being of people who are impacted by the application of scientific knowledge. These moral and social qualities need to be upheld by scientific communities and institutions, not only by individual scientists. These qualities amount to an alternative to the VFI. An attempt to keep moral values and commitments out of the core activities of scientific inquiry does not help scientific experts secure epistemic trustworthiness; instead, it undermines their epistemic trustworthiness and credibility in the eyes of some citizens. Epistemically trustworthy scientific experts understand that they are morally responsible for potential harms caused by errors, and that they should assess risks from the point of view of those who are affected by the application of their expertise. Assessing risks from the perspective of others requires, if not good will toward them, at least a commitment to their well-being.

Insofar as scientific experts' epistemic trustworthiness depends on their moral qualities, non-experts should consider these qualities when they place their epistemic trust in scientific experts. In many cases, scientific experts' moral values and commitments can be taken for granted, and there is no need to have evidence of them. But in some cases, scientific experts' ability and willingness to fulfill a commitment to the well-being of others can be questioned on the basis of science's not-so-admirable track record. In such cases, scientific experts face the difficult task of demonstrating their epistemic trustworthiness to a suspicious public.

I have discussed an impact assessment model that helps philosophers understand how scientific experts can live up to their moral and social value commitments by assessing inductive risks, as well as other risks, from the perspective of those who bear the risks, and in collaboration with them. The model defines a process that can generate concrete evidence of a commitment to the well-being of the community if carried out successfully. The process is not without risks, but the risks are likely to be outweighed by the benefit of earning epistemic trustworthiness.