A person sometimes forms moral beliefs by relying on another person's moral testimony. In this paper I advance a cognitivist normative account of this phenomenon. I argue that for a person's actions to be morally good, they must be based on a recognition of the moral reasons bearing on action. Morality requires people to act from an understanding of moral claims, and consequently to have an understanding of moral claims relevant to action. A person sometimes fails to meet this requirement when she relies on another person's moral testimony, and so there are moral limits on such reliance.
Engineers are traditionally regarded as trustworthy professionals who meet exacting standards. In this chapter I begin by explicating our trust relationship towards engineers, arguing that it is a linear but indirect relationship in which engineers “stand behind” the artifacts and technological systems that we rely on directly. The chapter goes on to explain how this relationship has become more complex as engineers have taken on two additional aims: the aim of social engineering to create and steer trust between people, and the aim of creating automated systems that take over human tasks and are meant to invite the trust of those who rely on and interact with them.
A person presented with adequate but not conclusive evidence for a proposition is in a position voluntarily to acquire a belief in that proposition, or to suspend judgment about it. The availability of doxastic options in such cases grounds a moderate form of doxastic voluntarism not based on practical motives, and therefore distinct from pragmatism. In such cases, belief-acquisition or suspension of judgment meets standard conditions on willing: it can express stable character traits of the agent, it can be responsive to reasons, and it is compatible with a subjective awareness of the available options.
Jennifer Lackey’s case “Creationist Teacher,” in which students acquire knowledge of evolutionary theory from a teacher who does not herself believe the theory, has been discussed widely as a counterexample to so-called transmission theories of testimonial knowledge and justification. The case purports to show that a speaker need not herself have knowledge or justification in order to enable listeners to acquire knowledge or justification from her assertion. The original case has been criticized on the ground that it does not really refute the transmission theory, because there is still somebody in a chain of testifiers—the person from whom the creationist teacher acquired what she testifies—who knows the truth of the testified statements. In this paper, we provide a kind of pattern for generating counterexample cases, one that avoids objections discussed by Peter Graham and others in relation to such cases.
Technology is a practically indispensable means for satisfying one’s basic interests in all central areas of human life, including nutrition, habitation, health care, entertainment, transportation, and social interaction. It is impossible for any one person, even a well-trained scientist or engineer, to know enough about how technology works in these different areas to make a calculated choice about whether to rely on the vast majority of the technologies he or she in fact relies upon. Yet there are substantial risks, uncertainties, and unforeseen practical consequences associated with the use of technological artifacts and systems. The salience of technological failure (both catastrophic and mundane), as well as technology’s sometimes unforeseeable influence on our behavior, makes it relevant to wonder whether we are really justified as individuals in our practical reliance on technology. Of course, even if we are not justified, we might nonetheless continue in our technological reliance, since the alternatives might not be attractive or feasible. In this chapter I argue that a conception of trust in technological artifacts and systems is plausible and helps us understand what is at stake philosophically in our reliance on technology. Such an account also helps us understand the relationship between trust and technological risk and the ethical obligations of those who design, manufacture, and deploy technological artifacts.
This paper develops a philosophical account of moral disruption. According to Robert Baker (2013), moral disruption is a process in which technological innovations undermine established moral norms without clearly leading to a new set of norms. Here I analyze this process in terms of moral uncertainty, formulating a philosophical account with two variants. On the Harm Account, such uncertainty is always harmful because it blocks our knowledge of our own and others’ moral obligations. On the Qualified Harm Account, there is no harm in cases where moral uncertainty is related to innovation that is “for the best” in historical perspective, or where uncertainty is the expression of a deliberative virtue. The two accounts are compared by applying them to Baker’s historical case of the introduction of mechanical ventilation and organ transplantation technologies, as well as the present-day case of mass data practices in the health domain.
In this chapter, we consider ethical and philosophical aspects of trust in the practice of medicine. We focus on trust within the patient-physician relationship, trust and professionalism, and trust in Western (allopathic) institutions of medicine and medical research. Philosophical approaches to trust contain important insights into medicine as an ethical and social practice. In what follows we explain several philosophical approaches and discuss their strengths and weaknesses in this context. We also highlight some relevant empirical work in the section on trust in the institutions of medicine. It is hoped that the approaches discussed here can be extended to nursing and other topics in the philosophy of medicine.
This paper defends the view that trust is a moral attitude, by putting forward the Obligation-Ascription Thesis: If E trusts F to do X, this implies that E ascribes an obligation to F to do X. I explicate the idea of obligation-ascription in terms of requirement and the appropriateness of blame. Then, drawing a distinction between attitude and ground, I argue that this account of the attitude of trust is compatible with the possibility of amoral trust, that is, trust held among amoral persons on amoral grounds. It is also compatible with trust adopted on purely predictive grounds. Next, defending the thesis against a challenge of motivational inefficacy, I argue that obligation-ascription can motivate people to act even in the absence of definite, mutually known agreements. I end by explaining, briefly, the advantages of this sort of moral account of trust over a view based on reactive attitudes such as resentment.
This paper explores the role of moral uncertainty in explaining the morally disruptive character of new technologies. We argue that existing accounts of technomoral change do not fully explain its disruptiveness. This explanatory gap can be bridged by examining the epistemic dimensions of technomoral change, focusing on moral uncertainty and inquiry. To develop this account, we examine three historical cases: the introduction of the early pregnancy test, the contraception pill, and brain death. The resulting account highlights what we call “differential disruption” and provides a resource for fields such as technology assessment, ethics of technology, and responsible innovation.
Modern health data practices come with many practical uncertainties. In this paper, I argue that data subjects’ trust in the institutions and organizations that control their data, and their ability to know their own moral obligations in relation to their data, are undermined by significant uncertainties regarding the what, how, and who of mass data collection and analysis. I conclude by considering how proposals for managing situations of high uncertainty might be applied to this problem. These emphasize increasing organizational flexibility, knowledge, and capacity, and reducing hazard.
The adoption of web-based telecare services has raised multifarious ethical concerns, but a traditional principle-based approach provides limited insight into how these concerns might be addressed and what, if anything, makes them problematic. We take an alternative approach, diagnosing some of the main concerns as arising from a core phenomenon of shifting trust relations that come about when the physician plays a less central role in the delivery of care, and new actors and entities are introduced. Correspondingly, we propose an applied ethics of trust based on the idea that patients should be provided with good reasons to trust telecare services, which we call sound trust. On the basis of this approach, we propose several concrete strategies for safeguarding sound trust in telecare.
This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI practitioners through the vehicle of an AI application. I conclude with four critical questions based on the discretionary account to determine whether trust in particular AI applications is sound, and a brief discussion of the possibility that the main roles of the physician could be replaced by AI.
This paper advances a new criterion of a vulnerable population in research. According to this criterion, there are consent-based and fairness-based reasons for calling a group vulnerable. The criterion is then applied to the case of people with serious illnesses. It is argued that people with serious illnesses meet this criterion for reasons related to consent. Seriously ill people have a susceptibility to “enticing offers” that hold out the prospect of removing or alleviating illness, and this susceptibility reduces their ability to safeguard their own interests. This explains the inclusion of people with serious illnesses in the Belmont Report’s list of populations needing special protections, and supports the claim that vulnerability is the rule, rather than the exception, in biomedical research.
Trust should be able to explain cooperation, and its failure should help explain the emergence of cooperation-enabling institutions. This proposed methodological constraint on theorizing about trust, when satisfied, can be used to differentiate theories of trust: some explain cooperation more generally and effectively than others. Unrestricted views of trust, which take trust to be no more than the disposition to rely on others, fare well compared to restrictive views, which require the trusting person to have some further attitude in addition to this disposition. The same methodological constraint also favours some restrictive views over others.
Some recent accounts of testimonial warrant base it on trust, and claim that doing so helps explain asymmetries between the intended recipient of testimony and other non-intended hearers, e.g. differences in their entitlement to challenge the speaker or to rebuke the speaker for lying. In this explanation ‘dependence-responsiveness’ is invoked as an essential feature of trust: the trustor believes the trustee to be motivationally responsive to the fact that the trustor is relying on the trustee. I argue that dependence-responsiveness is not essential to trust and that the asymmetries, where genuine, can be better explained without reference to trust.
According to assurance views of testimonial justification, in virtue of the act of testifying a speaker provides an assurance of the truth of what she asserts to the addressee. This assurance provides a special justificatory force and a distinctive normative status to the addressee. It is thought to explain certain asymmetries between addressees and other unintended hearers (bystanders and eavesdroppers), such as the phenomenon that the addressee has a right to blame the speaker for conveying a falsehood but unintended hearers do not, and the phenomenon that the addressee may deflect challenges to his testimonial belief to the speaker but unintended hearers may not. Here I argue that we can do a better job explaining the normative statuses associated with testimony by reference to epistemic norms of assertion and privacy norms. Following Sanford Goldberg, I argue that epistemic norms of assertion, according to which sincere assertion is appropriate only when the asserter possesses certain epistemic goods, can be ‘put to work’ to explain the normative statuses associated with testimony. When these norms are violated, they give hearers the right to blame the speaker, and they also explain why the speaker takes responsibility for the justification of the statement asserted. Norms of privacy, on the other hand, directly exclude eavesdroppers and bystanders from an informational exchange, implying that they have no standing to do many of the things, such as issue challenges or questions to the speaker, that would be normal for conversational participants. This explains asymmetries of normative status associated with testimony in a way logically independent of speaker assurance.
Trust is a kind of risky reliance on another person. Social scientists have offered two basic accounts of trust: predictive expectation accounts and staking accounts. Predictive expectation accounts identify trust with a judgment that performance is likely. Staking accounts identify trust with a judgment that reliance on the person's performance is worthwhile. I argue that these two views of trust are different; that the staking account is preferable to the predictive expectation account on grounds of intuitive adequacy and coherence with plausible explanations of action; and that there are counterexamples to both accounts. I then set forward an additional necessary condition on trust, according to which trust implies a moral expectation. When A trusts B to do x, A ascribes to B an obligation to do x, and holds B to this obligation. This Moral Expectation view throws new light on some of the consequences of misplaced trust. I use the example of physicians’ defensive behavior (defensive medicine) to illustrate this final point.
Some of the systems used in natural language generation (NLG), a branch of applied computational linguistics, have the capacity to create or assemble somewhat original messages adapted to new contexts. In this paper, taking Bernard Williams’ account of assertion by machines as a starting point, I argue that NLG systems meet the criteria for being speech actants to a substantial degree. They are capable of authoring original messages, and can even simulate illocutionary force and speaker meaning. Background intelligence embedded in their datasets enhances these speech capacities. Although there is an open question about who is ultimately responsible for their speech, if anybody, we can settle this question by using the notion of proxy speech, in which responsibility for artificial speech acts is assigned legally or conventionally to an entity separate from the speech actant.
In this paper we raise the question whether technological artifacts can properly speaking be trusted or said to be trustworthy. First, we set out some prevalent accounts of trust and trustworthiness and explain how they compare with the engineer’s notion of reliability. We distinguish between pure rational-choice accounts of trust, which do not differ in principle from mere judgments of reliability, and what we call “motivation-attributing” accounts of trust, which attribute specific motivations to trustworthy entities. Then we consider some examples of technological entities that are, at first glance, best suited to serve as the objects of trust: intelligent systems that interact with users, and complex socio-technical systems. We conclude that the motivation-attributing concept of trustworthiness cannot be straightforwardly applied to these entities. Any applicable notion of trustworthy technology would have to depart significantly from the full-blown notion of trustworthiness associated with interpersonal trust.
In this paper, I examine the ethics of e-trust and e-trustworthiness in the context of health care, looking at direct computer-patient interfaces (DCPIs), information systems that provide medical information, diagnosis, advice, consenting and/or treatment directly to patients without clinicians as intermediaries. Designers, manufacturers and deployers of such systems have an ethical obligation to provide evidence of their trustworthiness to users. My argument for this claim is based on evidentialism about trust and trustworthiness: the idea that trust should be based on sound evidence of trustworthiness. Evidence of trustworthiness is a broader notion than one might suppose, including not just information about the risks and performance of the system, but also interactional and context-based information. I suggest some sources of evidence in this broader sense that make it plausible that designers, manufacturers and deployers of DCPIs can provide evidence to users that is cognitively simple, easy to communicate, yet rationally connected with actual trustworthiness.
Assurance theories of testimony attempt to explain what is distinctive about testimony as a form of epistemic warrant or justification. The most characteristic assurance theories hold that a distinctive subclass of assertion (acts of “telling”) involves a real commitment given by the speaker to the listener, somewhat like a promise to the effect that what is asserted is true. This chapter sympathetically explains what is attractive about such theories: instead of treating testimony as essentially similar to any other kind of evidence, they instead make testimonial warrant depend on essential features of the speech act of testimony as a social practice. One such feature is “buck-passing,” the phenomenon that when I am challenged to defend a belief I acquired through testimony, I may respond by referring to the source of my testimony (and thereby “passing the buck”) rather than providing direct evidence for the truth of the content of the belief. The chapter concludes by posing a serious challenge to assurance theories, namely that the social practice of assurance insufficiently ensures the truth of beliefs formed on the basis of testimony, and thereby fails a crucial epistemological test as a legitimate source of beliefs.
The arrival of synthetic organs may mean we need to reconsider principles of ownership of such items. One possible ownership criterion is the boundary between the organ’s being outside or inside the body. What is outside of my body, even if it is a natural organ made of my cells, may belong to a company or research institution. Yet when it is placed in me, it belongs to me. In the future, we should also keep an eye on how the availability of synthetic organs may change our attitudes toward our own bodies.
Moral dependence is taking another person's assertion or "testimony" that C as a reason to believe C (where C is some moral claim), such that whatever justificatory force is associated with the person's testimony endures or remains as one's reason for believing C. People are justified in relying on one another's testimony in non-moral matters. The dissertation takes up the question whether the same is true for moral beliefs. My method is to divide the topic into three somewhat separate questions. First, there is the epistemological question, what if anything gives me reason to believe that another person's moral claim is likely to be true. Second, there is the psychological question, whether moral dependence is, in fact, part of the rational explanation of why people hold the moral beliefs they do. Third, there is the moral question, whether a person can be a good moral agent while being morally dependent. I answer these questions as follows. First, in response to the epistemological question, I argue that there is a justification for moral dependence based on identifying people who are good moral deliberators. I also argue that there is an unreliable justification for moral dependence based on cooperation and trust. This latter, trust-based justification is unreliable because it is possible to trust and cooperate with those who are morally bad. Second, in response to the psychological question, I argue that moral dependence is part of the rational explanation of moral belief. This is true even though there is some reason to hold that a testimonial justification cannot rationally explain moral belief when there is also a non-testimonial justification available for that same belief. I also argue that moral dependence rationally explains moral development, both because it explains how children come to believe the particular things they do, and also because it can explain how children come to employ new forms of moral justification. Third, in response to the moral question, I argue that autonomy limits moral dependence, but that relying on moral testimony can also bring one to be more aware of what is morally important.
In this paper, I consider a form of skepticism that has a permissive conclusion, according to which we are rationally permitted to suspend judgment in an area, or to have beliefs in that area. I argue that such a form of skepticism is resistant to some traditional strategies of refutation. It also carries a benefit, namely that it increases voluntary control over doxastic states by introducing options, and therefore greater freedom, into the realm of belief. I argue that intellectual preferences and dispositions provide decisive reasons that can settle our doxastic states in such cases.
The perspective I will take in this chapter, as a moral philosopher, is one of argumentation and informed judgment about two main questions: whether individuals should ever choose to conduct human embryonic stem cell research, and whether the law should permit this type of research. I will also touch upon a secondary question, that of whether the government ought to pay for this type of research. I will discuss some of the main arguments at stake, and explain how the ethical conflict over these questions differs from the political conflict over them. I will be guided throughout by the assumption that the unique scientific and clinical promise of human embryonic stem cell research is significant. Those who have doubts about this assumption should consult other chapters in this volume in which the issue is addressed directly. I begin with one of the basic facts relevant to the ethical issue of stem cell research: you and I, along with everybody else we know, developed out of clumps of primordial cells, which happen to be the very same clumps that serve as the source for human embryonic stem cells in the laboratory. Let us call these “source cells” for short, since they can be used in this way. Each individual has developed into whatever she is now out of a one-celled animal, which then became a blastocyst, a multi-celled human embryo. These blastocysts are partly made up of an inner mass of cells, and the body of every adult person has developed out of this inner mass. It is this very same clump from the inner part of the blastocyst that consists of source cells for human embryonic stem cell research. These cells can be extracted and grown into a laboratory specimen of extraordinary interest to scientists. Before discussing the significance of the fact that all humans originate from these source cells, it is useful to begin by asking some perhaps rather simple-minded questions about how any one of us knows this fact to be true in the first place. How do I know that I developed from a single cell, and then a blastocyst? In my own case, the main way I know this is that other people have told me so.