This paper argues against the view that trolley cases are of little or no relevance to the ethics of automated vehicles. Four arguments for this view are outlined and rejected: the Not Going to Happen Argument, the Moral Difference Argument, the Impossible Deliberation Argument and the Wrong Question Argument. In making clear where these arguments go wrong, a positive account is developed of how trolley cases can inform the ethics of automated vehicles.
The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and fairness in healthcare. In this paper, we provide the building blocks for an account of algorithmic bias and its normative relevance in medicine.
Suppose a driverless car encounters a scenario where harm to at least one person is unavoidable and a choice about how to distribute harms between different persons is required. How should the driverless car be programmed to behave in this situation? I call this the moral design problem. Santoni de Sio defends a legal-philosophical approach to this problem, which aims to bring us to a consensus on the moral design problem despite our disagreements about which moral principles provide the correct account of justified harm. He then articulates an answer to the moral design problem based on the legal doctrine of necessity. In this paper, I argue that Santoni de Sio’s answer to the moral design problem does not achieve the aim of the legal-philosophical approach. This is because his answer relies on moral principles which utilitarians, at least, have reason to reject. I then articulate an alternative reading of the doctrine of necessity, and construct a partial answer to the moral design problem based on this. I argue that utilitarians, contractualists and deontologists can agree on this partial answer, even if they disagree about which moral principles offer the correct account of justified harm.
In his excellent essay, ‘Nudges in a post-truth world’, Neil Levy argues that ‘nudges to reason’, or nudges which aim to make us more receptive to evidence, are morally permissible. A strong argument against the moral permissibility of nudging is that nudges fail to respect the autonomy of the individuals affected by them. Levy argues that nudges to reason do respect individual autonomy, such that the standard autonomy objection fails against nudges to reason. In this paper, I argue that Levy fails to show that nudges to reason respect individual autonomy.
The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms; that is, algorithms trained on diverse datasets that perform better for traditionally disadvantaged groups. Whilst such algorithmic decisions may be unfair, the fairness of algorithmic decisions is not the appropriate locus of moral evaluation. What matters is the fairness of final decisions, such as diagnoses, resulting from collaboration between clinicians and algorithms. We argue that affirmative algorithms can permissibly be deployed provided the resultant final decisions are fair.
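To make concrete the sort of unfairness at issue, here is a minimal sketch, assuming one standard group-fairness metric (equal opportunity, i.e. equal true-positive rates across groups) and entirely hypothetical labels and predictions; it illustrates the general idea only, and is not drawn from the paper itself.

```python
# Illustrative sketch (hypothetical data): a performance gap that favours a
# traditionally disadvantaged group still registers as "unfair" under the
# equal-opportunity metric, which demands equal true-positive rates.

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives the model correctly flags."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(p for _, p in positives) / len(positives)

# Hypothetical diagnostic labels and predictions for two patient groups.
y_true_a, y_pred_a = [1, 1, 1, 0, 0], [1, 0, 0, 0, 0]  # advantaged group
y_true_b, y_pred_b = [1, 1, 1, 0, 0], [1, 1, 1, 0, 0]  # disadvantaged group

gap = true_positive_rate(y_true_b, y_pred_b) - true_positive_rate(y_true_a, y_pred_a)
print(f"TPR gap (disadvantaged - advantaged): {gap:.2f}")
# A nonzero gap violates equal opportunity even though it favours the
# disadvantaged group -- the situation the paper calls an affirmative algorithm.
```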
Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine learning. We also discuss how work in embodied and situated cognition could provide a valuable perspective on future research.
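As a rough gloss on the model-free/model-based distinction the abstract draws on, the following toy sketch (our illustration, with hypothetical names and values, not anything from Lake et al.) contrasts an agent choosing from cached action values with one that plans over an explicit outcome model.

```python
# Toy contrast (hypothetical values): a model-free agent selects the action
# with the highest cached value learned from past reinforcement; a
# model-based agent evaluates actions by simulating outcomes with a model.

OUTCOMES = {"divert": {"harm": 1}, "do_nothing": {"harm": 5}}

# Model-free: cached action values (assumed to have been learned earlier).
cached_value = {"divert": -2.0, "do_nothing": -4.5}
model_free_choice = max(cached_value, key=cached_value.get)

# Model-based: plan using the explicit outcome model; fewer harms is better.
def simulated_value(action):
    return -OUTCOMES[action]["harm"]

model_based_choice = max(OUTCOMES, key=simulated_value)

print(model_free_choice, model_based_choice)
# Both pick "divert" here, but only the model-based agent can re-plan
# immediately if OUTCOMES changes -- a flexibility often likened to
# deliberative (as opposed to intuitive) moral reasoning.
```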
Suppose that an autonomous vehicle encounters a situation where (i) imposing a risk of harm on at least one person is unavoidable; and (ii) a choice about how to allocate risks of harm between different persons is required. What does morality require in these cases? Derek Leben defends a Rawlsian answer to this question. I argue that we have reason to reject Leben’s answer.
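Leben's Rawlsian answer is, roughly, a maximin rule over survival probabilities. Here is a minimal illustrative sketch of such a rule, with hypothetical numbers and no claim to match Leben's own implementation:

```python
# Maximin sketch (hypothetical survival probabilities): choose the action
# under which the worst-off person fares best.

options = {
    "swerve_left":  [0.9, 0.2],   # survival probability for each person
    "swerve_right": [0.6, 0.5],
    "brake_only":   [0.7, 0.4],
}

maximin_choice = max(options, key=lambda act: min(options[act]))
print(maximin_choice)  # "swerve_right": its worst-off person fares best (0.5)
```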
Is there a moral difference between euthanasia for terminally ill adults and euthanasia for terminally ill children? Luc Bovens considers five arguments to this effect, and argues that each is unsuccessful. In this paper, I argue that Bovens' dismissal of the sensitivity argument is unconvincing.
This paper presents a dilemma for the additive model of reasons. Either the model accommodates disjunctive cases in which one ought to perform some act φ just in case at least one of two factors obtains, or it accommodates conjunctive cases in which one ought to φ just in case both of two factors obtain. The dilemma also arises in a revised additive model that accommodates imprecisely weighted reasons. There exist disjunctive and conjunctive cases. Hence the additive model is extensionally inadequate. The upshot of the dilemma is that one of the most influential accounts of how reasons accrue to determine what we ought to do is flawed.
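One way to see the shape of the dilemma, on our simplified reconstruction (assuming each factor contributes a fixed weight and one ought to φ whenever the total exceeds a fixed threshold; the paper's own argument may be more general): a brute-force search confirms that no such assignment handles a disjunctive and a conjunctive case involving the same two factors.

```python
# Hedged illustration (our reconstruction, not the paper's proof): with fixed
# per-factor weights and a fixed threshold, no assignment makes each factor
# alone sufficient (disjunctive case) while also making both factors jointly
# necessary (conjunctive case).

from itertools import product

def ought(weights, factors_present, threshold):
    return sum(weights[f] for f in factors_present) > threshold

solutions = []
for w_a, w_b, t in product(range(-5, 6), repeat=3):
    w = {"A": w_a, "B": w_b}
    disjunctive_ok = ought(w, {"A"}, t) and ought(w, {"B"}, t)
    conjunctive_ok = (not ought(w, {"A"}, t) and not ought(w, {"B"}, t)
                      and ought(w, {"A", "B"}, t))
    if disjunctive_ok and conjunctive_ok:
        solutions.append((w_a, w_b, t))

print(solutions)  # [] -- no weights satisfy both patterns at once
```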
The proper function of the heart is pumping the blood. According to what we call the type etiological view, this is because previous tokens of the type HEART were selected for pumping the blood. Nanay (2010: 412–431) argues that the type etiological view is viciously circular. He claims that the only plausible accounts of trait type individuation use proper functions, such that whenever the type etiological view is supplemented with a plausible account of trait type individuation, the result is a view that uses proper functions to explain proper functions. We refine this objection, and argue that Nanay at most establishes a potentially benign definitional circularity. However, we show that the type etiological view’s reliance on types nevertheless generates a vicious regress. Hence the type etiological view is false. We reject dispositional and modal alternatives to the type etiological view because they either cannot accommodate malfunction or do so at the cost of proliferation; we then formulate a novel token etiological view that overcomes both problems because it makes no reference to trait types.