Abstract
Suppose that an autonomous vehicle encounters a situation where (i) imposing a risk of harm on at least one person is unavoidable; and (ii) a choice about how to allocate risks of harm between different persons is required. What does morality require in these cases? Derek Leben defends a Rawlsian answer to this question. I argue that we have reason to reject Leben’s answer.
I am extremely grateful to Chris Bertram, Noah Goodall, Jason Konek, Derek Leben, Niall Paterson, and Richard Pettigrew for their comments on earlier drafts. I am also grateful to audiences at the Philosophy and Theory of Artificial Intelligence Conference at the University of Leeds, and the Artificial Ethics Symposium at the University of Southampton.
Notes
1. I use ‘AV’ to mean Level 5 autonomous vehicles in accordance with the Society of Automotive Engineers’ autonomous vehicle classification scheme. These vehicles require no human intervention or supervision in any circumstances that might arise on the road.
2.
3. Leben does not use the term ‘leximin’. He writes ‘[there] is one part of the Maximin procedure, that, to my knowledge, has not been worked out sufficiently well by Rawls or anybody else, and is perhaps the only original contribution that I have to make to the moral theory itself […] It seems clear that agents in the original position would also consider the next-lowest payoffs, since they have an equal chance of being the next payer, and are interested in maximising her minimum as well’ (2017: 110). The iterated form of maximin described by Leben is called leximin, and it has featured in moral philosophy (e.g. Otsuka 2006: 119–121; Hirose 2015: 29) and welfare economics (e.g. Sen 1976; Hammond 1976).
4. A further disanalogy is that whilst survival is a primary good, it is not obvious that the probability of survival is a primary good. So, the parties in Leben’s original position are choosing between alternative gambles concerning a primary good. I am grateful to Richard Pettigrew for this point.
5. Note that, with complete information, leximin mandates saving the greater number in many-versus-one cases (Hirose 2015: 164–165). So, Leben advocates using leximin given the information available, but leximin would not mandate randomising if complete information about the survival probabilities were given.
6. The axioms: let \( \prec \) denote strict preference, \( \sim \) denote indifference and \( \preccurlyeq \) denote weak preference. Completeness holds that for any two lotteries \( A \), \( B \), either \( A \prec B \), \( B \prec A \) or \( A \sim B \). Transitivity holds that if \( A \preccurlyeq B \) and \( B \preccurlyeq C \), then \( A \preccurlyeq C \). Continuity holds that, if \( A \preccurlyeq B \preccurlyeq C \), then there exists a probability \( p \in \left[ {0,1} \right] \) such that \( \left[ {pA + \left( {1 - p} \right)C} \right] \sim B \). Independence holds that if \( A \prec B \), then for any \( C \) and \( p \in \left( {0,1} \right] \), \( \left[ {pA + \left( {1 - p} \right)C} \right] \prec \left[ {pB + \left( {1 - p} \right)C} \right] \). My argument makes use of the Archimedean Property, which is sometimes assumed instead of continuity. But if either continuity or the Archimedean Property is assumed alongside the other axioms, the other is entailed by the von Neumann-Morgenstern Expected Utility Theorem.
7. The lotteries in square brackets should be read as, e.g., ‘\( A \) with probability \( 1 - \varepsilon \) and \( C \) with probability \( \varepsilon \)’.
8.
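The leximin procedure described in note 3 can be sketched in code. This is my own minimal illustration, not Leben’s implementation: each candidate allocation is assumed to be a list of survival probabilities, one per affected person, and allocations are compared by their sorted payoff vectors, worst-off first.

```python
# Illustrative sketch of leximin choice over risk allocations.
# An allocation is a list of survival probabilities, one per person.

def leximin_key(allocation):
    # Sort payoffs from worst-off upwards; leximin compares these
    # sorted vectors lexicographically.
    return sorted(allocation)

def leximin_best(allocations):
    # Maximise the minimum payoff; ties are broken by the next-lowest
    # payoff, then the next, and so on (Python compares lists
    # lexicographically, which is exactly the leximin ordering).
    return max(allocations, key=leximin_key)

# Two allocations with the same worst-off payoff: leximin breaks the
# tie by the second-lowest payoff and prefers a to b.
a = [0.2, 0.9, 0.9]
b = [0.2, 0.5, 0.9]
assert leximin_best([a, b]) == a
```

Because ties at the minimum are broken by successive payoffs, this also reproduces the point in note 5: with complete information, the allocation that saves the greater number dominates in many-versus-one cases.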
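The probability mixtures in notes 6 and 7 can be evaluated numerically. A minimal sketch, with made-up utility numbers for illustration only; `expected_utility` and `mix` are hypothetical helper names, not from the paper:

```python
# Expected utility of a mixture of two lotteries, as in note 7:
# A with probability 1 - eps, C with probability eps.
# Utilities below are stand-in numbers for illustration only.

def expected_utility(lottery):
    # A lottery is a list of (probability, utility) pairs.
    return sum(p * u for p, u in lottery)

def mix(lottery_a, lottery_c, p):
    # Mixture [pA + (1 - p)C]: rescale each branch's probabilities.
    return ([(p * q, u) for q, u in lottery_a]
            + [((1 - p) * q, u) for q, u in lottery_c])

A = [(1.0, 10.0)]             # sure outcome worth 10
C = [(0.5, 0.0), (0.5, 4.0)]  # gamble worth 2 in expectation

eps = 0.01
m = mix(A, C, 1 - eps)  # A with probability 1 - eps, C with probability eps
# EU(m) = 0.99 * 10 + 0.01 * 2 = 9.92
assert abs(expected_utility(m) - 9.92) < 1e-9
```

Linearity in the mixing probability is what the continuity and independence axioms in note 6 exploit: the mixture’s expected utility is just the probability-weighted average of the component lotteries’ expected utilities.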
References
Bonnefon, J., Shariff, A., Rahwan, I.: The social dilemma of autonomous vehicles. Science 352(6293), 1573–1576 (2016)
Goodall, N.: Ethical decision making during automated vehicle crashes. Transp. Res. Record J. Transp. Res. Board 2424, 58–65 (2014)
Hammond, P.J.: Equity, Arrow’s conditions, and Rawls’ difference principle. Econometrica 44(4), 793–804 (1976)
Harsanyi, J.C.: Cardinal utility in welfare economics and in the theory of risk-taking. J. Polit. Econ. 61(5), 434–435 (1953)
Hirose, I.: Moral Aggregation. Oxford University Press, New York (2015)
Keeling, G.: Commentary: using virtual reality to assess ethical decisions in road traffic scenarios: applicability of value-of-life-based models and influences of time pressure. Front. Behav. Neurosci. 11, 247 (2017)
Keeling, G.: Legal necessity, Pareto efficiency and justified killing in autonomous vehicle collisions. Ethical Theory Moral Pract. 21(2), 413–427 (2018)
Leben, D.: A Rawlsian algorithm for autonomous vehicles. Ethics Inf. Technol. 19(2), 107–115 (2017)
Lin, P.: Why ethics matters for autonomous cars. In: Maurer, M., Gerdes, J.C., Lenz, B., Winner, H. (eds.) Autonomous Driving, pp. 69–85. Springer, Berlin (2016)
Norcross, A.: Comparing harms: headaches and human lives. Philos. Public Aff. 26(2), 135–167 (1997)
Otsuka, M.: Saving lives, moral theory, and the claims of individuals. Philos. Public Aff. 34(2), 109–135 (2006)
Otsuka, M.: Prioritarianism and the separateness of persons. Utilitas 24(3), 365–380 (2012)
Parfit, D.: Justifiability to each person. Ratio 16(4), 368–390 (2003)
Rasmussen, K.B.: Should the probabilities count? Philos. Stud. 159(2), 205–218 (2012)
Rawls, J.: A Theory of Justice. Harvard University Press, Cambridge (1971)
Rawls, J.: Justice as Fairness: A Restatement. Harvard University Press, Cambridge (2001)
Rivera-López, E.: Probabilities in tragic choices. Utilitas 20(3), 323–333 (2008)
Scanlon, T.M.: What We Owe to Each Other. Harvard University Press, Cambridge (1998)
Schelling, T.: Should the numbers determine whom to save? In: The Strategies of Commitment, pp. 113–146. Harvard University Press, Cambridge (2006)
Sen, A.: Welfare inequalities and Rawlsian axiomatics. Theor. Decis. 7(4), 243–262 (1976)
Sen, A.: The Idea of Justice. Harvard University Press, Cambridge (2011)
von Neumann, J., Morgenstern, O.: Theory of Games and Economic Behavior. Princeton University Press, Princeton (1953)
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Keeling, G. (2018). Against Leben’s Rawlsian Collision Algorithm for Autonomous Vehicles. In: Müller, V. (eds) Philosophy and Theory of Artificial Intelligence 2017. PT-AI 2017. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol 44. Springer, Cham. https://doi.org/10.1007/978-3-319-96448-5_29
Print ISBN: 978-3-319-96447-8
Online ISBN: 978-3-319-96448-5