
No wheel but a dial: why and how passengers in self-driving cars should decide how their car drives

  • Original Paper
Ethics and Information Technology

Abstract

Much of the debate on the ethics of self-driving cars has revolved around trolley scenarios. This paper instead takes up the political or institutional question of who should decide how a self-driving car drives. Specifically, this paper is on the question of whether and why passengers should be able to control how their car drives. The paper reviews existing arguments—those for passenger ethics settings and for mandatory ethics settings respectively—and argues that they fail. Although the arguments are not successful, they serve as the basis to formulate desiderata that any approach to regulating the driving behavior of self-driving cars ought to fulfill. The paper then proposes one way of designing passenger ethics settings that meets these desiderata.


Notes

  1. By “self-driving cars,” “autonomous vehicles” or “automated vehicles” (AV) I understand individually owned passenger vehicles at SAE automation level 4 or higher. I concentrate on cars owned by individuals, in contrast to corporate-owned cars.

  2. For arguments in favor of the relevance of trolley scenarios, however, see Lin (2017), Keeling (2020), and Awad et al. (2020).

  3. The nomenclature is from Gogoll and Müller (2017). The distinction between PES and MES depends on whether a passenger can meaningfully control a vehicle’s driving style and macro path planning. The expression “meaningful control” is central to the ethics of robotics.

  4. In addition to arguments that address PES directly, I also review related arguments that can be applied to the issue of PES (Bonnefon et al., 2016; Millar, 2014a; 2015).

  5. My discussion here is prompted by comments by a peer reviewer for a different journal.

  6. Tesla’s cost function for path planning minimizes traversal time, collision risk, lateral acceleration, and lateral jerk—the latter as a measure of comfort (Tesla, 2021). The behavior of Teslas is hence governed via deliberately designed properties of the cost function.
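The cost-function idea in this note can be made concrete with a small sketch. The code below is purely illustrative, assuming a simple weighted sum over the four terms the note mentions; the weights, function names, and candidate values are my assumptions, not Tesla's actual implementation.

```python
# Illustrative weighted cost function for ranking candidate trajectories.
# The four terms follow the note; all weights and values are hypothetical.

def trajectory_cost(traversal_time, collision_risk, lateral_accel, lateral_jerk,
                    w_time=1.0, w_risk=50.0, w_accel=0.5, w_jerk=0.5):
    """Lower cost = preferred trajectory."""
    return (w_time * traversal_time
            + w_risk * collision_risk
            + w_accel * lateral_accel
            + w_jerk * lateral_jerk)

# A planner picks the candidate trajectory with minimal cost. Here a heavy
# risk weight makes the slower but safer trajectory win:
candidates = [
    {"traversal_time": 12.0, "collision_risk": 0.01, "lateral_accel": 1.2, "lateral_jerk": 0.4},
    {"traversal_time": 10.5, "collision_risk": 0.05, "lateral_accel": 2.0, "lateral_jerk": 0.9},
]
best = min(candidates, key=lambda c: trajectory_cost(**c))
```

The point of the sketch is the note's own: driving behavior is governed by deliberately chosen properties of the cost function, here the relative weights.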

  7. Technical and normative issues are not independent: Technological choices constrain the ethics of a system. This is an important insight in the value-alignment literature (cf. Gabriel, 2020), of which the debate on the ethics of self-driving cars can be seen as a part.

  8. Things are actually more complicated because it is not clear whose proxy the cars ought to be—there is thus a “moral proxy problem” (Thoma, 2022). Depending on whether cars are proxies for individuals or aggregates (such as developers or regulators), they should make risky decisions very differently (ibid.).

  9. What these limits should be and what considerations should guide our delineation of limits is often not clear. But see Contissa et al., (2017, p. 374) and Etzioni and Etzioni (2017).

  10. Of course, there could be a collective decision in favor of PES; but this is not how PES are usually defended.

  11. I take the name for this argument from the title of a paper by Bonnefon et al. (2016), who present the empirical finding that motivates the argument I present here. (The main idea of the argument is also called the “ethical opt-out problem”; Bonnefon et al., 2020.) However, to avoid misattribution: the argument I present here is not theirs. The argument is hinted at by Contissa et al. (2017, p. 367), who write that “[i]f an impartial (utilitarian) ethical setting is made compulsory for, and rigidly implemented into, all AVs, many people may refuse to use AVs, even though AVs may have significant advantages, in particular with regard to safety, over human-driven vehicles.” Bonnefon et al. (2020, p. 110), however, advance a similar argument. They write: “[I]f people are not satisfied with the ethical principles that guide moral algorithms, they will simply opt out of using these algorithms, thus nullifying all their expected benefits.”

  12. Similarly, Ryan (2020) writes: “Very few people would buy [a self-driving car] if they prioritised the lives of others over the vehicle’s driver and passengers.”

  13. The social dilemma argument is motivated by an empirical finding: Although a majority of people agree that a driving style that maximizes overall welfare or health in a population is the preferable driving style from a moral point of view, many people would not actually want to use or buy a vehicle that drives in this way (Bonnefon et al., 2016; Gill 2021). This is the social dilemma.

  14. What I describe is only an extreme version of an egoistic car. In fact, as has been argued, there could be a continuum (Contissa et al., 2017).

  15. A prisoner’s dilemma is a two-person symmetric game with two pure strategies, “cooperate” and “defect,” in which the payoffs of the four outcomes satisfy the condition T > R > P > S: the temptation to defect against a cooperator yields a strictly greater payoff than the reward of mutual cooperation, which in turn exceeds the punishment for mutual defection, which exceeds the so-called sucker payoff for cooperating with a defector.
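The payoff condition in this note can be sketched in a few lines. The concrete payoff values below are example numbers chosen to satisfy T > R > P > S, not values from the paper; they illustrate why defection strictly dominates in a one-shot PD.

```python
# Minimal sketch of the prisoner's dilemma payoff structure from the note.
# Example payoffs (assumed for illustration) satisfying T > R > P > S:
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker payoff

assert T > R > P > S  # the defining condition of a PD

# Row player's payoff given (own move, opponent's move):
payoff = {
    ("defect", "cooperate"): T,
    ("cooperate", "cooperate"): R,
    ("defect", "defect"): P,
    ("cooperate", "defect"): S,
}

# Defection strictly dominates: it pays more whatever the opponent does.
for opponent in ("cooperate", "defect"):
    assert payoff[("defect", opponent)] > payoff[("cooperate", opponent)]
```

This is why, as the social dilemma argument assumes, individually rational players end up at mutual defection even though mutual cooperation (R) beats mutual defection (P) for both.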

  16. This is acknowledged by some (Bonnefon et al., 2016).

  17. Respondents in China would find it “tolerable” if self-driving cars were four to five times as safe as human drivers and “acceptable” if they were safer by one to two orders of magnitude (Liu et al., 2019).

  18. For context: These data are from US participants. US participants can be expected to have relatively unfavorable attitudes towards AVs compared to participants in India or China. A study in 2014 found that only 14% and 22% of respondents in the UK and US, respectively, held very positive attitudes towards automated vehicles, compared to 46% in India and 50% in China (Schoettle and Sivak, 2014).

  19. The Kelley Blue Book calls these “value shoppers” (KBB Editors, 2022).

  20. This is not a crucial assumption: Even if the nominal insurance costs might be higher, especially in the short term, they could be decreased by policy to make self-driving cars attractive (Ravid, 2014).

  21. Moreover, it would likely take decades to be able to have sufficient exposure to measure (as opposed to simulate or estimate) the safety of self-driving cars (Kalra & Paddock, 2016).

  22. I concentrate on this argument because it is recent and the best developed one.

  23. By “best interest of society” the authors mean that traffic injuries and fatalities are minimized in a given population.

  24. This differs from the social dilemma argument, which assumed that purchasing decisions, rather than traffic itself, are a PD.

  25. I write “emerge” and “stable” to indicate that the game is played repeatedly. Even if players do not cooperate in one-shot games, the prospects for achieving widespread cooperation look much better when the PD is played repeatedly.

  26. It could be said that the traffic game is embedded in other games within the political structure.

  27. Of course, MES too could incorporate a concern for pluralism. But PES are arguably more responsive to occupants’ preferences: under PES, the average gap between a car’s behavior and its occupant’s preferences will likely be smaller than under MES.

  28. Another illustration of this conflict between others’ interest and your interest is, of course, in trolley cases and collision scenarios such as in the Tunnel Problem where a car needs to choose between running over a pedestrian or running the car into the wall of a tunnel (Millar, 2014a).

  29. By “mobility” I understand the time required to get to a destination. By “safety” I understand the absence of risk, defined as a function of the probability of a hazardous event and the harm to the occupants and others. It should be noted that I understand both “mobility” and “safety” impartially as everyone’s mobility and safety and not just those of vehicle occupants.
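The note's definition of risk as a function of probability and harm admits a standard operationalization as expected harm. The sketch below assumes that operationalization (one common choice, not the paper's explicit commitment), summed impartially over hazardous events affecting occupants and others alike.

```python
# One possible operationalization of the note's definition of risk:
# expected harm over the possible hazardous events of a maneuver.

def expected_risk(events):
    """events: list of (probability, harm) pairs for hazardous events."""
    return sum(p * h for p, h in events)

# e.g. a maneuver with a small chance of severe harm and a larger
# chance of minor harm (hypothetical numbers):
risk = expected_risk([(0.001, 100.0), (0.05, 2.0)])
```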

  30. Assume also that this situation occurs in a location that does not prescribe a minimum lateral distance for safe passing.

  31. Of course, the details of this would have to be worked out by operationalizing these value conflicts and by studying the user interaction design (cf. Thornton et al., 2019).

  32. This is a matter of how the one dial trades off between the mobility–safety conflict and the self-interest–other-interest conflict. How the dial makes this tradeoff, that is, the path of the indifference curve through the space of parameter combinations, is an important question for ethics and design.
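One very simple way to picture what this note describes is a single dial value mapped onto both parameter pairs at once. The linear coupling below is my assumption purely for illustration; choosing the actual shape of that mapping is precisely the open design question the note raises.

```python
# Hypothetical sketch: one dial value d in [0, 1] jointly controlling the
# mobility-safety and self-interest-other-interest parameter pairs.
# The linear mapping is an illustrative assumption, not a proposal.

def dial_to_weights(d):
    """d = 0: maximal safety and other-regard; d = 1: maximal mobility and self-interest."""
    if not 0.0 <= d <= 1.0:
        raise ValueError("dial must be in [0, 1]")
    return {
        "mobility": d,
        "safety": 1.0 - d,
        "self_interest": d,
        "other_interest": 1.0 - d,
    }
```

A different, nonlinear mapping would trace a different indifference curve through the same parameter space, which is why the tradeoff's shape, not just its endpoints, matters ethically.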

  33. Another problem with this objection is that it considers frequency but not stakes. It might be true that there are more opportunities for mobility and fewer for safety. But the stakes for safety might be much higher than those for mobility: safety is about avoiding injuries and physical harm, whereas mobility is only about getting to a destination faster.

  34. Shariff et al. (2017) discuss the importance of “virtue signalling,” though not in the context of PES but as a psychological mechanism to exploit (in advertising and communication) to increase AV adoption.


Acknowledgements

I am grateful for thoughts and comments I received from Johanna Thoma and Sebastian Köhler, from students at Sonoma State University, from participants and the audience at the Automated Vehicles Symposium 2019 in Orlando, as well as from an anonymous reviewer for this journal.

Author information

Corresponding author

Correspondence to Johannes Himmelreich.



Cite this article

Himmelreich, J. No wheel but a dial: why and how passengers in self-driving cars should decide how their car drives. Ethics Inf Technol 24, 45 (2022). https://doi.org/10.1007/s10676-022-09668-5

