Irresponsibilities, inequalities and injustice for autonomous vehicles

  • Original Paper
  • Published in Ethics and Information Technology

Abstract

Because autonomous vehicles may cause both novel and known forms of damage, harm and injury, the issue of responsibility has been a recurring theme in the debate concerning them. Yet the discussion of responsibility has obscured the finer distinctions between the underlying concepts of responsibility and the details of their application to the interaction between human beings and artificial decision-making entities. By developing meaningful distinctions and examining their ramifications, this article refines the underlying concepts that together inform the idea of responsibility. Two different approaches are offered to the question of responsibility and autonomous vehicles: targeting and risk distribution. The article then introduces a thought experiment which situates autonomous vehicles within the context of crash optimisation impulses and coordinated or networked decision-making. It argues that guiding ethical frameworks overlook compound or aggregated effects which may arise, and which can lead to subtle forms of structural discrimination. Insofar as such effects remain unrecognised by the legal systems relied upon to remedy them, the potential for societal inequalities is increased and entrenched, and situations of injustice and impunity may be unwittingly maintained. This second set of concerns may represent a hitherto overlooked type of responsibility gap, one arising from the absence of adequate accountability processes capable of challenging systemic risk displacement.

Notes

  1. H. L. A. Hart, in proposing the typology of responsibility, suggested that ‘it is clear that in this causal sense not only human beings but also their actions or omissions, and things, conditions, and events, may be said to be responsible for outcomes’ (Hart 2008, p. 214).

  2. Whereas wrongs and injuries import blame, the neutral status of harm and damage may be remedied by reinstating the position prior to the event triggering the harm or damage.

  3. Functional autonomy describes systems capable of undertaking only predetermined or strictly limited forms of independent action, while systems possessing discretional autonomy substitute for human decision-making processes within their domain, p. 327.

  4. Existing models for this include modalities of strict liability, as well as mandatory insurance schemes.

  5. Marchant and Lindor stipulate the caveat that, in this case, the manufacturer is ultimately responsible from a doctrinal perspective, which suggests that the flaw in reasoning lies with the legal doctrine upon which they comment.

  6. Christof Heyns articulates the human dignity argument: ‘Death by algorithm means that people are treated simply as targets and not as complete and unique human beings’ (Heyns 2016).

  7. The human beings physically in the vehicle have been characterised in the literature primarily as owners. Obviously, a proprietary relationship with the vehicle need not be the determining variable in the responsibility calculus. Instead, a more encompassing and consistent criterion could be characterised on the basis of beneficiary status in relation to that vehicle.

  8. Classically, responsibility doctrines trace intentional self-handicapping back to the initial intentional act, treating it as recklessness with regard to the risks of foreseeable damage. The clarity of this doctrine might be muddied here because the occupant’s role responsibility is one of omission, akin to oversight: the occupant need only be capable of intervening in an accident scenario, not of actively operating the vehicle. Moreover, where an accident occurs, the occupant’s responsibility will be limited to her failure to intervene, rather than to having caused the accident (thereby significantly curbing the scope of her liability).

  9. While negligence is the typical form of responsibility on the road at present, conflating the conceptual distinctions presented here becomes increasingly problematic when additional actors are involved in producing the outcome.

  10. Both the programmer and the manufacturer are deployed here in their prototypical, singular form. While the complexity involved in these processes suggests that these roles will be played by a multitude of persons and corporations, the point here is that significant responsibility issues remain even in this rudimentary caricature of reality. Furthermore, the consideration here excludes other identifiable parties who may bear at least a portion of the responsibility for an accident, such as the manufacturer of a component used in the autonomous system, or the road designer where an intelligent road system is deployed to assist control of the autonomous vehicle (see Marchant and Lindor 2012).

  11. This targeting discussion is written under the assumption that targeting is an expression of ‘crash optimisation’: actions and inactions within accident scenarios that minimise the objective total harm. (A minimal illustrative sketch of such a harm-minimising rule appears after these notes.) These comments do not encompass the possibility of autonomous vehicles being directed to place the risk burden on certain groups or individuals, which would invoke a different set of issues beyond the scope of this paper.

  12. The suggestion here goes beyond the biased computer systems described by Friedman and Nissenbaum (1996): ‘A system discriminates unfairly if it denies an opportunity or a good or if it assigns an undesirable outcome to an individual or group of individuals on grounds that are unreasonable or inappropriate’ (at 332). This suggests that discrimination by technological artefacts bears human fingerprints, and that responses such as value-sensitive design can go a long way towards meeting those concerns. Unlike the effects of biased computer systems discussed by Friedman and Nissenbaum, however, the structural form of discrimination countenanced here is difficult to fathom in a direct sense: the system does not “assign” benefits or burdens, nor does it act upon objectionable grounds. Rather, the system “optimises”, producing results that merely resemble discrimination. Furthermore, these “discriminatory” effects are difficult to encapsulate because they are cumulative and emergent, rather than directly and causally connected as in traditional manifestations of discrimination. These differences suggest that the solutions designed against computer bias would be difficult to apply effectively to the problems of structural discrimination posed here.

  13. At a basic level, systematic outcomes are endemic to policies that stipulate preferences: probabilistically, the preferred type of result will occur more often simply as a consequence of the preference settings. A toy simulation illustrating this cumulative skew appears after these notes.

  14. The choice of the fat man in the thought experiment, whether for subliminal or pragmatic reasons, may hint at discriminatory tendencies when considered in light of research suggesting that systematic discriminatory biases disadvantage obese individuals (Puhl and Brownell 2001).

  15. Here, the methodologies deployed by value-sensitive design, especially those pertaining to the identification of direct and indirect stakeholders and the benefits and harms they might incur, go some way towards addressing these issues (Friedman et al. 2006).

  16. I owe this idea to John Danaher, in conversation on Algocracy, https://algocracy.wordpress.com/.
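
As a minimal illustration of the working assumption in note 11, the following sketch shows what a harm-minimising (‘crash optimisation’) decision rule could look like. The manoeuvre options, harm estimates and function name are hypothetical placeholders introduced here for exposition, not a model drawn from the paper or the literature it discusses.

```python
# Minimal sketch only: a toy 'crash optimisation' rule that selects the
# action (or inaction) with the lowest estimated objective total harm.
# The manoeuvres and harm figures are hypothetical placeholders.

def choose_manoeuvre(options):
    """Return the option with the lowest estimated total harm."""
    return min(options, key=lambda o: o["expected_harm"])

options = [
    {"action": "brake_in_lane", "expected_harm": 0.8},  # e.g. rear-end collision risk
    {"action": "swerve_left", "expected_harm": 0.5},    # e.g. risk to adjacent lane
    {"action": "swerve_right", "expected_harm": 0.9},   # e.g. risk to the pavement
]

print(choose_manoeuvre(options)["action"])  # -> swerve_left
```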
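
And as an illustration of note 13, the following toy simulation shows how a fixed preference setting, applied repeatedly, accumulates into a systematically one-sided distribution of risk burdens. The two groups, the harm distributions and the preference margin are hypothetical assumptions introduced purely for illustration.

```python
# Toy simulation only: repeated application of a policy that prefers to
# spare one party produces a systematically skewed burden distribution.
# The groups, harm distributions and 0.2 margin are hypothetical.
import random

random.seed(0)
burden = {"group_a": 0, "group_b": 0}

for _ in range(10_000):
    # Estimated harms to each party in an unavoidable-dilemma scenario.
    harm_a = random.gauss(1.0, 0.2)
    harm_b = random.gauss(1.0, 0.2)
    # Preference setting: spare group_a unless harming group_b is
    # clearly worse (by more than the 0.2 margin).
    target = "group_b" if harm_a >= harm_b - 0.2 else "group_a"
    burden[target] += 1

print(burden)  # group_b bears the burden in roughly three quarters of runs
```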

References

  • Bakan, J. (2005). The corporation: The pathological pursuit of profit and power. London: Constable & Robinson.


  • Bonnefon, J. F., Shariff, A., & Rahwan, I. (2015). Autonomous vehicles need experimental ethics: Are we ready for utilitarian cars? arXiv. Retrieved October 12, 2015, from http://arxiv.org/abs/1510.03346.

  • Coffee, J. C. (1981). ‘No soul to damn: no body to kick’: An unscandalized inquiry into the problem of corporate punishment. Michigan Law Review, 79(3), 386–459.


  • Cummings, M. L., Mastracchio, C., Thornburg, K. M., & Mkrtchyan, A. (2013). Boredom and distraction in multiple unmanned vehicle supervisory control. Interacting with Computers, 25(1), 34–47.


  • Davis, L. C. (2015). Would you pull the trolley switch? Does it matter? The Atlantic. Retrieved October 9, 2015, from http://www.theatlantic.com/technology/archive/2015/10/trolley-problem-history-psychology-morality-driverless-cars/409732/.

  • de Sio, F. S. (2017). Killing by autonomous vehicles and the legal doctrine of necessity. Ethical Theory and Moral Practice, 20(2), 411–429.


  • Doctorow, C. (2015). The problem with self-driving cars: Who controls the code? The Guardian. Retrieved December 23, 2015, from http://www.theguardian.com/technology/2015/dec/23/the-problem-with-self-driving-cars-who-controls-the-code.

  • Douma, F., & Palodichuk, S. A. (2012). Criminal liability issues created by autonomous vehicles. Santa Clara Law Review, 52(4), 1157–1169.


  • Edmonds, D. (2013). Would you kill the fat man? The trolley problem and what your answer tells us about right and wrong. Princeton: Princeton University Press.


  • Friedman, B., Kahn, P. H., & Borning, A. (2006). Value sensitive design and information systems. In P. Zhang & D. F. Galletta (Eds.), Human–computer interaction and management information systems: Foundations (pp. 348–372). London: Taylor and Francis.


  • Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330–347.


  • Gerdes, J. C., & Thornton, S. M. (2015). Implementable ethics for autonomous vehicles. In M. Maurer, J. C. Gerdes, B. Lenz, & H. Winner (Eds.), Autonomes Fahren (pp. 87–102). Berlin: Springer.


  • Gleick, J. (1997). Chaos: Making a new science. New York: Vintage.


  • Goodall, N. J. (2014). Machine ethics and automated vehicles. In G. Meyer & S. Beiker (Eds.), Road vehicle automation (pp. 93–102). Berlin: Springer.


  • Goodall, N. J. (2016). Away from trolley problems and toward risk management. Applied Artificial Intelligence, 30(8), 810–821.


  • Graham, K. (2012). Of frightened horses and autonomous vehicles: Tort law and its assimilation of innovations. Santa Clara Law Review, 52(4), 1241.


  • Hart, H. L. A. (2008). Punishment and responsibility: Essays in the philosophy of law. Oxford: Oxford University Press.


  • Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for crashes of autonomous vehicles: An ethical analysis. Science and Engineering Ethics, 21(3), 619–630.


  • Heyns, C. (2016). Autonomous weapons systems: Living a dignified life and dying a dignified death. In N. Bhuta, S. Beck, R. Geiss, H.-Y. Liu, & C. Kress (Eds.), Autonomous weapons systems (pp. 3–20). Cambridge: Cambridge University Press.


  • Jain, N. (2016). Autonomous weapons systems: New frameworks for individual responsibility. In N. Bhuta, S. Beck, R. Geiss, H.-Y. Liu, & C. Kress (Eds.), Autonomous weapons systems: Law, ethics, policy (pp. 303–324). Cambridge: Cambridge University Press.


  • Lin, P. (2013a). The ethics of saving lives with autonomous cars is far murkier than you think. WIRED. Retrieved July 30, 2013, from http://www.wired.com/2013/07/the-surprising-ethics-of-robot-cars/.

  • Lin, P. (2013b). The ethics of autonomous cars. The Atlantic. Retrieved October 8, 2013, from http://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/.

  • Lin, P. (2014a). The robot car of tomorrow may just be programmed to hit you. WIRED. Retrieved May 6, 2014, from http://www.wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-programmed-to-hit-you/.

  • Lin, P. (2014b). Here’s a terrible idea: Robot cars with adjustable ethics settings. WIRED. Retrieved August 18, 2014, from http://www.wired.com/2014/08/heres-a-terrible-idea-robot-cars-with-adjustable-ethics-settings/.

  • Lin, P. (2015). Why ethics matters for autonomous cars. In M. Maurer, J. C. Gerdes, B. Lenz, & H. Winner (Eds.), Autonomes Fahren (pp. 69–85). Berlin: Springer.


  • Liu, H.-Y. (2015). Law’s impunity: Responsibility and the modern private military company. Oxford: Hart Publishing.


  • Liu, H.-Y. (2016). Refining responsibility: Differentiating two types of responsibility issues raised by autonomous weapons systems. In N. Bhuta, S. Beck, R. Geiss, H.-Y. Liu, & C. Kress (Eds.), Autonomous weapons systems: Law, ethics, policy (pp. 325–344). Cambridge: Cambridge University Press.


  • MacCormick, N. (1995). Argumentation and interpretation in law. Argumentation, 9(3), 467–480.


  • Marchant, G. E., & Lindor, R. A. (2012). The coming collision between autonomous vehicles and the liability system. Santa Clara Law Review, 52(4), 1321.


  • Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.


  • Nyholm, S., & Smids, J. (2016). The ethics of accident-algorithms for self-driving cars: An applied trolley problem? Ethical Theory and Moral Practice, 19(5), 1275–1289.


  • Puhl, R., & Brownell, K. D. (2001). Bias, discrimination, and obesity. Obesity Research, 9(12), 788–805.


  • Robbins, M. (2016). Statistically, self-driving cars are about to kill someone. What happens next? The Guardian. Retrieved June 14, 2016, from https://www.theguardian.com/science/2016/jun/14/statistically-self-driving-cars-are-about-to-kill-someone-what-happens-next.

  • Salomon v A Salomon. (1897). [1897] AC 22. U.K. House of Lords.

  • Santa Clara County v Southern Pacific Railroad. (1886). 118 U.S. 394. U.S. Supreme Court.

  • Schulzke, M. (2013). Autonomous weapons and distributed responsibility. Philosophy & Technology, 26(2), 203–219.


  • Seck, S. L. (2011). Collective responsibility and transnational corporate conduct. In T. Isaacs & R. Vernon (Eds.), Accountability for collective wrongdoing (pp. 140–168). Cambridge: Cambridge University Press.


  • Suchman, L., & Weber, J. (2016). Human–machine autonomies. In N. Bhuta, S. Beck, R. Geiss, H.-Y. Liu, & C. Kress (Eds.), Autonomous weapons systems: Law, ethics, policy (pp. 75–102). Cambridge: Cambridge University Press.


  • Veitch, S. (2007). Law and irresponsibility: On the legitimation of human suffering. Oxford: Routledge-Cavendish.


Author information

Correspondence to Hin-Yan Liu.


About this article


Cite this article

Liu, HY. Irresponsibilities, inequalities and injustice for autonomous vehicles. Ethics Inf Technol 19, 193–207 (2017). https://doi.org/10.1007/s10676-017-9436-2

