
Vicarious liability: a solution to a problem of AI responsibility?

  • Original Paper
  • Ethics and Information Technology

Abstract

Who is responsible when an AI machine causes something to go wrong? Or is there a gap in the ascription of responsibility? Answers range from claiming that there is a unique responsibility gap, through positing several distinct responsibility gaps, to denying that any gap exists. In a nutshell, the problem is this: on the one hand, it seems fitting to hold someone responsible for a wrong caused by an AI machine; on the other hand, there seems to be no fitting bearer of responsibility for this wrong. In this article, we focus on a particular (aspect of the) AI responsibility gap: it seems fitting that someone should bear the legal consequences in scenarios involving AI machines with design defects, yet there seems to be no fitting bearer of those consequences. We approach this problem from a legal perspective and propose vicarious liability of AI manufacturers as a solution. Our proposal comes in two variants: the first has a narrower range of application but can easily be integrated into current legal frameworks; the second requires a revision of current legal frameworks but has a wider range of application, employing a broadened account of vicarious liability. We highlight the strengths of both variants and conclude by showing how vicarious liability offers important insights for addressing a moral AI responsibility gap.


Notes

  1. We will mainly use the term AI machine in a very broad sense, meaning any machine equipped with a form of artificial intelligence whose behaviour may have normatively relevant consequences. In the following, we will encounter examples involving specific kinds of machines, such as robots and autonomous vehicles. While not everything that is said about one of these kinds automatically carries over to the others, much of the discussion that is relevant to AI responsibility would be lost if we were to exclude works that focus on robot responsibility or on specific kinds of AI machines from our analysis.

  2. The philosophical debate on fittingness is vast (see, e.g., the survey in Howard, 2018). Nevertheless, for the purposes of the present work it is not necessary to assume a specific view; it is sufficient to have a simple understanding of this notion as an element introducing a normative perspective.

  3. Since AI machines do not usually have decisional autonomy, one might instead say that there is ultimately human-AI collaboration with respect to decisions: at least at the current stage of AI technology, humans are always causally responsible for how a machine is initially programmed.

  4. For an opposite view on the moral responsibility of robots, see Sullins (2011).

  5. To give a few examples, in Bazley v Curry 2 SCR 534 (1999), Lister v Hesley Hall Ltd 1 AC 215 (2002), and Majrowski v Guy’s and St. Thomas’s NHS Trust UKHL 34 (2006), an employer was held vicariously liable for the intentional wrongdoing of their employees (abuse, harassment, sexual misconduct).

  6. We do not discuss permissions since, in our view, ascriptions of vicarious liability primarily deal with cases of norm violation.

References

  • Asaro, P. M. (2012). A body to kick, but still no soul to damn: Legal perspectives on robotics. Robot ethics: The ethical and social implications of robotics (pp. 169–186). MIT Press.

  • Brodie, D. (2006). The enterprise and the borrowed worker. Industrial Law Journal, 35(1), 87–92.

  • Brodie, D. (2007). Enterprise liability: Justifying vicarious liability. Oxford Journal of Legal Studies, 27(3), 493–508.

  • Chesterman, S. (2021). We, the robots? Regulating artificial intelligence and the limits of the law. Cambridge University Press.

  • Coeckelbergh, M. (2020a). Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and Engineering Ethics, 26(4), 2051–2068.

  • Coeckelbergh, M. (2020b). AI ethics. MIT Press.

  • Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer.

  • Giliker, P. (2010). Vicarious liability in tort: A comparative perspective. Cambridge University Press.

  • Gray, A. (2018). Vicarious liability: Critique and reform. Hart Publishing.

  • Gunkel, D. J. (2020). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology, 22, 307–320.

  • Gurney, J. (2017). Applying a reasonable driver standard to accidents caused by autonomous vehicles. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0 (pp. 51–65). Oxford University Press.

  • Hakli, R., & Mäkelä, P. (2019). Moral responsibility of robots and hybrid agents. The Monist, 102(2), 259–275.

  • Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for crashes of autonomous vehicles: An ethical analysis. Science and Engineering Ethics, 21(3), 619–630.

  • Howard, C. (2018). Fittingness. Philosophy Compass, 13, e12542.

  • Hyman, J. (2015). Action, knowledge, and will. Oxford University Press.

  • Köhler, S., Roughley, N., & Sauer, H. (2017). Technologically blurred accountability? Technology, responsibility gaps and the robustness of our everyday conceptual scheme. In C. Ulbert, P. Finkenbusch, E. Sondermann, & T. Debiel (Eds.), Moral agency and the politics of responsibility (pp. 51–68). Routledge.

  • Lin, P., Abney, K., & Jenkins, R. (Eds.). (2017). Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford University Press.

  • Loh, W., & Loh, J. (2017). Autonomy and responsibility in hybrid systems. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0 (pp. 35–50). Oxford University Press.

  • Magnet, J. (2015). Vicarious liability and the professional employee. Canadian Cases on the Law of Torts, 6, 208–226.

  • Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175–183.

  • Santoni de Sio, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philosophy & Technology (online first).

  • Sullins, J. P. (2011). When is a robot a moral agent? In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 151–161). Cambridge University Press.

  • Tigard, D. W. (2020). There is no techno-responsibility gap. Philosophy & Technology (online first).

  • Turner, J. (2019). Robot rules: Regulating artificial intelligence. Palgrave Macmillan.

  • White, T. N., & Baum, S. D. (2017). Liability for present and future robotics technology. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0 (pp. 66–79). Oxford University Press.

  • Wu, S. S. (2016). Product liability issues in the US and associated risk management. In M. Maurer, J. C. Gerdes, B. Lenz, & H. Winner (Eds.), Autonomous driving (pp. 553–569). Springer.


Acknowledgements

Daniela Glavaničová was supported by the Slovak Research and Development Agency under the contract no. APVV-170057 and VEGA 1/0197/20. Matteo Pascucci was supported by the Štefan Schwarz Fund for the project “A fine-grained analysis of Hohfeldian concepts” (2020–2023) and VEGA 2/0125/22. The authors thank John Hyman, Maximilian Kiener, Alessandra Marra and her students at LMU Munich for thoughtful discussions of the paper’s central ideas.

Author information


Contributions

The contents of the article are the result of joint research by the two authors.

Corresponding author

Correspondence to Matteo Pascucci.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Glavaničová, D., & Pascucci, M. Vicarious liability: a solution to a problem of AI responsibility? Ethics and Information Technology, 24, 28 (2022). https://doi.org/10.1007/s10676-022-09657-8
