Black is the new orange: how to determine AI liability

Artificial Intelligence and Law 31 (1):133-167 (2023)

Abstract

Autonomous artificial intelligence (AI) systems can behave unpredictably, causing loss or damage to individuals, and intricate questions must be resolved before courts can determine liability for such harm. Until recently, understanding the inner workings of these “black boxes” has been exceedingly difficult; Explainable Artificial Intelligence (XAI), however, can help simplify the complex problems that arise with autonomous AI systems. In this context, this article surveys the technical explanations XAI can provide and shows how explanations suitable for establishing liability can be reached in court. It analyzes whether existing liability frameworks, in both civil and common law tort systems, can, with the support of XAI, address legal concerns related to AI. Lastly, it argues that the further development and adoption of these frameworks, together with XAI, should allow AI liability cases to be decided under current legal and regulatory rules until new liability regimes for AI are enacted.

Links

PhilArchive





Similar books and articles

Artificial intelligence as law. [REVIEW] Bart Verheij - 2020 - Artificial Intelligence and Law 28 (2):181-206.
Is Explainable AI Responsible AI? Isaac Taylor - forthcoming - AI and Society.
Law, liability and expert systems. Joseph A. Cannataci - 1989 - AI and Society 3 (3):169-183.
