Reasoning about responsibility in autonomous systems: challenges and opportunities

AI and Society 38 (4):1453-1464 (2023)

Abstract

Ensuring the trustworthiness of autonomous systems and artificial intelligence is an important interdisciplinary endeavour. In this position paper, we argue that this endeavour will benefit from technical advancements in capturing various forms of responsibility, and we present a comprehensive research agenda to achieve this. In particular, we argue that ensuring the reliability of autonomous systems can take advantage of technical approaches for quantifying degrees of responsibility and for coordinating tasks on that basis. Moreover, we deem that, in certifying the legality of an AI system, formal and computationally implementable notions of _responsibility_, _blame_, _accountability_, and _liability_ are applicable for addressing potential responsibility gaps (i.e. situations in which a group is responsible, but individuals' responsibility may be unclear). This is a call to enable AI systems themselves, as well as those involved in the design, monitoring, and governance of AI systems, to represent and reason about who can be seen as responsible prospectively (e.g. for completing a task in the future) and who can be seen as responsible retrospectively (e.g. for a failure that has already occurred). To that end, in this work, we show that responsibility reasoning should play a key role across all stages of the design, development, and deployment of trustworthy autonomous systems (TAS). This position paper is a first step towards establishing a road map and research agenda on how the notion of responsibility can provide novel solution concepts for ensuring the _reliability_ and _legality_ of TAS and, as a result, enable an effective embedding of AI technologies into society.


Similar books and articles

Call for papers. [author unknown] - 2018 - AI and Society 33 (3):457-458.
Call for papers. [author unknown] - 2018 - AI and Society 33 (3):453-455.
Privacy preserving or trapping? Xiao-yu Sun & Bin Ye - forthcoming - AI and Society:1-11.
The political imaginary of National AI Strategies. Guy Paltieli - 2022 - AI and Society 37 (4):1613-1624.

Author's Profile

Sebastian Stein
Universität Stuttgart

Citations of this work

Estados Unidos, China y Rusia: propuestas nacionales para una ética de la IA en la nueva guerra fría [United States, China, and Russia: national proposals for an AI ethics in the new cold war]. Fabio Morandín-Ahuerma - 2023 - In Principios normativos para una ética de la inteligencia artificial. Puebla, México: Consejo de Ciencia y Tecnología del Estado de Puebla (Concytep). pp. 162-185.
