Clinical AI: opacity, accountability, responsibility and liability

AI and Society 36 (2):535-545 (2021)

Abstract

The aim of this literature review was to compose a narrative review, supported by a systematic approach, to critically identify and examine concerns about accountability and the allocation of responsibility and legal liability as they apply to the clinician and the technologist in the use of opaque AI-powered systems (AIS) in clinical decision making. The review asks whether it is permissible for a clinician to use an opaque AIS in clinical decision making and, if a patient were harmed as a result of a clinician following an AIS's suggestion, how responsibility and legal liability would be allocated. Literature was systematically searched, retrieved, and reviewed from nine databases, including items from three clinical professional regulators, as well as relevant grey literature from governmental and non-governmental organisations. This literature was subjected to inclusion/exclusion criteria, and the items found relevant to this review underwent data extraction. The review found multiple concerns about opacity, accountability, responsibility and liability for the stakeholders of technologists and clinicians in the creation and use of AIS in clinical decision making. Accountability is challenged when the AIS used is opaque, and the allocation of responsibility is somewhat unclear. Legal analysis would help stakeholders understand their obligations and prepare for an undesirable scenario in which a patient is harmed when an AIS was used.

