Percentages and reasons: AI explainability and ultimate human responsibility within the medical field

Ethics and Information Technology 26 (2):1-10 (2024)


In current debates on the ethical implementation of AI, two demands in particular are linked: the call for explainability and the call for ultimate human responsibility. In the medical field, both converge on one person: it is the physician to whom AI output should be explainable and who should therefore bear ultimate responsibility for diagnostic or treatment decisions based on that output. In this article, we argue that black box AI creates a rationally irresolvable epistemic situation for the physician involved. Specifically, the strange errors that AI occasionally makes sometimes detach its output from human reasoning. We further argue that such an epistemic situation is problematic in the context of ultimate human responsibility. Since these strange errors limit the promise of explainability, and since the concept of explainability frequently appears irrelevant or insignificant when applied to a diverse set of medical applications, we deem it worthwhile to reconsider the call for ultimate human responsibility.


Similar books and articles

The Ethics of AI in Human Resources. Evgeni Aizenberg & Matthew J. Dennis - 2022 - Ethics and Information Technology 24 (3):1-3.
Explainability, Public Reason, and Medical Artificial Intelligence. Michael Da Silva - 2023 - Ethical Theory and Moral Practice 26 (5):743-762.
Correction to: The Ethics of AI in Human Resources. Evgeni Aizenberg & Matthew J. Dennis - 2023 - Ethics and Information Technology 25 (1):1-1.


Author's Profile

Markus Herrmann
University of Heidelberg
