On the Justified Use of AI Decision Support in Evidence-Based Medicine: Validity, Explainability, and Responsibility

Cambridge Quarterly of Healthcare Ethics: 1-7 (forthcoming)

Abstract

When is it justified to use opaque artificial intelligence (AI) output in medical decision-making? Consideration of this question is of central importance for the responsible use of opaque machine learning (ML) models, which have been shown to produce accurate and reliable diagnoses, prognoses, and treatment suggestions in medicine. In this article, I discuss the merits of two answers to the question. According to the Explanation View, clinicians must have access to an explanation of why an output was produced. According to the Validation View, it is sufficient that the AI system has been validated using established standards for safety and reliability. I defend the Explanation View against two lines of criticism, and I argue that, within the framework of evidence-based medicine, mere validation seems insufficient for the use of AI output. I end by characterizing the epistemic responsibility of clinicians and pointing out that a mere AI output cannot in itself ground a practical conclusion about what to do.

Links

PhilArchive



Similar books and articles

Should we be afraid of medical AI? Ezio Di Nucci - 2019 - Journal of Medical Ethics 45 (8):556-558.
Trust in Medical Artificial Intelligence: A Discretionary Account. Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.


Author's Profile

Sune Holm
University of Copenhagen

Citations of this work

Algorithmic legitimacy in clinical decision-making. Sune Holm - 2023 - Ethics and Information Technology 25 (3):1-10.
