Predicting and Preferring

Inquiry: An Interdisciplinary Journal of Philosophy (forthcoming)

Abstract

The use of machine learning, or “artificial intelligence” (AI), in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue that this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion, I connect this concern to broader issues in AI safety.

Similar books and articles

Law, Ethics, and the Patient Preference Predictor. R. Dresser - 2014 - Journal of Medicine and Philosophy 39 (2): 178–186.
Predicting End-of-Life Treatment Preferences: Perils and Practicalities. P. H. Ditto & C. J. Clark - 2014 - Journal of Medicine and Philosophy 39 (2): 196–204.
Autonomy, shared agency and prediction. Sungwoo Um - 2022 - Journal of Medical Ethics 48 (5): 313–314.
Reflections on the Patient Preference Predictor Proposal. D. W. Brock - 2014 - Journal of Medicine and Philosophy 39 (2): 153–160.
Sovereignty, authenticity and the patient preference predictor. Ben Schwan - 2022 - Journal of Medical Ethics 48 (5): 311–312.
The concept of precedent autonomy. John K. Davis - 2002 - Bioethics 16 (2): 114–133.


Author's Profile

Nathaniel Sharadin
University of Hong Kong
