Predicting and Preferring

Inquiry: An Interdisciplinary Journal of Philosophy (forthcoming)

Abstract

The use of machine learning, or “artificial intelligence” (AI), in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue that this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion, I connect this concern to broader issues in AI safety.

Links

PhilArchive


Similar books and articles

Will intelligent machines become moral patients?Parisa Moosavi - 2023 - Philosophy and Phenomenological Research 109 (1):95-116.
AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.

Analytics

Added to PP
2023-08-09



Author's Profile

Nathaniel Sharadin
University of Hong Kong
