Computer knows best? The need for value-flexibility in medical AI

Journal of Medical Ethics 45 (3):156-160 (2019)

Abstract

Artificial intelligence is increasingly being developed for use in medicine, including for diagnosis and in treatment decision making. The use of AI in medical treatment raises many ethical issues that are yet to be explored in depth by bioethicists. In this paper, I focus specifically on the relationship between the ethical ideal of shared decision making and AI systems that generate treatment recommendations, using the example of IBM’s Watson for Oncology. I argue that use of this type of system creates both important risks and significant opportunities for promoting shared decision making. If value judgements are fixed and covert in AI systems, then we risk a shift back to more paternalistic medical care. However, if designed and used in an ethically informed way, AI could offer a potentially powerful way of supporting shared decision making. It could be used to incorporate explicit value reflection, promoting patient autonomy. In the context of medical treatment, we need value-flexible AI that can both respond to the values and treatment goals of individual patients and support clinicians to engage in shared decision making.

Links

PhilArchive





Similar books and articles

Shared Decision-Making and the Lower Literate Patient. David I. Shalowitz & Michael S. Wolf - 2004 - Journal of Law, Medicine and Ethics 32 (4):759-764.

Analytics

Added to PP
2018-11-23

Downloads
142 (#135,046)

6 months
23 (#124,770)
