Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy

Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) Co-Located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019) (2020)


This paper analyzes how two foundational principles of medical ethics, the trusted doctor and patient autonomy, can be undermined by the use of machine learning (ML) algorithms, and addresses the legal significance of this problem. The paper can serve as a guide for health care providers and other stakeholders on how to anticipate and, in some cases, mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map for what needs to be done to achieve an acceptable level of explainability in an ML algorithm when it is used in a healthcare context.



Similar books and articles

Should we be afraid of medical AI? Ezio Di Nucci - 2019 - Journal of Medical Ethics 45 (8):556-558.
Placebo: Deception and the notion of autonomy. Evangelos D. Protopapadakis - 2018 - In George Arabatzis & Evangelos D. Protopapadakis (eds.), Thinking in Action. Athens, Greece: pp. 103-115.
Patient autonomy: A view from the kitchen. Rita M. Struhkamp - 2005 - Medicine, Health Care and Philosophy 8 (1):105-114.
From privacy to anti-discrimination in times of machine learning. Thilo Hagendorff - 2019 - Ethics and Information Technology 21 (4):331-343.
Teaching for patient-centred ethics. Richard E. Ashcroft - 2000 - Medicine, Health Care and Philosophy 3 (3):285-293.



Author Profiles

Michał Klincewicz
Tilburg University
Lily Frank
Eindhoven University of Technology
