Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy

Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019), co-located with the 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019) (2020)

Abstract

This paper analyzes how two foundational principles of medical ethics, the trusted doctor and patient autonomy, can be undermined by the use of machine learning (ML) algorithms, and addresses the legal significance of this undermining. The paper can serve as a guide for health care providers and other stakeholders on how to anticipate, and in some cases mitigate, ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map for what needs to be done to achieve an acceptable level of explainability in an ML algorithm used in a healthcare context.


Links

PhilArchive

Similar books and articles

Testimonial injustice in medical machine learning. Giorgia Pozzi - 2023 - Journal of Medical Ethics 49 (8): 536-540.


Author Profiles

Michał Klincewicz
Tilburg University
Lily Frank
Eindhoven University of Technology
