Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy
Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) Co-Located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019) (2020)
Abstract: This paper analyses how two foundational principles of medical ethics, the trusted doctor and patient autonomy, can be undermined by the use of machine learning (ML) algorithms, and addresses the legal significance of this problem. The paper can serve as a guide for health care providers and other stakeholders on how to anticipate, and in some cases mitigate, ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map of what needs to be done to achieve an acceptable level of explainability in an ML algorithm used in a healthcare context.
Similar books and articles
Legal Requirements on Explainability in Machine Learning. Adrien Bibal, Michael Lognoul, Alexandre de Streel & Benoît Frénay - 2021 - Artificial Intelligence and Law 29 (2):149-169.
Should Artificial Intelligence Augment Medical Decision Making? The Case for an Autonomy Algorithm. Camillo Lamanna - 2018 - AMA Journal of Ethics 20 (9):E902-910.
Ethics and Bias in Machine Learning: A Technical Study of What Makes Us "Good". Nicole Shadowen - 2019 - In Newton Lee (ed.), The Transhumanism Handbook. Springer Verlag. pp. 247-261.
On the Ethics of Algorithmic Decision-Making in Healthcare. Thomas Grote & Philipp Berens - 2020 - Journal of Medical Ethics 46 (3):205-211.
Respect for Autonomy and Human Dignity in Codes of Conduct of Health Care Professionals (in Slovakia). Katarína Komenská - 2012 - Ethics and Bioethics (in Central Europe) 2 (3-4):192-200.
Two Challenges for CI Trustworthiness and How to Address Them. Kevin Baum, Eva Schmidt & Maximilian A. Köhl - 2017.
Argument Based Machine Learning Applied to Law. Martin Možina, Jure Žabkar, Trevor Bench-Capon & Ivan Bratko - 2005 - Artificial Intelligence and Law 13 (1):53-73.
Machine Learning: ECML-93. European Conference on Machine Learning, Vienna, Austria, April 5-7, 1993: Proceedings. Pavel B. Brazdil & Austrian Research Institute for Artificial Intelligence - 1993 - Springer Verlag.
Placebo: Deception and the Notion of Autonomy. Evangelos D. Protopapadakis - 2018 - In George Arabatzis & Evangelos D. Protopapadakis (eds.), Thinking in Action. Athens, Greece: pp. 103-115.
Patient Autonomy: A View From the Kitchen. Rita M. Struhkamp - 2004 - Medicine, Health Care and Philosophy 8 (1):105-114.
From Privacy to Anti-Discrimination in Times of Machine Learning. Thilo Hagendorff - 2019 - Ethics and Information Technology 21 (4):331-343.