What you believe you want, may not be what the algorithm knows
Seppe Segers1,2
  1. Department of Health, Ethics, and Society, Maastricht University Faculty of Health, Medicine and Life Sciences, Maastricht, The Netherlands
  2. Department of Philosophy and Moral Sciences, Ghent University Bioethics Institute, Ghent, Belgium
  Correspondence to Dr Seppe Segers, Department of Health, Ethics, and Society, Maastricht University Faculty of Health, Medicine and Life Sciences, Maastricht, The Netherlands; seppe.segers{at}maastrichtuniversity.nl


Tensions between respect for autonomy and paternalism loom large in Ferrario et al’s discussion of artificial intelligence (AI)-based preference predictors.1 To be sure, their analysis (rightly) brings out the moral importance of respecting patient preferences. My point here, however, is that their consideration of AI-based preference predictors in the treatment of incapacitated patients opens up more fundamental moral questions about the desirability of overruling considered patient preferences, not only when these are disclosed by surrogates, but possibly also in treating competent patients.

I do not advocate such an evolution; its moral desirability calls for a much broader debate, one in which the meaning of ‘doing good’ in medicine, and how this intersects with normative views on ‘the goal(s) of medicine’, would be central elements. While my aim in this piece is more modest, I nonetheless hope to approach that broader question sideways, by indicating how the contribution by Ferrario et al reopens the discussion about paternalism and the normativity of preferences in medicine.

This follows from the reason these authors give for endorsing this technology in the care of incapacitated patients. Their argument for employing such tools in cases of incapacitation rests on the premise that the advice of surrogate decision-makers about care preferences is suboptimal because of their biased perception …


Footnotes

  • Contributors SS is the sole author.

  • Funding This study was funded by the European Research Council (ERC) under Horizon 2020 (grant number 949841).

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.
