  • The Patient preference predictor and the objection from higher-order preferences. Jakob Thrane Mainz - 2023 - Journal of Medical Ethics 49 (3):221-222.
    Recently, Jardas _et al_ have convincingly defended the patient preference predictor (PPP) against a range of autonomy-based objections. In this response, I propose a new autonomy-based objection to the PPP that is not explicitly discussed by Jardas _et al_. I call it the ‘objection from higher-order preferences’. Even if this objection is not sufficient reason to reject the PPP, the objection constitutes a pro tanto reason that is at least as powerful as the ones discussed by Jardas _et al_.
  • Response to commentaries: ‘autonomy-based criticisms of the patient preference predictor’. David Wasserman & David Wendler - 2023 - Journal of Medical Ethics 49 (8):580-582.
    The authors respond to four JME commentaries on their Feature Article, ‘Autonomy-based criticisms of the patient preference predictor’.
  • Autonomy, shared agency and prediction. Sungwoo Um - 2022 - Journal of Medical Ethics 48 (5):313-314.
    The patient preference predictor (PPP) is a computer-based algorithm devised to predict the medical treatment that decisionally incapacitated patients would have preferred. The target paper argues against various criticisms to the effect that the use of a PPP is inconsistent with proper respect for patient autonomy. In this commentary, I aim to add some clarifications to the complex relationship between autonomy and the PPP. First, I highlight one way in which the decision of a surrogate designated by the patient realises respect (...)
  • Predicting and Preferring. Nathaniel Sharadin - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The use of machine learning, or “artificial intelligence” (AI) in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion I connect this concern to broader issues in AI safety.
  • Sovereignty, authenticity and the patient preference predictor. Ben Schwan - 2022 - Journal of Medical Ethics 48 (5):311-312.
    The question of how to treat an incapacitated patient is vexed, both normatively and practically—normatively, because it is not obvious what the relevant objectives are; practically, because even once the relevant objectives are set, it is often difficult to determine which treatment option is best given those objectives. But despite these complications, here is one consideration that is clearly relevant: what a patient prefers. And so any device that could reliably identify a patient’s preferences would be a promising tool for (...)
  • Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons. Sabine Salloch, Tim Kacprowski, Wolf-Tilo Balke, Frank Ursin & Lasse Benzinger - 2023 - BMC Medical Ethics 24 (1):1-9.
    Background: Healthcare providers have to make ethically complex clinical decisions which may be a source of stress. Researchers have recently introduced Artificial Intelligence (AI)-based applications to assist in clinical ethical decision-making. However, the use of such tools is controversial. This review aims to provide a comprehensive overview of the reasons given in the academic literature for and against their use. Methods: PubMed, Web of Science, Philpapers.org and Google Scholar were searched for all relevant publications. The resulting set of publications was title and abstract (...)
  • Commentary on ‘Autonomy-based criticisms of the patient preference predictor’. Collin O'Neil - forthcoming - Journal of Medical Ethics.
    When a patient lacks sufficient capacity to make a certain treatment decision, whether because of deficits in their ability to make a judgement that reflects their values or to make a decision that reflects their judgement or both, the decision must be made by a surrogate. Often the best way to respect the patient’s autonomy, in such cases, is for the surrogate to make a ‘substituted’ judgement on behalf of the patient, which is the decision that best reflects the patient’s (...)
  • Ethics of the algorithmic prediction of goal of care preferences: from theory to practice. Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3):165-174.
    Artificial intelligence (AI) systems are quickly gaining ground in healthcare and clinical decision-making. However, it is still unclear in what way AI can or should support decision-making that is based on incapacitated patients’ values and goals of care, which often requires input from clinicians and loved ones. Although the use of algorithms to predict patients’ most likely preferred treatment has been discussed in the medical ethics literature, no example has been realised in clinical practice. This is due, arguably, to the (...)
  • Meta-surrogate decision making and artificial intelligence. Brian D. Earp - 2022 - Journal of Medical Ethics 48 (5):287-289.
    How shall we decide for others who cannot decide for themselves? And who—or what, in the case of artificial intelligence—should make the decision? The present issue of the journal tackles several interrelated topics, many of them having to do with surrogate decision making. For example, the feature article by Jardas et al explores the potential use of artificial intelligence to predict incapacitated patients’ likely treatment preferences based on their sociodemographic characteristics, raising questions about the means by which (...)
  • A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable. Brian D. Earp, Sebastian Porsdam Mann, Jemima Allen, Sabine Salloch, Vynn Suren, Karin Jongsma, Matthias Braun, Dominic Wilkinson, Walter Sinnott-Armstrong, Annette Rid, David Wendler & Julian Savulescu - forthcoming - American Journal of Bioethics:1-14.
    When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such (...)