12 found
  1. Testimonial injustice in medical machine learning. Giorgia Pozzi - 2023 - Journal of Medical Ethics 49 (8):536-540.
    Machine learning (ML) systems play an increasingly relevant role in medicine and healthcare. As their applications move ever closer to patient care and cure in clinical settings, ethical concerns about the responsibility of their use come to the fore. I analyse an aspect of responsible ML use that bears not only an ethical but also a significant epistemic dimension. I focus on ML systems’ role in mediating patient–physician relations. I thereby consider how ML systems may silence patients’ voices and relativise (...)
  2. Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare. Giorgia Pozzi - 2023 - Ethics and Information Technology 25 (1):1-12.
    Artificial intelligence-based (AI) technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, bringing about novel ethical and epistemological issues that need to be timely addressed. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it is going unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients’ likelihood (...)
  3. From ethics to epistemology and back again: informativeness and epistemic injustice in explanatory medical machine learning. Giorgia Pozzi & Juan M. Durán - forthcoming - AI and Society:1-12.
    In this paper, we discuss epistemic and ethical concerns brought about by machine learning (ML) systems implemented in medicine. We begin by fleshing out the logic underlying a common approach in the specialized literature (which we call the informativeness account). We maintain that the informativeness account limits its analysis to the impact of epistemological issues on ethical concerns without assessing the bearings that ethical features have on the epistemological evaluation of ML systems. We argue that according to this methodological approach, (...)
  4. Conversational Artificial Intelligence and the Potential for Epistemic Injustice. Michiel De Proost & Giorgia Pozzi - 2023 - American Journal of Bioethics 23 (5):51-53.
    In their article, Sedlakova and Trachsel (2023) propose a holistic, ethical, and epistemic analysis of conversational artificial intelligence (CAI) in psychotherapeutic settings. They mainly descri...
  5. From ethics to epistemology and back again: informativeness and epistemic injustice in explanatory medical machine learning. Giorgia Pozzi & Juan M. Durán - 2025 - AI and Society 40 (2):299-310.
    In this paper, we discuss epistemic and ethical concerns brought about by machine learning (ML) systems implemented in medicine. We begin by fleshing out the logic underlying a common approach in the specialized literature (which we call the informativeness account). We maintain that the informativeness account limits its analysis to the impact of epistemological issues on ethical concerns without assessing the bearings that ethical features have on the epistemological evaluation of ML systems. We argue that according to this methodological approach, (...)
  6. Trust and Trustworthiness in AI. Juan Manuel Durán & Giorgia Pozzi - 2025 - Philosophy and Technology 38 (1):1-31.
    Achieving trustworthy AI is increasingly considered an essential desideratum to integrate AI systems into sensitive societal fields, such as criminal justice, finance, medicine, and healthcare, among others. For this reason, it is important to spell out clearly its characteristics, merits, and shortcomings. This article is the first survey in the specialized literature that maps out the philosophical landscape surrounding trust and trustworthiness in AI. To achieve our goals, we proceed as follows. We start by discussing philosophical positions on trust and (...)
  7. Why we should talk about institutional (dis)trustworthiness and medical machine learning. Michiel De Proost & Giorgia Pozzi - 2025 - Medicine, Health Care and Philosophy 28 (1):83-92.
    The principle of trust has been placed at the centre as an attitude for engaging with clinical machine learning systems. However, the notions of trust and distrust remain fiercely debated in the philosophical and ethical literature. In this article, we proceed on a structural level ex negativo as we aim to analyse the concept of “institutional distrustworthiness” to achieve a proper diagnosis of how we should not engage with medical machine learning. First, we begin with several examples that hint at (...)
  8. Machine learning for mental health diagnosis: tackling contributory injustice and epistemic oppression. Giorgia Pozzi & Michiel De Proost - 2024 - Journal of Medical Ethics 50 (9):596-597.
    In their contribution, Ugar and Malele 1 shed light on an often overlooked but crucial aspect of the ethical development of machine learning (ML) systems to support the diagnosis of mental health disorders. The authors restrict their focus to the danger of misdiagnosing mental health pathologies that do not qualify as such within sub-Saharan African communities and argue for the need to include population-specific values in these technologies’ design. However, an analysis of the nature of the harm (...)
  9. On the normality of trust. Giorgia Pozzi - forthcoming - Metascience:1-4.
  10. Physicians’ Professional Role in Clinical Care: AI as a Change Agent. Giorgia Pozzi & Jeroen van den Hoven - 2023 - American Journal of Bioethics 23 (12):57-59.
    Doernberg and Truog (2023) provide an insightful analysis of the role of medical professionals in what they call spheres of morality. While their framework is useful for inquiring into the moral de...
  11. Further remarks on testimonial injustice in medical machine learning: a response to commentaries. Giorgia Pozzi - 2023 - Journal of Medical Ethics 49 (8):551-552.
    In my paper entitled ‘Testimonial injustice in medical machine learning’,1 I argued that machine learning (ML)-based Prescription Drug Monitoring Programmes (PDMPs) could infringe on patients’ epistemic and moral standing, inflicting a testimonial injustice.2 I am very grateful for all the comments the paper received, some of which expand on it while others take a more critical view. This response addresses two objections raised to my consideration of ML-induced testimonial injustice in order to clarify the position taken in the paper. The (...)
  12. Philosophy of science for machine learning: Core issues and new perspectives. Juan Manuel Durán & Giorgia Pozzi (eds.) - forthcoming - Springer.