Results for 'xAI'

53 found
  1. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions.Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse (...)
  2. Evaluating XAI: A comparison of rule-based and example-based explanations.Jasper van der Waa, Elisabeth Nieuwburg, Anita Cremers & Mark Neerincx - 2021 - Artificial Intelligence 291 (C):103404.
  3. Axe the X in XAI: A Plea for Understandable AI.Andrés Páez - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of science for machine learning: Core issues and new perspectives. Springer.
    In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term “explanation” in explainable AI (XAI) can be solved by adopting any of four different extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors’ claim that these accounts can be applied to deep neural networks as they would to any natural phenomenon is mistaken. I also (...)
  4. C-XAI: A conceptual framework for designing XAI tools that support trust calibration.Mohammad Naiseh, Auste Simkute, Baraa Zieni, Nan Jiang & Raian Ali - 2024 - Journal of Responsible Technology 17 (C):100076.
  5. Causal Explanations and XAI.Sander Beckers - 2022 - Proceedings of the 1st Conference on Causal Learning and Reasoning (PMLR).
    Although standard Machine Learning models are optimized for making predictions about observations, more and more they are used for making predictions about the results of actions. An important goal of Explainable Artificial Intelligence (XAI) is to compensate for this mismatch by offering explanations about the predictions of an ML-model which ensure that they are reliably action-guiding. As action-guiding explanations are causal explanations, the literature on this topic is starting to embrace insights from the literature on causal models. Here I take (...)
  6. Explainable Artificial Intelligence (XAI) to Enhance Trust Management in Intrusion Detection Systems Using Decision Tree Model.Basim Mahbooba, Mohan Timilsina, Radhya Sahal & Martin Serrano - 2021 - Complexity 2021:1-11.
    Despite the growing popularity of machine learning models in cyber-security applications, most of these models are perceived as a black box. The eXplainable Artificial Intelligence has become increasingly important to interpret the machine learning models to enhance trust management by allowing human experts to understand the underlying data evidence and causal reasoning. According to IDS, the critical role of trust management is to understand the impact of the malicious data to detect any intrusion in the system. The previous studies (...)
  7. Understanding via exemplification in XAI: how explaining image classification benefits from exemplars.Sara Mann - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) systems that perform image classification tasks are being used to great success in many application contexts. However, many of these systems are opaque, even to experts. This lack of understanding can be problematic for ethical, legal, or practical reasons. The research field Explainable AI (XAI) has therefore developed several approaches to explain image classifiers. The hope is to bring about understanding, e.g., regarding why certain images are classified as belonging to a particular target class. Most of these (...)
  8. What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research.Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  9. The Pragmatic Turn in Explainable Artificial Intelligence (XAI).Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will (...)
  10. The quest of parsimonious XAI: A human-agent architecture for explanation formulation.Yazan Mualla, Igor Tchappi, Timotheus Kampik, Amro Najjar, Davide Calvaresi, Abdeljalil Abbas-Turki, Stéphane Galland & Christophe Nicolle - 2022 - Artificial Intelligence 302 (C):103573.
  11. Defining Explanation and Explanatory Depth in XAI.Stefan Buijsman - 2022 - Minds and Machines 32 (3):563-584.
    Explainable artificial intelligence (XAI) aims to help people understand black box algorithms, particularly their outputs. But what are these explanations and when is one explanation better than another? The manipulationist definition of explanation from the philosophy of science offers good answers to these questions, holding that an explanation consists of a generalization that shows what happens in counterfactual cases. Furthermore, when it comes to explanatory depth this account holds that a generalization that has more abstract variables, is broader in (...)
  12. Toward personalized XAI: A case study in intelligent tutoring systems.Cristina Conati, Oswald Barral, Vanessa Putnam & Lea Rieger - 2021 - Artificial Intelligence 298 (C):103503.
  13. SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI.Emily Sullivan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    Explainable AI (xAI) methods are important for establishing trust in using black-box models. However, recent criticism has mounted against current xAI methods that they disagree, are necessarily false, and can be manipulated, which has started to undermine the deployment of black-box models. Rudin (2019) goes so far as to say that we should stop using black-box models altogether in high-stakes cases because xAI explanations ‘must be wrong’. However, strict fidelity to the truth is historically not a desideratum in science. Idealizations (...)
  14. Explaining black-box classifiers using post-hoc explanations-by-example: The effect of explanations and error-rates in XAI user studies.Eoin M. Kenny, Courtney Ford, Molly Quinn & Mark T. Keane - 2021 - Artificial Intelligence 294 (C):103459.
  15. A Means-End Account of Explainable Artificial Intelligence.Oliver Buchholz - 2023 - Synthese 202 (33):1-23.
    Explainable artificial intelligence (XAI) seeks to produce explanations for those machine learning methods which are deemed opaque. However, there is considerable disagreement about what this means and how to achieve it. Authors disagree on what should be explained (topic), to whom something should be explained (stakeholder), how something should be explained (instrument), and why something should be explained (goal). In this paper, I employ insights from means-end epistemology to structure the field. According to means-end epistemology, different means ought to be (...)
  16. Local explanations via necessity and sufficiency: unifying theory and practice.David Watson, Limor Gultchin, Ankur Taly & Luciano Floridi - 2022 - Minds and Machines 32:185-218.
    Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence (XAI), a fast-growing research area that is so far lacking in firm theoretical foundations. Building on work in logic, probability, and causality, we establish the central role of necessity and sufficiency in XAI, unifying seemingly disparate methods in a single formal framework. We provide a sound and complete algorithm for computing explanatory factors (...)
  17. Neobjašnjiv objašnjiv AI [Unexplainable Explainable AI].Hyeongjoo Kim - 2023 - Synthesis Philosophica 38 (2):275-295.
    This paper critically investigates the explainable artificial intelligence (XAI) project. I analyze the word “explain” in XAI and the theory of explanation and identify the discrepancy between the meaning of the explanation claimed to be necessary and that which is actually presented. After summarizing the history of AI related to explainability, I argue that American philosophy in the 1900s operated in the background of said history. I then extract the meaning of explanation in view of XAI, to elucidate the relationship (...)
  18. Backtracking Counterfactuals.Julius von Kügelgen, Abdirisak Mohamed & Sander Beckers - forthcoming - Proceedings of the 2nd Conference on Causal Learning and Reasoning.
    Counterfactual reasoning -- envisioning hypothetical scenarios, or possible worlds, where some circumstances are different from what (f)actually occurred (counter-to-fact) -- is ubiquitous in human cognition. Conventionally, counterfactually-altered circumstances have been treated as "small miracles" that locally violate the laws of nature while sharing the same initial conditions. In Pearl's structural causal model (SCM) framework this is made mathematically rigorous via interventions that modify the causal laws while the values of exogenous variables are shared. In recent years, however, this purely interventionist (...)
  19. Explanatory pragmatism: a context-sensitive framework for explainable medical AI.Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
    Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we (...)
  20. ANNs and Unifying Explanations: Reply to Erasmus, Brunet, and Fisher.Yunus Prasetya - 2022 - Philosophy and Technology 35 (2):1-9.
    In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) does not make them (...)
  21. Understanding, Idealization, and Explainable AI.Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  22. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias can be handled (...)
  23. Explaining Machine Learning Decisions.John Zerilli - 2022 - Philosophy of Science 89 (1):1-19.
    The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a (...)
  24. Creating meaningful work in the age of AI: explainable AI, explainability, and why it matters to organizational designers.Kristin Wulff & Hanne Finnestrand - forthcoming - AI and Society:1-14.
    In this paper, we contribute to research on enterprise artificial intelligence (AI), specifically to organizations improving the customer experiences and their internal processes through using the type of AI called machine learning (ML). Many organizations are struggling to get enough value from their AI efforts, and part of this is related to the area of explainability. The need for explainability is especially high in what is called black-box ML models, where decisions are made without anyone understanding how an AI reached (...)
  25. Explainable machine learning practices: opening another black box for reliable medical AI.Emanuele Ratti & Mark Graves - 2022 - AI and Ethics:1-14.
    In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency—at the algorithmic level—of many of these tools; and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools to be (...)
  26. Black is the new orange: how to determine AI liability.Paulo Henrique Padovan, Clarice Marinho Martins & Chris Reed - 2023 - Artificial Intelligence and Law 31 (1):133-167.
    Autonomous artificial intelligence (AI) systems can lead to unpredictable behavior causing loss or damage to individuals. Intricate questions must be resolved to establish how courts determine liability. Until recently, understanding the inner workings of “black boxes” has been exceedingly difficult; however, the use of Explainable Artificial Intelligence (XAI) would help simplify the complex problems that can occur with autonomous AI systems. In this context, this article seeks to provide technical explanations that can be given by XAI, and to show how (...)
  27. Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies.Uwe Peters & Mary Carman - forthcoming - IEEE Intelligent Systems.
    Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (N = (...)
  28. The Pragmatic Turn in Explainable Artificial Intelligence.Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will (...)
  29. On Explainable AI and Abductive Inference.Kyrylo Medianovskyi & Ahti-Veikko Pietarinen - 2022 - Philosophies 7 (2):35.
    Modern explainable AI methods remain far from providing human-like answers to ‘why’ questions, let alone those that satisfactorily agree with human-level understanding. Instead, the results that such methods provide boil down to sets of causal attributions. Currently, the choice of accepted attributions rests largely, if not solely, on the explainee’s understanding of the quality of explanations. The paper argues that such decisions may be transferred from a human to an XAI agent, provided that its machine-learning algorithms perform genuinely abductive inferences. (...)
  30. Explainable Artificial Intelligence in Data Science.Joaquín Borrego-Díaz & Juan Galán-Páez - 2022 - Minds and Machines 32 (3):485-531.
    A widespread need to explain the behavior and outcomes of AI-based systems has emerged due to their ubiquitous presence, providing renewed momentum to the relatively new research area of eXplainable AI (XAI). Nowadays, the importance of XAI lies in the fact that the increasing control transference to this kind of system for decision making -or, at least, its use for assisting executive stakeholders- already affects many sensitive realms (as in Politics, Social Sciences, or Law). The decision-making power handover to (...)
  31. Subjectivity of Explainable Artificial Intelligence.Александр Николаевич Райков - 2022 - Russian Journal of Philosophical Sciences 65 (1):72-90.
    The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more complicated, going (...)
  32. From Responsibility to Reason-Giving Explainable Artificial Intelligence.Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificial intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  33. Explaining Go: Challenges in Achieving Explainability in AI Go Programs.Zack Garrett - 2023 - Journal of Go Studies 17 (2):29-60.
    There has been a push in recent years to provide better explanations for how AIs make their decisions. Most of this push has come from the ethical concerns that go hand in hand with AIs making decisions that affect humans. Outside of the strictly ethical concerns that have prompted the study of explainable AIs (XAIs), there has been research interest in the mere possibility of creating XAIs in various domains. In general, the more accurate we make our models the harder (...)
  34. Is explainable artificial intelligence intrinsically valuable?Nathan Colaner - 2022 - AI and Society 37 (1):231-238.
    There is general consensus that explainable artificial intelligence is valuable, but there is significant divergence when we try to articulate why, exactly, it is desirable. This question must be distinguished from two other kinds of questions asked in the XAI literature that are sometimes asked and addressed simultaneously. The first and most obvious is the ‘how’ question—some version of: ‘how do we develop technical strategies to achieve XAI?’ Another question is specifying what kind of explanation is worth having in the (...)
  35. Certifiable AI.Jobst Landgrebe - 2022 - Applied Sciences 12 (3):1050.
    Implicit stochastic models, including both ‘deep neural networks’ (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged (...)
  36. What is Interpretability?Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833–862.
    We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: Are networks explainable, and if so, what does it mean to explain the output of a network? And what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. In (...)
  37. Explainable AI and Causal Understanding: Counterfactual Approaches Considered.Sam Baron - 2023 - Minds and Machines 33 (2):347-377.
    The counterfactual approach to explainable AI (XAI) seeks to provide understanding of AI systems through the provision of counterfactual explanations. In a recent systematic review, Chou et al. (Inform Fus 81:59–83, 2022) argue that the counterfactual approach does not clearly provide causal understanding. They diagnose the problem in terms of the underlying framework within which the counterfactual approach has been developed. To date, the counterfactual approach has not been developed in concert with the approach for specifying causes developed by Pearl (...)
  38. Cultural Bias in Explainable AI Research.Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks these variations and that (...)
  39. Integrating Artificial Intelligence in Scientific Practice: Explicable AI as an Interface.Emanuele Ratti - 2022 - Philosophy and Technology 35 (3):1-5.
    A recent article by Herzog provides a much-needed integration of ethical and epistemological arguments in favor of explicable AI in medicine. In this short piece, I suggest a way in which its epistemological intuition of XAI as “explanatory interface” can be further developed to delineate the relation between AI tools and scientific research.
  40. Kognitive Optimierung durch KI? [Cognitive Optimization through AI?].Sabine Ammon - 2023 - Philosophisches Jahrbuch 130 (2):92-107.
    Recent developments in artificial intelligence (AI) promise cognitive optimization in many areas of our lives, ranging from automated decision-making to superintelligence. In a predominant narrative, the black-box of machine learning systems is identified as one of the biggest obstacles from an epistemic point of view. The problem is expected to be solved by algorithmic counteractions emerging from the field of explainable artificial intelligence (XAI). However, deeper questions about a meaningful cognitive division of labor between AI algorithms and human actors who (...)
  41. Allure of Simplicity.Thomas Grote - 2023 - Philosophy of Medicine 4 (1).
    This paper develops an account of the opacity problem in medical machine learning (ML). Guided by pragmatist assumptions, I argue that opacity in ML models is problematic insofar as it potentially undermines the achievement of two key purposes: ensuring generalizability and optimizing clinician–machine decision-making. Three opacity amelioration strategies are examined, with explainable artificial intelligence (XAI) as the predominant approach, challenged by two revisionary strategies in the form of reliabilism and the interpretability by design. Comparing the three strategies, I argue that (...)
  42. The End of Vagueness: Technological Epistemicism, Surveillance Capitalism, and Explainable Artificial Intelligence.Alison Duncan Kerr & Kevin Scharp - 2022 - Minds and Machines 32 (3):585-611.
    Artificial Intelligence (AI) pervades humanity in 2022, and it is notoriously difficult to understand how certain aspects of it work. There is a movement—Explainable Artificial Intelligence (XAI)—to develop new methods for explaining the behaviours of AI systems. We aim to highlight one important philosophical significance of XAI—it has a role to play in the elimination of vagueness. To show this, consider that the use of AI in what has been labeled surveillance capitalism has resulted in humans quickly gaining the capability (...)
  43. Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach.Tae Wan Kim & Bryan R. Routledge - 2022 - Business Ethics Quarterly 32 (1):75-102.
    Businesses increasingly rely on algorithms that are data-trained sets of decision rules (i.e., the output of the processes often called “machine learning”) and implement decisions with little or no human intermediation. In this article, we provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a “right to explanation.” It is often said that, in the digital era, informed consent is dead. This negative view originates from a rigid understanding that presumes informed consent is a static and (...)
    7 citations
  44.
    Explanation–Question–Response dialogue: An argumentative tool for explainable AI.Federico Castagna, Peter McBurney & Simon Parsons - forthcoming - Argument and Computation:1-23.
    Advancements and deployments of AI-based systems, especially Deep Learning-driven generative language models, have accomplished impressive results over the past few years. Nevertheless, these remarkable achievements are intertwined with a related fear that such technologies might lead to a general relinquishing of our lives' control to AIs. This concern, which also motivates the increasing interest in the eXplainable Artificial Intelligence (XAI) research field, is mostly caused by the opacity of the output of deep learning systems and the way that it is (...)
  45.
    Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice.David S. Watson, Limor Gultchin, Ankur Taly & Luciano Floridi - 2022 - Minds and Machines 32 (1):185-218.
    Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence, a fast-growing research area that is so far lacking in firm theoretical foundations. In this article, an expanded version of a paper originally presented at the 37th Conference on Uncertainty in Artificial Intelligence, we attempt to fill this gap. Building on work in logic, probability, and causality, we establish the central role of (...)
    3 citations
  46.
    Is Explainable AI Responsible AI?Isaac Taylor - forthcoming - AI and Society.
    When artificial intelligence (AI) is used to make high-stakes decisions, some worry that this will create a morally troubling responsibility gap—that is, a situation in which nobody is morally responsible for the actions and outcomes that result. Since the responsibility gap might be thought to result from individuals lacking knowledge of the future behavior of AI systems, it can be and has been suggested that deploying explainable artificial intelligence (XAI) techniques will help us to avoid it. These techniques provide humans (...)
  47.
    Explainable AI in the military domain.Nathan Gabriel Wood - 2024 - Ethics and Information Technology 26 (2):1-13.
    Artificial intelligence (AI) has become nearly ubiquitous in modern society, from components of mobile applications to medical support systems, and everything in between. In societally impactful systems imbued with AI, there has been increasing concern related to opaque AI, that is, artificial intelligence where it is unclear how or why certain decisions are reached. This has led to a recent boom in research on “explainable AI” (XAI), or approaches to making AI more explainable and understandable to human users. In the (...)
  48. AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a (...)
    3 citations
  49. Interpretability and Unification.Adrian Erasmus & Tyler D. P. Brunet - 2022 - Philosophy and Technology 35 (2):1-6.
    In a recent reply to our article, “What is Interpretability?,” Prasetya argues against our position that artificial neural networks are explainable. It is claimed that our indefeasibility thesis—that adding complexity to an explanation of a phenomenon does not make the phenomenon any less explainable—is false. More precisely, Prasetya argues that unificationist explanations are defeasible to increasing complexity, and thus, we may not be able to provide such explanations of highly complex AI models. The reply highlights an important lacuna in our (...)
    2 citations
  50.
    Decolonial Model of Environmental Management and Conservation: Insights from Indigenous-led Grizzly Bear Stewardship in the Great Bear Rainforest.J. Walkus, C. N. Service, D. Neasloss, M. F. Moody, J. E. Moody, W. G. Housty, J. Housty, C. T. Darimont, H. M. Bryan, M. S. Adams & K. A. Artelle - 2021 - Ethics, Policy and Environment 24 (3):283-323.
    ABSTRACT Global biodiversity declines are increasingly recognized as profound ecological and social crises. In areas subject to colonialization, these declines have advanced in lockstep with settler colonialism and imposition of centralized resource management by settler states. Many have suggested that resurgent Indigenous-led governance systems could help arrest these trends while advancing effective and socially just approaches to environmental interactions that benefit people and places alike. However, how dominant management and conservation approaches might be decolonized (i.e., how their underlying colonial structure (...)
1 — 50 / 53