Understanding, Idealization, and Explainable AI

Episteme 19 (4):534-560 (2022)

Abstract

Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. I argue for a unified account of these key concepts that treats the concept of understanding as fundamental. This allows resources from the philosophy of science and the epistemology of understanding to help guide opacity alleviation efforts. A first significant benefit of this understanding account is that it defuses one of the primary, in-principle objections to post hoc explainable AI (XAI) methods. This “rationalization objection” argues that XAI methods provide mere rationalizations rather than genuine explanations. This is because XAI methods involve using a separate “explanation” system to approximate the original black box system. These explanation systems function in a completely different way than the original system, yet XAI methods make inferences about the original system based on the behavior of the explanation system. I argue that, if we conceive of XAI methods as idealized scientific models, this rationalization worry is dissolved. Idealized scientific models misrepresent their target phenomena, yet are capable of providing significant and genuine understanding of their targets.
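To make the setup behind the rationalization objection concrete: a common post hoc XAI technique trains a simple, interpretable "surrogate" model to mimic the predictions of the opaque system, then reads explanations off the surrogate. Below is a minimal, hypothetical Python sketch of such a global surrogate method. The paper itself contains no code; the choice of a random forest as the black box and a shallow decision tree as the explanation system is purely illustrative.

# Hypothetical sketch of a post hoc "global surrogate" XAI method.
# All model choices here are illustrative assumptions, not the paper's own method.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for a real decision-making task.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": accurate but opaque.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# The "explanation" system: an interpretable model fit to the black box's
# *predictions* rather than the true labels, so it approximates the black
# box's behavior while functioning in a completely different way internally.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on held-out
# data. High fidelity is what licenses inferences about the original system.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate fidelity to black box: {fidelity:.2%}")

On the paper's understanding account, such a surrogate is read as an idealized model of the black box: it misrepresents how the original system works, yet its high fidelity to the system's behavior can still ground genuine understanding.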


Links

PhilArchive

Similar books and articles

Explainable AI (XAI).Rami Al-Dahdooh, Ahmad Marouf, Mahmoud Jamal Abu Ghali, Ali Osama Mahdi, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2025 - International Journal of Academic Information Systems Research (IJAISR) 9 (1):65-70.
SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI.Emily Sullivan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
Transparency and Interpretability in Cloud-based Machine Learning with Explainable AI.V. Talati Dhruvitkumar - 2024 - International Journal of Multidisciplinary Research in Science, Engineering and Technology 7 (7):11823-11831.
Explainable Artificial Intelligence (XAI): Enhancing Transparency and Trust in Machine Learning Models.Prasad Pasam Thulasiram - 2025 - International Journal for Innovative Engineering and Management Research 14 (1):204-213.

Analytics

Added to PP
2022-11-04

Downloads
457 (#71,238)

6 months
72 (#95,275)


Author's Profile

Will Fleisher
Georgetown University
