The Pragmatic Turn in Explainable Artificial Intelligence (XAI)

Minds and Machines (3):1-19 (2019)

Author
Andrés Páez
University of the Andes
Abstract
In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a prior grasp of what it means to say that an agent understands a model or a decision, explanatory strategies will lack a well-defined goal. Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it. After an examination of the different types of understanding discussed in the philosophical and psychological literature, I conclude that interpretative or approximation models not only provide the best way to achieve objectual understanding of a machine learning model, but are also a necessary condition for achieving post hoc interpretability. This conclusion is based in part on the shortcomings of the purely functionalist approach to post hoc interpretability that predominates in much of the recent literature.
Keywords: Explainable Artificial Intelligence · Understanding · Explanation · Model Transparency · Post hoc interpretability · Machine learning · Black box models
DOI 10.1007/s11023-019-09502-w
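To make concrete what the abstract calls an interpretative or approximation model, the following is a minimal sketch (not the author's own method) of a post hoc global surrogate: a shallow decision tree fitted to the predictions of a black-box classifier so that the black box's behaviour can be inspected in human-readable form. The dataset, the scikit-learn estimators, and the depth limit are illustrative assumptions.

# Illustrative sketch of a post hoc "approximation model" (global surrogate):
# a shallow decision tree is fitted to the predictions of a black-box model
# so that a stakeholder can inspect an interpretable proxy for its behaviour.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": an ensemble whose internal logic is opaque to the user.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# The approximation model is trained on the black box's outputs, not the true
# labels, so it mimics the model rather than the data-generating process.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity to black box: {fidelity:.2f}")

# A human-readable rendering of the surrogate's decision rules.
print(export_text(surrogate, feature_names=list(load_breast_cancer().feature_names)))

If the surrogate's fidelity is high, its decision rules offer one candidate route to the objectual understanding the paper discusses; if it is low, the approximation fails as an interpretative device.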

Similar books and articles

Explaining Explanations in AI. Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
Ethical Machines? Ariela Tubert - 2018 - Seattle University Law Review 41 (4).
Philosophy and Machine Learning. Paul Thagard - 1990 - Canadian Journal of Philosophy 20 (2):261-276.
Action Models and Their Induction. Michal Čertický - 2013 - Organon F: Medzinárodný Časopis Pre Analytickú Filozofiu 20 (2):206-215.