FAT* 2019 Proceedings 1 (forthcoming)

Authors
Brent Mittelstadt
Oxford University
Abstract
Recent work on interpretability in machine learning and AI has focused on building simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it is important to remember Box's maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself" kit for explanations, allowing a practitioner to directly answer "what if" questions or generate contrastive explanations without external assistance. Although this is a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not involve the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
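
To make the "do it yourself" kit idea concrete, the following Python sketch (not from the paper; the synthetic data, feature roles, and model choices are illustrative assumptions) fits a simple local surrogate around one decision of a black-box classifier and then uses the surrogate alone to answer a "what if" question.

    # A minimal sketch of a local surrogate as a "do it yourself" explanation kit.
    # All data and models here are assumed for illustration only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Black-box "complex system": a random forest trained on synthetic data.
    X = rng.normal(size=(1000, 3))
    y = (X[:, 0] - 0.5 * X[:, 1] + 0.1 * X[:, 2] > 0).astype(int)
    black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Decision subject whose outcome we want to explain.
    x0 = np.array([[-0.2, 0.4, 0.1]])

    # Local surrogate: label perturbations of x0 with the black box,
    # weight them by proximity, and fit an interpretable linear model.
    perturbations = x0 + rng.normal(scale=0.5, size=(500, 3))
    labels = black_box.predict(perturbations)
    weights = np.exp(-np.linalg.norm(perturbations - x0, axis=1) ** 2)
    surrogate = LogisticRegression().fit(perturbations, labels, sample_weight=weights)

    # A "what if" question answered directly from the surrogate,
    # without consulting the black box again.
    x_whatif = x0.copy()
    x_whatif[0, 1] -= 0.6  # what if the second feature were lower?
    print("black box decision:", black_box.predict(x0)[0])
    print("surrogate answer to the what-if:", surrogate.predict(x_whatif)[0])
    print("surrogate coefficients (local importance):", surrogate.coef_[0])

Because the surrogate is only a weighted local approximation, Box's maxim applies: its answers are only as trustworthy as its fit in the neighbourhood of the decision being explained.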
Keywords: interpretability, explanations, accountability, philosophy of science, data ethics, machine learning, artificial intelligence, automated decision-making