Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence
Philosophy and Technology 34 (2):265-288 (2019)

Authors
Carlos Zednik
Eindhoven University of Technology
Abstract
Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from philosophy of science, this framework is modeled after accounts of explanation in cognitive science. The framework distinguishes between the explanation-seeking questions that are likely to be asked by different stakeholders, and specifies the general ways in which these questions should be answered so as to allow these stakeholders to perform their roles in the Machine Learning ecosystem. By applying the normative framework to recently developed techniques such as input heatmapping, feature-detector visualization, and diagnostic classification, it is possible to determine whether and to what extent techniques from Explainable Artificial Intelligence can be used to render opaque computing systems transparent and, thus, whether they can be used to solve the Black Box Problem.
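
For readers unfamiliar with the techniques the abstract names, the following is a minimal sketch of gradient-based input heatmapping (saliency mapping), one common analytic technique of the kind the paper evaluates. The toy classifier and random input are hypothetical placeholders, assuming PyTorch; this is not code from the paper itself.

```python
# Minimal sketch of gradient-based input heatmapping (saliency mapping).
# The tiny model and random input below are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()  # stand-in for an opaque, trained ML system

x = torch.rand(1, 4, requires_grad=True)   # one input example
score = model(x)[0].max()                  # logit of the predicted class

# Gradient of that score w.r.t. the input: large magnitudes flag the
# input features that most influence the decision.
score.backward()
heatmap = x.grad.abs().squeeze()
print(heatmap)  # per-feature relevance scores (a "heatmap" over the input)
```

On the paper's framing, such heatmaps speak to why-questions about particular outputs; whether they satisfy a given stakeholder's explanatory requirements is precisely what the normative framework is meant to assess.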
Keywords Machine Learning, Explainable Artificial Intelligence, Epistemic Opacity, Marr's Levels, Deep Learning, Scientific Explanation
Reprint years 2021
DOI 10.1007/s13347-019-00382-7

Citations of this work

What is Interpretability? Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833–862.

Similar books and articles

Understanding From Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
Socially Useful Artificial Intelligence. Richard Ennals - 1987 - AI and Society 1 (1):5-15.
Philosophy and Machine Learning. Paul Thagard - 1990 - Canadian Journal of Philosophy 20 (2):261-276.
Mechanisms in Cognitive Science. Carlos Zednik - 2017 - In Phyllis McKay Illari & Stuart Glennan (eds.), The Routledge Handbook of Mechanisms and Mechanical Philosophy. London: Routledge. pp. 389-400.
Ethical Machines? Ariela Tubert - 2018 - Seattle University Law Review 41 (4).
