Synthese 198 (10):9211-9242 (2021)
Abstract

We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of variable granularity and scope. We characterise the conditions under which such a game is almost surely guaranteed to converge on a (conditionally) optimal explanation surface in polynomial time, and highlight obstacles that will tend to prevent the players from advancing beyond certain explanatory thresholds. The game serves a descriptive and a normative function, establishing a conceptual space in which to analyse and compare existing proposals, as well as to design new and improved solutions.
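The three-dimensional Pareto frontier mentioned in the abstract can be illustrated as a simple dominance filter over candidate explanations scored on accuracy, simplicity, and relevance. This is only a minimal sketch of the general idea; the candidate names and scores below are hypothetical placeholders, not taken from the paper:

```python
def dominates(a, b):
    """True if candidate a is at least as good as b on every criterion
    and strictly better on at least one (all criteria: higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(candidates):
    """Keep only the candidates that no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

# Hypothetical (accuracy, simplicity, relevance) scores for candidate explanations
explanations = {
    "full model":       (0.99, 0.10, 0.30),
    "linear surrogate": (0.85, 0.90, 0.70),
    "single rule":      (0.60, 0.95, 0.80),
    "random guess":     (0.50, 0.95, 0.10),
}

frontier = pareto_frontier(list(explanations.values()))
```

Here "random guess" is filtered out because "single rule" weakly dominates it on all three criteria, while the other three candidates survive as genuinely different trade-offs on the frontier.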
Similar books and articles
The explanation game: a formal framework for interpretable machine learning. David S. Watson & Luciano Floridi - 2020 - Synthese 198 (10):1–32.
Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265–288.
A Conceptual Framework Over Contextual Analysis of Concept Learning Within Human-Machine Interplays. Farshad Badie - 2017 - In Emerging Technologies for Education. Cham, Switzerland, pp. 65–74.
Against Interpretability: a Critical Examination of the Interpretability Problem in Machine Learning. Maya Krishnan - 2020 - Philosophy and Technology 33 (3):487–502.
Concept Representation Analysis in the Context of Human-Machine Interactions. Farshad Badie - 2016 - In 14th International Conference on e-Society, pp. 55–61.
What is Interpretability? Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833–862.
Machine learning in tutorials – Universal applicability, underinformed application, and other misconceptions. Andreas Breiter, Juliane Jarke & Hendrik Heuer - 2021 - Big Data and Society 8 (1).
Editors' Introduction: Why Formal Learning Theory Matters for Cognitive Science. Sean Fulop & Nick Chater - 2013 - Topics in Cognitive Science 5 (1):3–12.
Game logic and its applications I. Mamoru Kaneko & Takashi Nagashima - 1996 - Studia Logica 57 (2–3):325–354.
An Evaluation of the Pipeline Framework for Ethical Considerations in Machine Learning Healthcare Applications: The Case of Prediction from Functional Neuroimaging Data. Dawson J. Overton - 2020 - American Journal of Bioethics 20 (11):56–58.
Citations of this work
The ethics of algorithms: key problems and solutions. Andreas Tsamados, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo & Luciano Floridi - 2021 - AI and Society.
AI and its new winter: from myths to realities. Luciano Floridi - 2020 - Philosophy and Technology 33 (1):1–3.
Conceptual challenges for interpretable machine learning. David S. Watson - 2022 - Synthese 200 (2):1–33.
Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice. David S. Watson, Limor Gultchin, Ankur Taly & Luciano Floridi - 2022 - Minds and Machines 32 (1):185–218.
Defining Explanation and Explanatory Depth in XAI. Stefan Buijsman - 2022 - Minds and Machines 32 (3):563–584.
References found in this work
What is Justified Belief? Alvin Goldman - 1979 - In George Pappas (ed.), Justification and Knowledge. Boston: D. Reidel, pp. 1–25.
The ethics of algorithms: mapping the debate. Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2).