Authors
Emily Sullivan
Eindhoven University of Technology
Abstract
Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.
Keywords: Scientific Understanding, Explanation, Epistemic Opacity, Machine Learning Models, How-Possibly Explanation
DOI 10.1093/bjps/axz035