Two Dimensions of Opacity and the Deep Learning Predicament
Minds and Machines 32 (1):43-75 (2022)

Florian J. Boge
Bergische Universität Wuppertal
Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models are successfully used in exploratory contexts, scientists face a whole new challenge in forming the concepts required for understanding underlying mechanisms.
DOI 10.1007/s11023-021-09569-4

Similar books and articles

A Puzzle concerning Compositionality in Machines.Ryan M. Nefdt - 2020 - Minds and Machines 30 (1):47-75.

