  • Tasks in cognitive science: mechanistic and nonmechanistic perspectives. Samuel D. Taylor - forthcoming - Phenomenology and the Cognitive Sciences:1-27.
    A tension exists between those who do—e.g. Meyer (The British Journal for the Philosophy of Science 71:959–985, 2020) and Chemero (2011)—and those who do not—e.g. Kaplan and Craver (Philosophy of Science 78:601–627, 2011) and Piccinini and Craver (Synthese 183:283–311, 2011)—afford nonmechanistic explanations a role in cognitive science. Here, I argue that one’s perspective on this matter will cohere with one’s interpretation of the tasks of cognitive science; that is, of the actions for which cognitive scientists are (...)
  • The predictive reframing of machine learning applications: good predictions and bad measurements. Alexander Martin Mussgnug - 2022 - European Journal for Philosophy of Science 12 (3):1-21.
    Supervised machine learning has found its way into ever more areas of scientific inquiry, where the outcomes of supervised machine learning applications are almost universally classified as predictions. I argue that what researchers often present as a mere terminological particularity of the field involves the consequential transformation of tasks as diverse as classification, measurement, or image segmentation into prediction problems. Focusing on the case of machine-learning-enabled poverty prediction, I explore how reframing a measurement problem as a prediction task alters (...)
  • The Explanatory Role of Machine Learning in Molecular Biology. Fridolin Gross - forthcoming - Erkenntnis:1-21.
    The philosophical debate around the impact of machine learning in science is often framed in terms of a choice between AI and classical methods as mutually exclusive alternatives involving difficult epistemological trade-offs. A common worry regarding machine learning methods specifically is that they lead to opaque models that make predictions but do not lead to explanation or understanding. Focusing on the field of molecular biology, I argue that in practice machine learning is often used with explanatory aims. More specifically, I (...)
  • Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence. Hajo Greif - 2022 - Minds and Machines 32 (1):111-133.
    The problem of epistemic opacity in Artificial Intelligence is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degree of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms, but rather as a measure of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I first (...)
  • Philosophers Ought to Develop, Theorize About, and Use Philosophically Relevant AI. Graham Clay & Caleb Ontiveros - 2023 - Metaphilosophy 54 (4):463-479.
    The transformative power of artificial intelligence (AI) is coming to philosophy—the only question is the degree to which philosophers will harness it. In this paper, we argue that the application of AI tools to philosophy could have an impact on the field comparable to the advent of writing, and that it is likely that philosophical progress will significantly increase as a consequence of AI. The role of philosophers in this story is not merely to use AI but also to help (...)
  • Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2021 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  • Using Computer Simulations for Hypothesis-Testing and Prediction: Epistemological Strategies. Tan Nguyen - manuscript
    This paper explores the epistemological challenges in using computer simulations for two distinct goals: explanation via hypothesis-testing and prediction. It argues that each goal requires different strategies for justifying inferences drawn from simulation results due to different practical and conceptual constraints. The paper identifies unique and shared strategies researchers employ to increase confidence in their inferences for each goal. For explanation via hypothesis-testing, researchers need to address the underdetermination, interpretability, and attribution challenges. In prediction, the emphasis is on the model's (...)