Minds and Machines 32 (1), 2022. Special issue: Machine Learning: Prediction Without Explanation?

  1. Simple Models in Complex Worlds: Occam’s Razor and Statistical Learning Theory. Falco J. Bargagli Stoffi, Gustavo Cevolani & Giorgio Gnecco - 2022 - Minds and Machines 32 (1):13-42.
    The idea that “simplicity is a sign of truth”, and the related “Occam’s razor” principle, stating that, all other things being equal, simpler models should be preferred to more complex ones, have been long discussed in philosophy and science. We explore these ideas in the context of supervised machine learning, namely the branch of artificial intelligence that studies algorithms which balance simplicity and accuracy in order to effectively learn about the features of the underlying domain. Focusing on statistical learning theory, (...)
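The trade-off Bargagli Stoffi et al. discuss can be made concrete with a small numerical sketch: fit models of increasing complexity and watch held-out error fall, then rise again. Everything below (the data, the polynomial model family, the split) is an illustrative assumption, not material from the paper.

```python
# Minimal sketch of the simplicity-accuracy trade-off: compare
# polynomials of increasing degree by held-out error. Illustrative
# assumptions only; not taken from Bargagli Stoffi et al.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = np.sin(3 * x) + rng.normal(0, 0.2, 60)   # noisy ground truth

x_tr, y_tr = x[:40], y[:40]                  # training split
x_va, y_va = x[40:], y[40:]                  # held-out split

for degree in (1, 3, 9, 15):
    coeffs = np.polyfit(x_tr, y_tr, degree)  # least-squares polynomial fit
    val_err = np.mean((np.polyval(coeffs, x_va) - y_va) ** 2)
    print(f"degree {degree:2d}: held-out MSE = {val_err:.3f}")
# Held-out error typically falls and then rises as complexity grows,
# which is the pattern Occam-style model selection exploits.
```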
  2. Minds and Machines Special Issue: Machine Learning: Prediction Without Explanation? F. J. Boge, P. Grünke & R. Hillerbrand - 2022 - Minds and Machines 32 (1):1-9.
  3. Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2022 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  4. Correction to: (What) Can Deep Learning Contribute to Theoretical Linguistics? Gabe Dupre - 2022 - Minds and Machines 32 (1):11.
  5. The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples. Timo Freiesleben - 2022 - Minds and Machines 32 (1):77-109.
    The same method that creates adversarial examples (AEs) to fool image-classifiers can be used to generate counterfactual explanations (CEs) that explain algorithmic decisions. This observation has led researchers to consider CEs as AEs by another name. We argue that the relationship to the true label and the tolerance with respect to proximity are two properties that formally distinguish CEs and AEs. Based on these arguments, we introduce CEs, AEs, and related concepts mathematically in a common framework. Furthermore, we show connections between current (...)
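Freiesleben's point that CEs and AEs differ in their tolerance with respect to proximity (and in their relation to the true label) can be illustrated with a toy optimization: both are perturbations of an input that push a classifier toward a target class, differing in how strongly they are penalized for straying from the original. The linear model, loss, and weights below are made-up assumptions, not the paper's formalism, and the sketch only exhibits the proximity dimension.

```python
# Illustrative sketch: AEs and CEs generated by the same optimization,
#   min over x' of  loss(f(x'), target) + lam * ||x' - x||^2,
# differing only in the proximity weight lam. Toy model, not Freiesleben's.
import numpy as np

w = np.array([1.5, -2.0])                 # toy linear classifier weights
b = 0.3

def predict(x):                           # P(class 1 | x)
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def perturb(x, target, lam, steps=500, lr=0.05):
    """Gradient descent on the input toward `target`, penalized for distance."""
    xp = x.copy()
    for _ in range(steps):
        p = predict(xp)
        grad = (p - target) * w + 2 * lam * (xp - x)   # CE loss + proximity
        xp = xp - lr * grad
    return xp

x = np.array([1.0, 1.0])                    # scored below 0.5 by the model
ae_like = perturb(x, target=1.0, lam=2.0)   # tight proximity: small AE-style shift
ce_like = perturb(x, target=1.0, lam=0.1)   # loose proximity: larger CE-style shift
print(predict(x), predict(ae_like), predict(ce_like))
```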
  6. Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence. Hajo Greif - 2022 - Minds and Machines 32 (1):111-133.
    The problem of epistemic opacity in Artificial Intelligence is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degrees of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms but of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I first (...)
  7. Explanations in AI as Claims of Tacit Knowledge. Nardi Lam - 2022 - Minds and Machines 32 (1):135-158.
    As AI systems become increasingly complex, it may become unclear, even to the designer of a system, why exactly a system does what it does. This leads to a lack of trust in AI systems. To solve this, the field of explainable AI has been working on ways to produce explanations of these systems’ behaviors. Many methods in explainable AI, such as LIME, offer only a statistical argument for the validity of their explanations. However, some methods instead study the internal (...)
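To see what a "statistical argument" of the LIME variety looks like, here is a minimal local-surrogate sketch: sample points near the instance, query the black box, and fit a proximity-weighted linear model whose coefficients serve as the explanation. This is a simplified rendition of the general idea, not the lime library's API, and the black-box function is an assumed stand-in.

```python
# Minimal local-surrogate sketch in the spirit of LIME (simplified;
# not the lime package's API). The black-box model is a made-up stand-in.
import numpy as np

def black_box(X):
    # Assumed opaque model: a nonlinear function of two features.
    return np.tanh(2 * X[:, 0]) - X[:, 1] ** 2

rng = np.random.default_rng(1)
x0 = np.array([0.5, -0.2])                         # instance to explain

Z = x0 + rng.normal(0, 0.3, size=(500, 2))         # perturbed neighbors of x0
y = black_box(Z)                                   # query the black box
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.1)   # proximity kernel weights

# Weighted least squares: fit a local linear surrogate around x0; its
# coefficients are the "explanation" of the black box near x0.
A = np.column_stack([Z, np.ones(len(Z))])
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
print("local attributions for the two features:", coef[:2])
```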
  8. The Automated Laplacean Demon: How ML Challenges Our Views on Prediction and Explanation. Sanja Srećković, Andrea Berber & Nenad Filipović - 2022 - Minds and Machines 32 (1):159-183.
    Certain characteristics make machine learning a powerful tool for processing large amounts of data, yet also particularly unsuitable for explanatory purposes. There are worries that its increasing use in science may sideline the explanatory goals of research. We analyze the key characteristics of ML that might have implications for future directions in scientific research: epistemic opacity and ‘theory-agnostic’ modeling. These characteristics are further analyzed in a comparison of ML with traditional statistical methods, in order to demonstrate what (...)
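The contrast between theory-driven and "theory-agnostic" modeling can be glossed with a toy example: a linear regression commits to a functional form whose two estimated parameters are directly readable, while a nearest-neighbor learner fits no explicit form and leaves nothing comparably compact to inspect. The data and models below are illustrative assumptions, not the paper's examples.

```python
# Toy contrast: a theory-driven statistical model (a specified linear
# form with readable coefficients) versus a theory-agnostic learner
# (k-nearest neighbors, which fits no explicit form). Made-up data.
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 4, 200)
y = 1.0 + 0.5 * x + rng.normal(0, 0.2, 200)

# Theory-driven: assume y = a + b*x and estimate two parameters,
# each of which carries an interpretable meaning.
b, a = np.polyfit(x, y, 1)
print(f"linear model: intercept={a:.2f}, slope={b:.2f}")

# Theory-agnostic: predict by averaging the k nearest training points.
# Often accurate, but there is no small set of parameters to read off.
def knn_predict(xq, k=10):
    idx = np.argsort(np.abs(x - xq))[:k]
    return y[idx].mean()

print(f"k-NN prediction at x=2.0: {knn_predict(2.0):.2f}")
```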
  9. Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice. David S. Watson, Limor Gultchin, Ankur Taly & Luciano Floridi - 2022 - Minds and Machines 32 (1):185-218.
    Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence, a fast-growing research area that is so far lacking in firm theoretical foundations. In this article, an expanded version of a paper originally presented at the 37th Conference on Uncertainty in Artificial Intelligence, we attempt to fill this gap. Building on work in logic, probability, and causality, we establish the central role of (...)
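As a loose gloss on the probabilistic reading of these notions (not Watson et al.'s actual algorithm), one can estimate, for a binary feature of a classifier, how often intervening to switch the feature on flips the output (a sufficiency-style quantity) and how often switching it off removes the output (a necessity-style quantity). The model and sampling scheme below are assumptions for illustration.

```python
# Loose illustration of necessity/sufficiency for a model prediction:
# estimate how often intervening on a binary feature switches the
# output, over sampled contexts. Not Watson et al.'s algorithm.
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    # Assumed classifier: fires iff feature 0 is on and feature 1 is off.
    return int(x[0] == 1 and x[1] == 0)

X = rng.integers(0, 2, size=(2000, 3))        # sampled binary contexts

def do(x, i, v):                              # intervention: set x[i] to v
    x = x.copy()
    x[i] = v
    return x

# Sufficiency-style: among contexts where the output is 0, how often
# does forcing feature 0 on flip the output to 1?
off = [x for x in X if model(x) == 0]
suff = np.mean([model(do(x, 0, 1)) for x in off])

# Necessity-style: among contexts where the output is 1, how often
# does forcing feature 0 off flip the output to 0?
on = [x for x in X if model(x) == 1]
nec = np.mean([1 - model(do(x, 0, 0)) for x in on])
print(f"sufficiency ~ {suff:.2f}, necessity ~ {nec:.2f}")
```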
  10. Scientific Exploration and Explainable Artificial Intelligence. Carlos Zednik & Hannes Boelsen - 2022 - Minds and Machines 32 (1):219-239.
    Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future (...)
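One post-hoc technique of the sort Zednik and Boelsen have in mind is permutation feature importance: shuffle one input column at a time and measure how much the model's error grows, flagging the features the model actually relies on and thereby suggesting where exploratory attention might go. The model and data below are made-up stand-ins, not the paper's case studies.

```python
# Sketch of permutation feature importance, one post-hoc XAI technique.
# The "opaque" model and the data are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))
y = 2 * X[:, 0] - X[:, 2] + rng.normal(0, 0.1, 300)   # features 1 and 3 inert

def opaque_model(X):
    # Stand-in for a trained black box (here, simply the true function).
    return 2 * X[:, 0] - X[:, 2]

base_err = np.mean((opaque_model(X) - y) ** 2)
for j in range(4):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])      # break feature j's link to y
    err = np.mean((opaque_model(Xp) - y) ** 2)
    print(f"feature {j}: importance = {err - base_err:.3f}")
# Large increases mark features the model relies on; near-zero marks
# inert features, the kind of signal that can guide further exploration.
```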