Minds and Machines, Volume 32 (2022)

  1.
    Dretske and Informational Closure. Yves Bouchard - 2022 - Minds and Machines 32 (2):311-322.
    Christoph Jäger has argued that Dretske’s information-based account of knowledge is committed to both knowledge and information closure under known entailment. However, in a reply to Jäger, Dretske defended his view on the basis of a discrepancy between the relation of information and the relation of logical implication. This paper shares Jäger’s criticism that Dretske’s externalist notion of information implies closure, but provides an analysis based on different grounds. By means of a distinction between two perspectives, the mathematical perspective and (...)
  2.
    A Unified Model of Ad Hoc Concepts in Conceptual Spaces. Davide Coraci - 2022 - Minds and Machines 32 (2):289-309.
    Ad hoc concepts are highly context-dependent representations humans construct to deal with novel or uncommon situations and to interpret linguistic stimuli in communication. In recent decades, such concepts have been investigated both in experimental cognitive psychology and within pragmatics by proponents of so-called relevance theory. These two research lines have, however, proceeded in parallel, proposing two unconnected strategies to account for the construction and use of ad hoc concepts. The present work explores the relations between these two approaches and (...)
  3.
    Correction to: What Might Machines Mean? Mitchell Green & Jan G. Michel - 2022 - Minds and Machines 32 (2):339-339.
  4.
    What Might Machines Mean? Mitchell Green & Jan G. Michel - 2022 - Minds and Machines 32 (2):323-338.
    This essay addresses the question whether artificial speakers can perform speech acts in the technical sense of that term common in the philosophy of language. We here argue that under certain conditions artificial speakers can perform speech acts so understood. After explaining some of the issues at stake in these questions, we elucidate a relatively uncontroversial way in which machines can communicate, namely through what we call verbal signaling. But verbal signaling is not sufficient for the performance of a speech (...)
  5.
    A Hybrid Theory of Event Memory. David H. Ménager, Dongkyu Choi & Sarah K. Robins - 2022 - Minds and Machines 32 (2):365-394.
    Amongst philosophers, there is ongoing debate about what successful event remembering requires. Causal theorists argue that it requires a causal connection to the past event. Simulation theorists argue, in contrast, that successful remembering requires only production by a reliable memory system. Both views must contend with the fact that people can remember past events they have experienced with varying degrees of accuracy. The debate between them thus concerns not only the account of successful remembering, but how each account explains the (...)
  6.
    Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation. Jakob Mökander, Maria Axente, Federico Casolari & Luciano Floridi - 2022 - Minds and Machines 32 (2):241-268.
    The proposed European Artificial Intelligence Act (AIA) is the first attempt by any major global economy to elaborate a general legal framework for AI. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring (...)
  7.
    Playing Games with AIs: The Limits of GPT-3 and Similar Large Language Models. Adam Sobieszek & Tadeusz Price - 2022 - Minds and Machines 32 (2):341-364.
    This article contributes to the debate around the abilities of large language models such as GPT-3, dealing with, first, how well GPT-3 does in the Turing Test; second, the limits of such models, especially their tendency to generate falsehoods; and third, the social consequences of the problems these models have with truth-telling. We start by formalising the recently proposed notion of reversible questions, which Floridi & Chiriatti propose allow one to ‘identify the nature of the source of their answers’, (...)
  8.
    Strictly Human: Limitations of Autonomous Systems. Sadjad Soltanzadeh - 2022 - Minds and Machines 32 (2):269-288.
    Can autonomous systems replace humans in the performance of their activities? How does the answer to this question inform the design of autonomous systems? The study of technical systems and their features should be preceded by the study of the activities in which they play roles. Each activity can be described by its overall goals, governing norms and the intermediate steps which are taken to achieve the goals and to follow the norms. This paper uses the activity realist approach to (...)
  9.
    Is Your Neural Data Part of Your Mind? Exploring the Conceptual Basis of Mental Privacy. Abel Wajnerman Paz - 2022 - Minds and Machines 32 (2):395-415.
    It has been argued that neural data (ND) are an especially sensitive kind of personal information that could be used to undermine the control we should have over access to our mental states, and therefore need stronger legal protection than other kinds of personal data. The Morningside Group, a global consortium of interdisciplinary experts advocating for the ethical use of neurotechnology, suggests achieving this by legally treating ND as a body organ. Although the proposal is currently shaping ND-related policies, it (...)
  10.
    Simple Models in Complex Worlds: Occam’s Razor and Statistical Learning Theory. Falco J. Bargagli Stoffi, Gustavo Cevolani & Giorgio Gnecco - 2022 - Minds and Machines 32 (1):13-42.
    The idea that “simplicity is a sign of truth”, and the related “Occam’s razor” principle, stating that, all other things being equal, simpler models should be preferred to more complex ones, have long been discussed in philosophy and science. We explore these ideas in the context of supervised machine learning, namely the branch of artificial intelligence that studies algorithms which balance simplicity and accuracy in order to effectively learn about the features of the underlying domain. Focusing on statistical learning theory, (...)
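The trade-off the preceding abstract describes can be given a minimal sketch (illustrative only, not from the paper; the data and model family are hypothetical): polynomials of increasing degree are fit to noisy samples of a simple function, and the held-out error favours a model that is simple but not too simple.

```python
# Minimal sketch of the simplicity/accuracy trade-off (hypothetical data
# and models): low-degree polynomials underfit, high-degree ones overfit,
# so held-out error is smallest at an intermediate, simpler model.
import numpy as np

rng = np.random.default_rng(0)
truth = lambda x: np.sin(np.pi * x)                  # assumed "true" function
x_train = np.linspace(-1, 1, 40)
y_train = truth(x_train) + rng.normal(0, 0.2, x_train.shape)
x_test = np.linspace(-1, 1, 400)

for degree in (1, 3, 9, 15):
    coeffs = np.polyfit(x_train, y_train, degree)    # least-squares fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - truth(x_test)) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

Training error decreases monotonically with degree, while test error is U-shaped: the razor's preference for the simpler of equally accurate models shows up as better generalization.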
  11.
    Minds and Machines Special Issue: Machine Learning: Prediction Without Explanation? F. J. Boge, P. Grünke & R. Hillerbrand - 2022 - Minds and Machines 32 (1):1-9.
  12.
    Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2022 - Minds and Machines 32 (1):43-75.
    Deep neural networks (DNNs) have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  13.
    Correction to: (What) Can Deep Learning Contribute to Theoretical Linguistics? Gabe Dupre - 2022 - Minds and Machines 32 (1):11-11.
  14.
    The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples. Timo Freiesleben - 2022 - Minds and Machines 32 (1):77-109.
    The same method that creates adversarial examples (AEs) to fool image classifiers can be used to generate counterfactual explanations (CEs) that explain algorithmic decisions. This observation has led researchers to consider CEs as AEs by another name. We argue that the relationship to the true label and the tolerance with respect to proximity are two properties that formally distinguish CEs and AEs. Based on these arguments, we introduce CEs, AEs, and related concepts mathematically in a common framework. Furthermore, we show connections between current (...)
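The observation the preceding abstract starts from can be sketched concretely (not the paper's framework; the toy classifier and the name `perturb` are hypothetical): one and the same gradient search on the input yields an "adversarial example" or a "counterfactual explanation", depending only on how the new label relates to the true label and on how tight the proximity constraint is.

```python
# Minimal sketch (hypothetical classifier and names, not the paper's
# method): the same input-gradient search serves as AE and CE generator.
import numpy as np

w, b = np.array([2.0, -1.5]), 0.3                    # toy logistic classifier
predict = lambda x: 1 / (1 + np.exp(-(x @ w + b)))   # P(class 1 | x)

def perturb(x, target, step=0.05, max_dist=np.inf, iters=200):
    """Move x until the model predicts `target`, staying within `max_dist`."""
    z = x.copy()
    for _ in range(iters):
        p = predict(z)
        if (p >= 0.5) == (target == 1):              # model now says `target`
            break
        grad = (p - target) * w                      # cross-entropy gradient wrt input
        z_next = z - step * grad
        if np.linalg.norm(z_next - x) > max_dist:    # proximity constraint binds
            break
        z = z_next
    return z

x = np.array([0.2, 0.8])                 # model predicts class 0 here
ae = perturb(x, target=1, max_dist=0.3)  # AE reading: change so small the true label shouldn't flip
ce = perturb(x, target=1)                # CE reading: change offered as an actionable recipe
print(predict(x), predict(ae), predict(ce))
```

The code is identical in both calls; only the interpretation of the target label and the tolerance on proximity differ, which is the formal distinction the paper develops.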
  15.
    Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence. Hajo Greif - 2022 - Minds and Machines 32 (1):111-133.
    The problem of epistemic opacity in Artificial Intelligence is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degree of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms but of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I first (...)
  16.
    Explanations in AI as Claims of Tacit Knowledge. Nardi Lam - 2022 - Minds and Machines 32 (1):135-158.
    As AI systems become increasingly complex, it may become unclear, even to the designer of a system, why exactly a system does what it does. This leads to a lack of trust in AI systems. To solve this, the field of explainable AI has been working on ways to produce explanations of these systems’ behaviors. Many methods in explainable AI, such as LIME, offer only a statistical argument for the validity of their explanations. However, some methods instead study the internal (...)
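The "statistical argument" the preceding abstract attributes to LIME-style methods can be sketched in a few lines (illustrative only; `black_box` and `lime_like` are hypothetical stand-ins, not the LIME library): sample perturbations around one instance, weight them by proximity, and fit a weighted linear surrogate whose coefficients are presented as the explanation.

```python
# Minimal sketch of a LIME-style local surrogate (hypothetical names;
# this is not the LIME library itself).
import numpy as np

def black_box(X):
    """Stand-in for an opaque model we can only query."""
    return (np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2 > 0.5).astype(float)

def lime_like(x, n_samples=5000, width=0.5, seed=0):
    rng = np.random.default_rng(seed)
    X = x + rng.normal(0.0, width, size=(n_samples, x.size))  # local perturbations
    y = black_box(X)
    k = np.exp(-np.sum((X - x) ** 2, axis=1) / width ** 2)    # proximity kernel
    A = np.hstack([np.ones((n_samples, 1)), X])               # intercept + features
    sk = np.sqrt(k)
    coef, *_ = np.linalg.lstsq(A * sk[:, None], y * sk, rcond=None)
    return coef[1:]   # local feature weights: the "explanation"

print(lime_like(np.array([0.3, 1.0])))
```

The explanation's warrant here is purely statistical (it depends on the sampling and the fit), which is the contrast the paper draws with explanations grounded in a system's internal structure.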
  17.
    The Automated Laplacean Demon: How ML Challenges Our Views on Prediction and Explanation. Sanja Srećković, Andrea Berber & Nenad Filipović - 2022 - Minds and Machines 32 (1):159-183.
    Certain characteristics make machine learning (ML) a powerful tool for processing large amounts of data, and also particularly unsuitable for explanatory purposes. There are worries that its increasing use in science may sideline the explanatory goals of research. We analyze the key characteristics of ML that might have implications for future directions in scientific research: epistemic opacity and ‘theory-agnostic’ modeling. These characteristics are further analyzed in a comparison of ML with traditional statistical methods, in order to demonstrate what (...)
  18.
    Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice. David S. Watson, Limor Gultchin, Ankur Taly & Luciano Floridi - 2022 - Minds and Machines 32 (1):185-218.
    Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence, a fast-growing research area that is so far lacking in firm theoretical foundations. In this article, an expanded version of a paper originally presented at the 37th Conference on Uncertainty in Artificial Intelligence, we attempt to fill this gap. Building on work in logic, probability, and causality, we establish the central role of (...)
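The roles the preceding abstract assigns to necessity and sufficiency admit a quick operational gloss (a simplified illustration, not the authors' algorithm; the toy model, reference distribution, and function names are hypothetical): estimate by sampling how often fixing a feature subset preserves the model's prediction, and how often intervening on that subset flips it.

```python
# Minimal sketch of sufficiency- and necessity-style probabilities for a
# feature subset (simplified illustration, not the authors' algorithm;
# model and reference distribution are hypothetical).
import numpy as np

rng = np.random.default_rng(1)
model = lambda X: (X[:, 0] + X[:, 1] > 1.0).astype(int)      # toy classifier

def sufficiency(x, subset, n=10_000):
    """How often the prediction is preserved when `subset` is held fixed
    at x's values and the remaining features are resampled."""
    X = rng.normal(0.0, 1.0, size=(n, x.size))               # reference samples
    X[:, subset] = x[subset]
    return np.mean(model(X) == model(x[None, :])[0])

def necessity(x, subset, n=10_000):
    """How often the prediction flips when only `subset` is resampled."""
    X = np.tile(x, (n, 1))
    X[:, subset] = rng.normal(0.0, 1.0, size=(n, len(subset)))
    return np.mean(model(X) != model(x[None, :])[0])

x = np.array([0.9, 0.9])
print(sufficiency(x, [0]), necessity(x, [0]))
```

A subset scoring high on both estimates is, in this rough sense, both sufficient and necessary for the prediction at x, the kind of factor the paper's framework is designed to identify formally.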
  19.
    Scientific Exploration and Explainable Artificial Intelligence. Carlos Zednik & Hannes Boelsen - 2022 - Minds and Machines 32 (1):219-239.
    Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future (...)