Minds and Machines

ISSNs: 0924-6495, 1572-8641

  1. Autonomous Force Beyond Armed Conflict. Alexander Blanchard - 2023 - Minds and Machines 33 (1):251-260.
    Proposals by the San Francisco Police Department (SFPD) to use bomb disposal robots for deadly force against humans have met with widespread condemnation. Media coverage of the furore has tended, incorrectly, to conflate these robots with autonomous weapon systems (AWS), the AI-based weapons used in armed conflict. These two types of systems should be treated as distinct since they have different sets of social, ethical, and legal implications. However, the conflation does raise a pressing question: what _if_ the SFPD had (...)
  2. How a Minimal Learning Agent can Infer the Existence of Unobserved Variables in a Complex Environment. Benjamin Eva, Katja Ried, Thomas Müller & Hans J. Briegel - 2023 - Minds and Machines 33 (1):185-219.
    According to a mainstream position in contemporary cognitive science and philosophy, the use of abstract compositional concepts is amongst the most characteristic indicators of meaningful deliberative thought in an organism or agent. In this article, we show how the ability to develop and utilise abstract conceptual structures can be achieved by a particular kind of learning agent. More specifically, we provide and motivate a concrete operational definition of what it means for these agents to be in possession of abstract concepts, (...)
  3. The Turing Test is a Thought Experiment. Bernardo Gonçalves - 2023 - Minds and Machines 33 (1):1-31.
    The Turing test has been studied and run as a controlled experiment and found to be underspecified and poorly designed. On the other hand, it has been defended and still attracts interest as a test for true artificial intelligence (AI). Scientists and philosophers regret the test's current status, acknowledging that the situation is at odds with the intellectual standards of Turing's works. This article refers to this as the Turing Test Dilemma, following the observation that the test has been under (...)
  4. A Dilemma for Dispositional Answers to Kripkenstein's Challenge. Andrea Guardo - 2023 - Minds and Machines 33 (1):135-152.
    Kripkenstein's challenge is usually described as being essentially about the use of a word in new kinds of cases, the old kinds of cases being commonly considered as non-problematic. I show that this way of conceiving the challenge is neither true to Kripke's intentions nor philosophically defensible: the Kripkean skeptic can question my answering "125" to the question "What is 68 plus 57?" even if that problem is one I have already encountered and answered. I then argue that once (...)
  5. Enactivism Meets Mechanism: Tensions & Congruities in Cognitive Science. Jonny Lee - 2023 - Minds and Machines 33 (1):153-184.
    Enactivism advances an understanding of cognition rooted in the dynamic interaction between an embodied agent and their environment, whilst new mechanism suggests that cognition is explained by uncovering the organised components underlying cognitive capacities. On the face of it, the mechanistic model's emphasis on localisable and decomposable mechanisms, often neural in nature, runs contrary to the enactivist ethos. Despite appearances, this paper argues that mechanistic explanations of cognition, being neither narrow nor reductive, and compatible with plausible iterations of ideas like (...)
  6. Computers as Interactive Machines: Can We Build an Explanatory Abstraction? Alice Martin, Mathieu Magnaudet & Stéphane Conversy - 2023 - Minds and Machines 33 (1):83-112.
    In this paper, we address the question of what current computers are from the point of view of human-computer interaction. In the early days of computing, the Turing machine (TM) was the cornerstone of the understanding of computers. The TM defines what can be computed and how computation can be carried out. However, in the last decades, computers have evolved and increasingly become interactive systems, reacting in real-time to external events in an ongoing loop. We argue that the TM (...)
  7. Correction to: The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems. Jakob Mökander, Margi Sheth, David S. Watson & Luciano Floridi - 2023 - Minds and Machines 33 (1):249-249.
  8. The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems. Jakob Mökander, Margi Sheth, David S. Watson & Luciano Floridi - 2023 - Minds and Machines 33 (1):221-248.
    Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level, ethical principles. However, there still exists a gap between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI ethics is the lack of a well-defined material scope. Put differently, the question of which systems and processes AI ethics principles ought to apply to remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical (...)
  9. Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI. Devesh Narayanan & Zhi Ming Tan - 2023 - Minds and Machines 33 (1):55-82.
    It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if (...)
  10. Grounding the Vector Space of an Octopus: Word Meaning from Raw Text. Anders Søgaard - 2023 - Minds and Machines 33 (1):33-54.
    Most, if not all, philosophers agree that computers cannot learn what words refer to from raw text alone. While many attacked Searle's Chinese Room thought experiment, no one seemed to question this most basic assumption. For how can computers learn something that is not in the data? Emily Bender and Alexander Koller (2020) recently presented a related thought experiment—the so-called Octopus thought experiment, which replaces the rule-based interlocutor of Searle's thought experiment with a neural language model. The Octopus (...)