Minds and Machines

ISSNs: 0924-6495, 1572-8641

  1. From Monitors to Monitors: A Primitive History. Troy K. Astarte - 2024 - Minds and Machines 34 (1):51-71.
    As computers became multi-component systems in the 1950s, handling the speed differentials efficiently was identified as a major challenge. The desire for better understanding and control of ‘concurrency’ spread into hardware, software, and formalism. This paper examines the way in which the problem emerged and was handled across various computing cultures from 1955 to 1985. In the machinic culture of the late 1950s, system programs called ‘monitors’ were used for directly managing synchronisation. Attempts to reframe synchronisation in the subsequent algorithmic (...)
  2. Towards a Benchmark for Scientific Understanding in Humans and Machines. Kristian Gonzalez Barman, Sascha Caron, Tom Claassen & Henk de Regt - 2024 - Minds and Machines 34 (1):1-16.
    Scientific understanding is a fundamental goal of science. However, there is currently no good way to measure the scientific understanding of agents, whether these be humans or Artificial Intelligence systems. Without a clear benchmark, it is challenging to evaluate and compare different levels of scientific understanding. In this paper, we propose a framework to create a benchmark for scientific understanding, utilizing tools from philosophy of science. We adopt a behavioral conception of understanding, according to which genuine understanding should be recognized (...)
  3. A Pragmatic Theory of Computational Artefacts. Alessandro G. Buda & Giuseppe Primiero - 2024 - Minds and Machines 34 (1):139-170.
    Some computational phenomena rely essentially on pragmatic considerations, and seem to undermine the independence of the specification from the implementation. These include software development, deviant uses, esoteric languages and recent data-driven applications. To account for them, the interaction between pragmatics, epistemology and ontology in computational artefacts seems essential, indicating the need to recover the role of the language metaphor. We propose a User Levels (ULs) structure as a pragmatic complement to the Levels of Abstraction (LoAs)-based structure defining the ontology and (...)
  4. Limits of Optimization. Cesare Carissimo & Marcin Korecki - 2024 - Minds and Machines 34 (1):117-137.
    Optimization is about finding the best available object with respect to an objective function. Mathematics and quantitative sciences have been highly successful in formulating problems as optimization problems, and constructing clever processes that find optimal objects from sets of objects. As computers have become readily available to most people, optimization and optimized processes play a very broad role in societies. It is not obvious, however, that the optimization processes that work for mathematics and abstract objects should be readily applied to (...)
  5. Contentless Representationalism? A Neglected Option Between Radical Enactivist and Predictive Processing Accounts of Representation. Dionysis Christias - 2024 - Minds and Machines 34 (1):1-21.
  6. True Turing: A Bird’s-Eye View. Edgar Daylight - 2024 - Minds and Machines 34 (1):29-49.
    Alan Turing is often portrayed as a materialist in secondary literature. In the present article, I suggest that Turing was instead an idealist, inspired by Cambridge scholars, Arthur Eddington, Ernest Hobson, James Jeans and John McTaggart. I outline Turing’s developing thoughts and his legacy in the USA to date. Specifically, I contrast Turing’s two notions of computability (both from 1936) and distinguish between Turing’s “machine intelligence” in the UK and the more well-known “artificial intelligence” in the USA. According to my (...)
  7. Epistemology Goes AI: A Study of GPT-3’s Capacity to Generate Consistent and Coherent Ordered Sets of Propositions on a Single-Input-Multiple-Outputs Basis. Marcelo de Araujo, Guilherme de Almeida & José Luiz Nunes - 2024 - Minds and Machines 34 (1):1-18.
    The more we rely on digital assistants, online search engines, and AI systems to revise our system of beliefs and increase our body of knowledge, the less we are able to resort to some independent criterion, unrelated to further digital tools, in order to assess the epistemic reliability of the outputs delivered by them. This raises some important questions for epistemology in general and pressing questions for applied epistemology in particular. In this paper, we propose an experimental method for (...)
  8. Anthropomorphising Machines and Computerising Minds: The Crosswiring of Languages between Artificial Intelligence and Brain & Cognitive Sciences. Luciano Floridi & Anna C. Nobre - 2024 - Minds and Machines 34 (1):1-9.
    The article discusses the process of “conceptual borrowing”, according to which, when a new discipline emerges, it develops its technical vocabulary also by appropriating terms from other neighbouring disciplines. The phenomenon is likened to Carl Schmitt’s observation that modern political concepts have theological roots. The authors argue that, through extensive conceptual borrowing, AI has ended up describing computers anthropomorphically, as computational brains with psychological properties, while brain and cognitive sciences have ended up describing brains and minds computationally and informationally, as (...)
  9. Leibniz and the Stocking Frame: Computation, Weaving and Knitting in the 17th Century. Michael Friedman - 2024 - Minds and Machines 34 (1):11-28.
    The comparison made by Ada Lovelace in 1843 between the Analytical Engine and the Jacquard loom is one of the well-known analogies between looms and computation machines. Given the fact that weaving – and textile production in general – is one of the oldest cultural techniques in human history, the question arises whether this was the first time that such a parallel was drawn. As this paper will show, centuries before Lovelace’s analogy, such a comparison was made by Gottfried Wilhelm (...)
  10. Computing Cultures: Historical and Philosophical Perspectives. Juan Luis Gastaldi - 2024 - Minds and Machines 34 (1):1-10.
  11. Three Early Formal Approaches to the Verification of Concurrent Programs. Cliff B. Jones - 2024 - Minds and Machines 34 (1):73-92.
    This paper traces a relatively linear sequence of early research approaches to the formal verification of concurrent programs. It does so forwards and then backwards in time. After briefly outlining the context, the key insights from three distinct approaches from the 1970s are identified (Ashcroft/Manna, Ashcroft (solo) and Owicki). The main technical material in the paper focuses on a specific program taken from the last published of the three pieces of research (Susan Owicki’s): her own verification of her Findpos example (...)
  12. The Man Behind the Curtain: Appropriating Fairness in AI. Marcin Korecki, Guillaume Köstner, Emanuele Martinelli & Cesare Carissimo - 2024 - Minds and Machines 34 (1):1-30.
    Our goal in this paper is to establish a set of criteria for understanding the meaning and sources of attributing (un)fairness to AI algorithms. To do so, we first establish that (un)fairness, like other normative notions, can be understood in a proper primary sense and in secondary senses derived by analogy. We argue that AI algorithms cannot be said to be (un)fair in the proper sense due to a set of criteria related to normativity and agency. However, we demonstrate how (...)
  13. Gamification, Side Effects, and Praise and Blame for Outcomes. Sven Nyholm - 2024 - Minds and Machines 34 (1):1-21.
    “Gamification” refers to adding game-like elements to non-game activities so as to encourage participation. Gamification is used in various contexts: apps on phones motivating people to exercise, employers trying to encourage their employees to work harder, social media companies trying to stimulate user engagement, and so on. Here, I focus on gamification with this property: the game-designer (a company or other organization) creates a “game” in order to encourage the players (the users) to bring about certain outcomes (...)
  14. We are Building Gods: AI as the Anthropomorphised Authority of the Past. Carl Öhman - 2024 - Minds and Machines 34 (1):1-18.
    This article argues that large language models (LLMs) should be interpreted as a form of gods. In a theological sense, a god is an immortal being that exists beyond time and space. This is clearly nothing like LLMs. In an anthropological sense, however, a god is rather defined as the personified authority of a group through time—a conceptual tool that molds a collective of ancestors into a unified agent or voice. This is exactly what LLMs are. They are products of (...)
  15. Philosophical Lessons for Emotion Recognition Technology. Rosalie Waelen - 2024 - Minds and Machines 34 (1):1-13.
    Emotion recognition technology uses artificial intelligence to make inferences about a person’s emotions, on the basis of their facial expressions, body language, tone of voice, or other types of input. Underlying such technology are a variety of assumptions about the manifestation, nature, and value of emotions. To assure the quality and desirability of emotion recognition technology, it is important to critically assess the assumptions embedded in the technology. Within philosophy, there is a long tradition of epistemological, ontological, phenomenological, and ethical (...)
  16. Informational Equivalence but Computational Differences? Herbert Simon on Representations in Scientific Practice. David Waszek - 2024 - Minds and Machines 34 (1):93-116.
    To explain why, in scientific problem solving, a diagram can be “worth ten thousand words,” Jill Larkin and Herbert Simon (1987) relied on a computer model: two representations can be “informationally” equivalent but differ “computationally,” just as the same data can be encoded in a computer in multiple ways, more or less suited to different kinds of processing. The roots of this proposal lay in cognitive psychology, more precisely in the “imagery debate” of the 1970s on whether there are image-like (...)