Minds and Machines

ISSNs: 0924-6495, 1572-8641


  1. The Hierarchical Correspondence View of Levels: A Case Study in Cognitive Science. Luke Kersten - 2024 - Minds and Machines 34 (18):1-21.
    There is a general conception of levels in philosophy which says that the world is arrayed into a hierarchy of levels and that there are different modes of analysis that correspond to each level of this hierarchy, what can be labelled the ‘Hierarchical Correspondence View of Levels’ (or HCL). The trouble is that despite its considerable lineage and general status in philosophy of science and metaphysics, the HCL has largely escaped analysis in specific domains of inquiry. The goal of this (...)
  2. Models of Possibilities Instead of Logic as the Basis of Human Reasoning. P. N. Johnson-Laird, Ruth M. J. Byrne & Sangeet S. Khemlani - 2024 - Minds and Machines 34 (3):1-22.
    The theory of mental models and its computer implementations have led to crucial experiments showing that no standard logic—the sentential calculus and all logics that include it—can underlie human reasoning. The theory replaces the logical concept of validity (the conclusion is true in all cases in which the premises are true) with necessity (conclusions describe no more than possibilities to which the premises refer). Many inferences are both necessary and valid. But experiments show that individuals make necessary inferences that are (...)
  3. The New Mechanistic Approach and Cognitive Ontology—Or: What role do (neural) mechanisms play in cognitive ontology? Beate Krickel - 2024 - Minds and Machines 34 (3):1-19.
    Cognitive ontology has become a popular topic in philosophy, cognitive psychology, and cognitive neuroscience. At its center is the question of which cognitive capacities should be included in the ontology of cognitive psychology and cognitive neuroscience. One common strategy for answering this question is to look at brain structures and determine the cognitive capacities for which they are responsible. Some authors interpret this strategy as a search for neural mechanisms, as understood by the so-called new mechanistic approach. In this article, (...)
  4. A sociotechnical system perspective on AI. Olya Kudina & Ibo van de Poel - 2024 - Minds and Machines 34 (3):1-9.
  5. Find the Gap: AI, Responsible Agency and Vulnerability. Shannon Vallor & Tillmann Vierkant - 2024 - Minds and Machines 34 (3):1-23.
    The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences shows that individual (...)
  6. AI Within Online Discussions: Rational, Civil, Privileged? Jonas Aaron Carstens & Dennis Friess - 2024 - Minds and Machines 34 (2):1-25.
    While early optimists have seen online discussions as potential spaces for deliberation, the reality of many online spaces is characterized by incivility and irrationality. Increasingly, AI tools are considered as a solution to foster deliberative discourse. Against the backdrop of previous research, we show that AI tools for online discussions heavily focus on the deliberative norms of rationality and civility. In the operationalization of those norms for AI tools, the complex deliberative dimensions are simplified, and the focus lies on the (...)
  7. Toward Sociotechnical AI: Mapping Vulnerabilities for Machine Learning in Context. Roel Dobbe & Anouk Wolters - 2024 - Minds and Machines 34 (2):1-51.
    This paper provides an empirical and conceptual account on seeing machine learning models as part of a sociotechnical system to identify relevant vulnerabilities emerging in the context of use. As ML is increasingly adopted in socially sensitive and safety-critical domains, many ML applications end up not delivering on their promises, and contributing to new forms of algorithmic harm. There is still a lack of empirical insights as well as conceptual tools and frameworks to properly understand and design for the impact (...)
  8. Tool-Augmented Human Creativity. Kjell Jørgen Hole - 2024 - Minds and Machines 34 (2):1-14.
    Creativity is the hallmark of human intelligence. Roli et al. (Frontiers in Ecology and Evolution 9:806283, 2022) state that algorithms cannot achieve human creativity. This paper analyzes cooperation between humans and intelligent algorithmic tools to compensate for algorithms’ limited creativity. The intelligent tools have functionality from the neocortex, the brain’s center for learning, reasoning, planning, and language. The analysis provides four key insights about human-tool cooperation to solve challenging problems. First, no neocortex-based tool without feelings can achieve human creativity. Second, (...)
  9. Towards Transnational Fairness in Machine Learning: A Case Study in Disaster Response Systems. Cem Kozcuer, Anne Mollen & Felix Bießmann - 2024 - Minds and Machines 34 (2):1-26.
    Research on fairness in machine learning (ML) has been largely focusing on individual and group fairness. With the adoption of ML-based technologies as assistive technology in complex societal transformations or crisis situations on a global scale these existing definitions fail to account for algorithmic fairness transnationally. We propose to complement existing perspectives on algorithmic fairness with a notion of transnational algorithmic fairness and take first steps towards an analytical framework. We exemplify the relevance of a transnational fairness assessment in a (...)
  10. Black-Box Testing and Auditing of Bias in ADM Systems. Tobias D. Krafft, Marc P. Hauer & Katharina Zweig - 2024 - Minds and Machines 34 (2):1-31.
    For years, the number of opaque algorithmic decision-making systems (ADM systems) with a large impact on society has been increasing: e.g., systems that compute decisions about future recidivism of criminals, credit worthiness, or the many small decision computing systems within social networks that create rankings, provide recommendations, or filter content. Concerns that such a system makes biased decisions can be difficult to investigate: be it by people affected, NGOs, stakeholders, governmental testing and auditing authorities, or other external parties. Scientific testing (...)
  11. Reflective Artificial Intelligence. Peter R. Lewis & Ştefan Sarkadi - 2024 - Minds and Machines 34 (2):1-30.
    As artificial intelligence (AI) technology advances, we increasingly delegate mental tasks to machines. However, today’s AI systems usually do these tasks with an unusual imbalance of insight and understanding: new, deeper insights are present, yet many important qualities that a human mind would have previously brought to the activity are utterly absent. Therefore, it is crucial to ask which features of minds have we replicated, which are missing, and if that matters. One core feature that humans bring to tasks, when (...)
  12. Regulation by Design: Features, Practices, Limitations, and Governance Implications. Kostina Prifti, Jessica Morley, Claudio Novelli & Luciano Floridi - 2024 - Minds and Machines 34 (2):1-23.
    Regulation by design (RBD) is a growing research field that explores, develops, and criticises the regulative function of design. In this article, we provide a qualitative thematic synthesis of the existing literature. The aim is to explore and analyse RBD’s core features, practices, limitations, and related governance implications. To fulfil this aim, we examine the extant literature on RBD in the context of digital technologies. We start by identifying and structuring the core features of RBD, namely the goals, regulators, regulatees, (...)
  13. A Genealogical Approach to Algorithmic Bias. Marta Ziosi, David Watson & Luciano Floridi - 2024 - Minds and Machines 34 (2):1-17.
    The Fairness, Accountability, and Transparency (FAccT) literature tends to focus on bias as a problem that requires ex post solutions (e.g. fairness metrics), rather than addressing the underlying social and technical conditions that (re)produce it. In this article, we propose a complementary strategy that uses genealogy as a constructive, epistemic critique to explain algorithmic bias in terms of the conditions that enable it. We focus on XAI feature attributions (Shapley values) and counterfactual approaches as potential tools to gauge these conditions (...)
  14. From Monitors to Monitors: A Primitive History. Troy K. Astarte - 2024 - Minds and Machines 34 (1):51-71.
    As computers became multi-component systems in the 1950s, handling the speed differentials efficiently was identified as a major challenge. The desire for better understanding and control of ‘concurrency’ spread into hardware, software, and formalism. This paper examines the way in which the problem emerged and was handled across various computing cultures from 1955 to 1985. In the machinic culture of the late 1950s, system programs called ‘monitors’ were used for directly managing synchronisation. Attempts to reframe synchronisation in the subsequent algorithmic (...)
  15. Towards a Benchmark for Scientific Understanding in Humans and Machines. Kristian Gonzalez Barman, Sascha Caron, Tom Claassen & Henk de Regt - 2024 - Minds and Machines 34 (1):1-16.
    Scientific understanding is a fundamental goal of science. However, there is currently no good way to measure the scientific understanding of agents, whether these be humans or Artificial Intelligence systems. Without a clear benchmark, it is challenging to evaluate and compare different levels of scientific understanding. In this paper, we propose a framework to create a benchmark for scientific understanding, utilizing tools from philosophy of science. We adopt a behavioral conception of understanding, according to which genuine understanding should be recognized (...)
  16. A Pragmatic Theory of Computational Artefacts. Alessandro G. Buda & Giuseppe Primiero - 2024 - Minds and Machines 34 (1):139-170.
    Some computational phenomena rely essentially on pragmatic considerations, and seem to undermine the independence of the specification from the implementation. These include software development, deviant uses, esoteric languages and recent data-driven applications. To account for them, the interaction between pragmatics, epistemology and ontology in computational artefacts seems essential, indicating the need to recover the role of the language metaphor. We propose a User Levels (ULs) structure as a pragmatic complement to the Levels of Abstraction (LoAs)-based structure defining the ontology and (...)
  17. Limits of Optimization. Cesare Carissimo & Marcin Korecki - 2024 - Minds and Machines 34 (1):117-137.
    Optimization is about finding the best available object with respect to an objective function. Mathematics and quantitative sciences have been highly successful in formulating problems as optimization problems, and constructing clever processes that find optimal objects from sets of objects. As computers have become readily available to most people, optimization and optimized processes play a very broad role in societies. It is not obvious, however, that the optimization processes that work for mathematics and abstract objects should be readily applied to (...)
  18. Contentless Representationalism? A Neglected Option Between Radical Enactivist and Predictive Processing Accounts of Representation. Dionysis Christias - 2024 - Minds and Machines 34 (1):1-21.
  19. True Turing: A Bird’s-Eye View. Edgar Daylight - 2024 - Minds and Machines 34 (1):29-49.
    Alan Turing is often portrayed as a materialist in secondary literature. In the present article, I suggest that Turing was instead an idealist, inspired by Cambridge scholars, Arthur Eddington, Ernest Hobson, James Jeans and John McTaggart. I outline Turing’s developing thoughts and his legacy in the USA to date. Specifically, I contrast Turing’s two notions of computability (both from 1936) and distinguish between Turing’s “machine intelligence” in the UK and the more well-known “artificial intelligence” in the USA. According to my (...)
  20. Epistemology Goes AI: A Study of GPT-3’s Capacity to Generate Consistent and Coherent Ordered Sets of Propositions on a Single-Input-Multiple-Outputs Basis. Marcelo de Araujo, Guilherme de Almeida & José Luiz Nunes - 2024 - Minds and Machines 34 (1):1-18.
    The more we rely on digital assistants, online search engines, and AI systems to revise our system of beliefs and increase our body of knowledge, the less we are able to resort to some independent criterion, unrelated to further digital tools, in order to assess the epistemic reliability of the outputs delivered by them. This raises important questions for epistemology in general and pressing questions for applied epistemology in particular. In this paper, we propose an experimental method for (...)
  21. Anthropomorphising Machines and Computerising Minds: The Crosswiring of Languages between Artificial Intelligence and Brain & Cognitive Sciences. Luciano Floridi & Anna C. Nobre - 2024 - Minds and Machines 34 (1):1-9.
    The article discusses the process of “conceptual borrowing”, according to which, when a new discipline emerges, it develops its technical vocabulary also by appropriating terms from other neighbouring disciplines. The phenomenon is likened to Carl Schmitt’s observation that modern political concepts have theological roots. The authors argue that, through extensive conceptual borrowing, AI has ended up describing computers anthropomorphically, as computational brains with psychological properties, while brain and cognitive sciences have ended up describing brains and minds computationally and informationally, as (...)
  22. Leibniz and the Stocking Frame: Computation, Weaving and Knitting in the 17th Century. Michael Friedman - 2024 - Minds and Machines 34 (1):11-28.
    The comparison made by Ada Lovelace in 1843 between the Analytical Engine and the Jacquard loom is one of the well-known analogies between looms and computation machines. Given the fact that weaving – and textile production in general – is one of the oldest cultural techniques in human history, the question arises whether this was the first time that such a parallel was drawn. As this paper will show, centuries before Lovelace’s analogy, such a comparison was made by Gottfried Wilhelm (...)
  23. Computing Cultures: Historical and Philosophical Perspectives. Juan Luis Gastaldi - 2024 - Minds and Machines 34 (1):1-10.
  25. Three Early Formal Approaches to the Verification of Concurrent Programs. Cliff B. Jones - 2024 - Minds and Machines 34 (1):73-92.
    This paper traces a relatively linear sequence of early research approaches to the formal verification of concurrent programs. It does so forwards and then backwards in time. After briefly outlining the context, the key insights from three distinct approaches from the 1970s are identified (Ashcroft/Manna, Ashcroft (solo) and Owicki). The main technical material in the paper focuses on a specific program taken from the last published of the three pieces of research (Susan Owicki’s): her own verification of her _Findpos_ example (...)
  26. The Man Behind the Curtain: Appropriating Fairness in AI. Marcin Korecki, Guillaume Köstner, Emanuele Martinelli & Cesare Carissimo - 2024 - Minds and Machines 34 (1):1-30.
    Our goal in this paper is to establish a set of criteria for understanding the meaning and sources of attributing (un)fairness to AI algorithms. To do so, we first establish that (un)fairness, like other normative notions, can be understood in a proper primary sense and in secondary senses derived by analogy. We argue that AI algorithms cannot be said to be (un)fair in the proper sense due to a set of criteria related to normativity and agency. However, we demonstrate how (...)
  27. Gamification, Side Effects, and Praise and Blame for Outcomes. Sven Nyholm - 2024 - Minds and Machines 34 (1):1-21.
    “Gamification” refers to adding game-like elements to non-game activities so as to encourage participation. Gamification is used in various contexts: apps on phones motivating people to exercise, employers trying to encourage their employees to work harder, social media companies trying to stimulate user engagement, and so on and so forth. Here, I focus on gamification with this property: the game-designer (a company or other organization) creates a “game” in order to encourage the players (the users) to bring about certain outcomes (...)
  28. We are Building Gods: AI as the Anthropomorphised Authority of the Past. Carl Öhman - 2024 - Minds and Machines 34 (1):1-18.
    This article argues that large language models (LLMs) should be interpreted as a form of gods. In a theological sense, a god is an immortal being that exists beyond time and space. This is clearly nothing like LLMs. In an anthropological sense, however, a god is rather defined as the personified authority of a group through time—a conceptual tool that molds a collective of ancestors into a unified agent or voice. This is exactly what LLMs are. They are products of (...)
  29. Philosophical Lessons for Emotion Recognition Technology. Rosalie Waelen - 2024 - Minds and Machines 34 (1):1-13.
    Emotion recognition technology uses artificial intelligence to make inferences about a person’s emotions, on the basis of their facial expressions, body language, tone of voice, or other types of input. Underlying such technology are a variety of assumptions about the manifestation, nature, and value of emotions. To assure the quality and desirability of emotion recognition technology, it is important to critically assess the assumptions embedded in the technology. Within philosophy, there is a long tradition of epistemological, ontological, phenomenological, and ethical (...)
  30. Informational Equivalence but Computational Differences? Herbert Simon on Representations in Scientific Practice. David Waszek - 2024 - Minds and Machines 34 (1):93-116.
    To explain why, in scientific problem solving, a diagram can be “worth ten thousand words,” Jill Larkin and Herbert Simon (1987) relied on a computer model: two representations can be “informationally” equivalent but differ “computationally,” just as the same data can be encoded in a computer in multiple ways, more or less suited to different kinds of processing. The roots of this proposal lay in cognitive psychology, more precisely in the “imagery debate” of the 1970s on whether there are image-like (...)