Minds and Machines

ISSNs: 0924-6495, 1572-8641

  1. ChatGPT-4 in the Turing Test. Ricardo Restrepo Echavarría - 2025 - Minds and Machines 35 (8):1-10.
    There has been considerable optimistic speculation on how well ChatGPT-4 would perform in a Turing Test. However, no minimally serious implementation of the test has been reported to have been carried out. This brief note documents the results of subjecting ChatGPT-4 to 10 Turing Tests, with different interrogators and participants. The outcome is tremendously disappointing for the optimists. Despite ChatGPT reportedly outperforming 99.9% of humans in a Verbal IQ test, it falls short of passing the Turing Test. In 9 out (...)
  2. (1 other version) Artificial Intelligence (AI) and Global Justice. Siavosh Sahebi & Paul Formosa - 2025 - Minds and Machines 35 (4):1-29.
    This paper provides a philosophically informed and robust account of the global justice implications of Artificial Intelligence (AI). We first discuss some of the key theories of global justice, before justifying our focus on the Capabilities Approach as a useful framework for understanding the context-specific impacts of AI on low- to middle-income countries. We then highlight some of the harms and burdens facing low- to middle-income countries within the context of both AI use and the AI supply chain, by analyzing the (...)
  3. The Historical Development of Ethics of Emerging Technologies. Philip A. E. Brey - 2025 - Minds and Machines 35 (2):1-9.
    This article traces the historical development of the ethics of emerging technologies. It argues that during the late 2000s and 2010s, the field of ethics of technology transformed from a fragmented, reactive, and methodologically underdeveloped discipline focused on mature technologies and lacking policy orientation into a more cohesive, proactive, methodologically sophisticated, and policy-focused field with a strong emphasis on emerging technologies. An agenda for this transition was set in Jim Moor’s seminal publication “Why We Need Better Ethics for Emerging Technologies”.
  4. In Honor of James Moor: A Grateful Retrospective. Charles M. Ess - 2025 - Minds and Machines 35 (2):1-6.
  5. Moor’s ‘Are There Decisions Computers Should Never Make?’. Deborah G. Johnson - 2025 - Minds and Machines 35 (2):1-8.
    ‘Are There Decisions Computers Should Never Make?’ is one of James H. Moor’s many groundbreaking papers in computer ethics, and it is one that I have thought a good deal about since its publication in 1979 and especially in recent years in relation to current discourse on AI. In this paper, I describe Jim’s analysis, reflect on its relevance to current thinking about AI, and take issue with several of his arguments. The conclusion of Jim’s paper is that computers should (...)
  6. How Do Social Media Algorithms Appear? A Phenomenological Response to the Black Box Metaphor. Anthony Longo - 2025 - Minds and Machines 35 (2):1-21.
    This article challenges the dominant ‘black box’ metaphor in critical algorithm studies by proposing a phenomenological framework for understanding how social media algorithms manifest themselves in user experience. While the black box paradigm treats algorithms as opaque, self-contained entities that exist only ‘behind the scenes’, this article argues that algorithms are better understood as genetic phenomena that unfold temporally through user-platform interactions. Recent scholarship in critical algorithm studies has already identified various ways in which algorithms manifest in user experience: through (...)
  7. The Quantum Panopticon: A Theory of Surveillance for the Quantum Era. Erik Olsson & Carl Öhman - 2025 - Minds and Machines 35 (2):1-22.
    The advent of quantum computing will compromise current asymmetric cryptography. Awaiting this moment, global superpowers are routinely collecting and storing encrypted data, so as to later decrypt it once sufficiently strong quantum computers are in place. We argue that this situation gives rise to a new mode of global surveillance that we refer to as a _quantum panopticon_. Unlike traditional forms of panoptic surveillance, the quantum panopticon introduces a _temporal axis_, whereby data subjects’ future pasts can be monitored from an (...)
  8. Persons, Unique Value and Avatars. Paula Sweeney - 2025 - Minds and Machines 35 (2):1-14.
    An individual human has value partly in virtue of their uniqueness. Personal avatar technology—technology which creates a digital replication of a real person—appears to have the potential to undermine that value. Here I explore if and how avatars might make humans less valuable by undermining the value that a human gains from being unique. Ultimately, I conclude that, while avatars cannot make humans no longer unique, they could significantly undermine the value that we place on human uniqueness. First, I argue (...)
  9. James Moor’s Privacy Framework: A Theory in Need of Further Exploration. Herman T. Tavani - 2025 - Minds and Machines 35 (2):1-7.
    This paper is intended as a tribute to the late James Moor. An esteemed Dartmouth professor, who published in many areas of philosophy, including logic, Moor is perhaps best remembered today for his pioneering work in the field of computer ethics. His seminal (and award-winning) article, “What Is Computer Ethics?” (_Metaphilosophy_, 1985) was highly influential both in defining and shaping the then nascent field of computer ethics. Many other computer-ethics-related papers followed over the next quarter century, in which Moor examined (...)
  10. Moor on Ethics for Emerging Technologies: Some Environmental Considerations. John Weckert - 2025 - Minds and Machines 35 (2):1-7.
    Around the turn of this century a number of emerging technologies were in the news, raising some potentially significant ethical questions. Given that they were emerging, they as yet had no, or very few, impacts, so it was not obvious how best to assess them ethically. Jim Moor addressed this issue and offered three suggestions for a better ethics for emerging technologies. His first was that ethics should be dynamic, that is, it should be an ongoing process before, during and (...)
  11. Physical Programmability. Nick Wiggershaus - 2025 - Minds and Machines 35 (2):1-29.
    This article delivers an account of what it is for a physical system to be programmable. Despite its significance in computing and beyond, today’s philosophical discourse on programmability is impoverished. This contribution offers a novel definition of _physical programmability_ as the degree to which the selected operations of an automaton can be reconfigured in a controlled way. The framework highlights several key insights: the constrained applicability of physical programmability to material automata, the characterization of selected operations within the neo-mechanistic framework, (...)
  12. Fairness in Algorithmic Profiling: The AMAS Case. Eva Achterhold, Monika Mühlböck, Nadia Steiber & Christoph Kern - 2025 - Minds and Machines 35 (1):1-30.
    We study a controversial application of algorithmic profiling in the public sector, the Austrian AMAS system. AMAS was supposed to help caseworkers at the Public Employment Service (PES) Austria to allocate support measures to job seekers based on their predicted chance of (re-)integration into the labor market. Shortly after its release, AMAS was criticized for its apparent unequal treatment of job seekers based on gender and citizenship. We systematically investigate the AMAS model using a novel real-world dataset of young job (...)
  13. Correction: Submarine Cables and the Risks to Digital Sovereignty. Abra Ganz, Martina Camellini, Emmie Hine, Claudio Novelli, Huw Roberts & Luciano Floridi - 2025 - Minds and Machines 35 (1):1-1.
  14. An App a Day will (Probably Not) Keep the Doctor Away: An Evidence Audit of Health and Medical Apps Available on the Apple App Store. Jessica Morley, Joel Laitila, Joseph S. Ross, Joel Schamroth, Joe Zhang & Luciano Floridi - 2025 - Minds and Machines 35 (1):1-30.
    There are more than 350,000 health apps available in public app stores. The extolled benefits of health apps are numerous and well documented. However, there are also concerns that poor-quality apps, marketed directly to consumers, threaten the tenets of evidence-based medicine and expose individuals to the risk of harm. This study addresses this issue by assessing the overall quality of evidence publicly available to support the effectiveness claims of health apps marketed directly to consumers. To assess the quality of evidence (...)
  15. On Twelve Shades of Green: Assessing the Levels of Environmental Protection in the Artificial Intelligence Act. Ugo Pagallo - 2025 - Minds and Machines 35 (1):1-19.
    The paper examines twelve legal regimes related to the governance and regulation of both the environmental risks and opportunities brought forth by the use of AI systems and AI models in the Artificial Intelligence Act (‘AIA’) of EU law. The assessment of risks and opportunities of AI related to the environment includes the high-risk management procedures under Art. 9 of the AIA, the “fundamental rights impact assessment” of Art. 27, and the codes of conduct of Art. 95. These provisions are (...)
  16. The Testimony Gap: Machines and Reasons. Robert Sparrow & Gene Flenady - 2025 - Minds and Machines 35 (1):1-16.
    Most people who have considered the matter have concluded that machines cannot be moral agents. Responsibility for acting on the outputs of machines must always rest with a human being. A key problem for the ethical use of AI, then, is to ensure that it does not block the attribution of responsibility to humans or lead to individuals being unfairly held responsible for things over which they had no control. This is the “responsibility gap”. In this paper, we argue that (...)