  • ChatGPT: deconstructing the debate and moving it forward. Mark Coeckelbergh & David J. Gunkel - forthcoming - AI and Society:1-11.
    Large language models such as ChatGPT enable users to automatically produce text but also raise ethical concerns, for example about authorship and deception. This paper analyses and discusses some key philosophical assumptions in these debates, in particular assumptions about authorship and language and—our focus—the use of the appearance/reality distinction. We show that there are alternative views of what goes on with ChatGPT that do not rely on this distinction. For this purpose, we deploy the two-phased approach of deconstruction and (...)
    6 citations
  • How will the state think with ChatGPT? The challenges of generative artificial intelligence for public administrations. Thomas Cantens - forthcoming - AI and Society:1-12.
    This article explores the challenges surrounding generative artificial intelligence (GenAI) in public administrations and its impact on human‒machine interactions within the public sector. First, it aims to deconstruct the reasons for distrust in GenAI in public administrations. The risks currently linked to GenAI in the public sector are often similar to those of conventional AI. However, while some risks remain pertinent, others are less so because GenAI has limited explainability, which, in return, limits its uses in public administrations. Confidentiality, marking (...)
  • Do we really need a “Digital Humanism”? A critique based on post-human philosophy of technology and socio-legal techniques. Federica Buongiorno & Xenia Chiaramonte - 2024 - Journal of Responsible Technology 18 (C):100080.
  • Towards a Benchmark for Scientific Understanding in Humans and Machines. Kristian Gonzalez Barman, Sascha Caron, Tom Claassen & Henk de Regt - 2024 - Minds and Machines 34 (1):1-16.
    Scientific understanding is a fundamental goal of science. However, there is currently no good way to measure the scientific understanding of agents, whether these be humans or Artificial Intelligence systems. Without a clear benchmark, it is challenging to evaluate and compare different levels of scientific understanding. In this paper, we propose a framework to create a benchmark for scientific understanding, utilizing tools from philosophy of science. We adopt a behavioral conception of understanding, according to which genuine understanding should be recognized (...)
  • That’s Why It is Worth Continuing to Think About Our Successors – A Reply to Erler. Andrea Lavazza & Murilo Vilaça - 2024 - Philosophy and Technology 37 (2):1-3.
  • Affective Artificial Agents as sui generis Affective Artifacts. Marco Facchin & Giacomo Zanotti - forthcoming - Topoi.
    AI-based technologies are increasingly pervasive in a number of contexts, and our affective and emotional lives are no exception. In this article, we analyze one way in which AI-based technologies can affect them. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial agents and their distinctive (...)
    1 citation
  • Therapeutic Chatbots as Cognitive-Affective Artifacts. J. P. Grodniewicz & Mateusz Hohol - forthcoming - Topoi:1-13.
    Conversational Artificial Intelligence (CAI) systems (also known as AI “chatbots”) are among the most promising examples of the use of technology in mental health care. With already millions of users worldwide, CAI is likely to change the landscape of psychological help. Most researchers agree that existing CAIs are not “digital therapists” and using them is not a substitute for psychotherapy delivered by a human. But if they are not therapists, what are they, and what role can they play in mental (...)
  • Digital humanism as a bottom-up ethics. Gemma Serrano, Francesco Striano & Steven Umbrello - 2024 - Journal of Responsible Technology 18 (June):100082.
    In this paper, we explore a new perspective on digital humanism, emphasizing the centrality of multi-stakeholder dialogues and a bottom-up approach to surfacing stakeholder values. This approach starkly contrasts with existing frameworks, such as the Vienna Manifesto's top-down digital humanism, which hinges on pre-established first principles. Our approach provides a more flexible, inclusive framework that captures a broader spectrum of ethical considerations, particularly those pertinent to the digital realm. We apply our model to two case studies, comparing the insights generated (...)
  • Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach. Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - Available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capable of simulating roles and personas, may lead (...)
  • Large Language Models, Agency, and Why Speech Acts are Beyond Them (For Now) – A Kantian-Cum-Pragmatist Case. Reto Gubelmann - 2024 - Philosophy and Technology 37 (1):1-24.
    This article begins with the question of whether current or foreseeable transformer-based large language models (LLMs), such as the ones powering OpenAI’s ChatGPT, could be language users in a way comparable to humans. It answers the question negatively, presenting the following argument. Apart from niche uses, to use language means to act. But LLMs are unable to act because they lack intentions. This, in turn, is because they are the wrong kind of being: agents with intentions need to be autonomous (...)
  • Ethics of generative AI and manipulation: a design-oriented research agenda. Michael Klenk - 2024 - Ethics and Information Technology 26 (1):1-15.
    Generative AI enables automated, effective manipulation at scale. Despite the growing general ethical discussion around generative AI, the specific manipulation risks remain inadequately investigated. This article outlines essential inquiries encompassing conceptual, empirical, and design dimensions of manipulation, pivotal for comprehending and curbing manipulation risks. By highlighting these questions, the article underscores the necessity of an appropriate conceptualisation of manipulation to ensure the responsible development of Generative AI technologies.
    1 citation
  • AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony. Ori Freiman - forthcoming - Social Epistemology.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
  • Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence. Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  • Becoming a Knower: Fabricating Knowing Through Coaction. Marie-Theres Fester-Seeger - 2024 - Social Epistemology 38 (1):49-69.
    This paper takes a step back from considering expertise as a social phenomenon. One should investigate how people become knowers before assigning expertise to a person’s actions. Using a temporal-sensitive systemic ethnography, a case study shows how undergraduate students form a social system out of necessity as they fabricate knowledge around an empty wording like ‘conscious living’. Tracing the engagement with students and tutor to recursive moments of coaction, I argue that, through the subtleties of bodily movements, people incorporate the (...)
  • Authorship and ChatGPT: a Conservative View. René van Woudenberg, Chris Ranalli & Daniel Bracker - 2024 - Philosophy and Technology 37 (1):1-26.
    Is ChatGPT an author? Given its capacity to generate something that reads like human-written text in response to prompts, it might seem natural to ascribe authorship to ChatGPT. However, we argue that ChatGPT is not an author. ChatGPT fails to meet the criteria of authorship because it lacks the ability to perform illocutionary speech acts such as promising or asserting, lacks the fitting mental states like knowledge, belief, or intention, and cannot take responsibility for the texts it produces. Three perspectives (...)
  • Charting the Terrain of Artificial Intelligence: a Multidimensional Exploration of Ethics, Agency, and Future Directions. Partha Pratim Ray & Pradip Kumar Das - 2023 - Philosophy and Technology 36 (2):1-7.
    This comprehensive analysis dives deep into the intricate interplay between artificial intelligence (AI) and human agency, examining the remarkable capabilities and inherent limitations of large language models (LLMs) such as GPT-3 and ChatGPT. The paper traces the complex trajectory of AI's evolution, highlighting its operation based on statistical pattern recognition, devoid of self-consciousness or innate comprehension. As AI permeates multiple spheres of human life, it raises substantial ethical, legal, and societal concerns that demand immediate attention and deliberation. The metaphorical illustration (...)
    1 citation
  • AI and the future of humanity: ChatGPT-4, philosophy and education – Critical responses. Michael A. Peters, Liz Jackson, Marianna Papastephanou, Petar Jandrić, George Lazaroiu, Colin W. Evers, Bill Cope, Mary Kalantzis, Daniel Araya, Marek Tesar, Carl Mika, Lei Chen, Chengbing Wang, Sean Sturm, Sharon Rider & Steve Fuller - forthcoming - Educational Philosophy and Theory.
    Michael A. Peters (Beijing Normal University): ChatGPT is an AI chatbot released by OpenAI on November 30, 2022, with a ‘stable release’ on February 13, 2023. It belongs to OpenAI’s GPT-3 family (generativ...
    1 citation
  • Is Academic Enhancement Possible by Means of Generative AI-Based Digital Twins? Sven Nyholm - 2023 - American Journal of Bioethics 23 (10):44-47.
    Large Language Models (LLMs) “assign probabilities to sequences of text. When given some initial text, they use these probabilities to generate new text. Large language models are language models u...
    2 citations
  • Artificial Intelligence and Human Enhancement: Can AI Technologies Make Us More (Artificially) Intelligent? Sven Nyholm - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (1):76-88.
    This paper discusses two opposing views about the relation between artificial intelligence (AI) and human intelligence: on the one hand, a worry that heavy reliance on AI technologies might make people less intelligent and, on the other, a hope that AI technologies might serve as a form of cognitive enhancement. The worry relates to the notion that if we hand over too many intelligence-requiring tasks to AI technologies, we might end up with fewer opportunities to train our own intelligence. Concerning (...)
    3 citations
  • Conceptual Engineering and Philosophy of Technology: Amelioration or Adaptation? Jeroen Hopster & Guido Löhr - 2023 - Philosophy and Technology 36 (4):1-17.
    Conceptual Engineering (CE) is thought to be generally aimed at ameliorating deficient concepts. In this paper, we challenge this assumption: we argue that CE is frequently undertaken with the orthogonal aim of conceptual adaptation. We develop this thesis with reference to the interplay between technology and concepts. Emerging technologies can exert significant pressure on conceptual systems and spark ‘conceptual disruption’. For example, advances in Artificial Intelligence raise the question of whether AIs are agents or mere objects, which can be construed (...)
    4 citations
  • Irony with a Point: Alan Turing and His Intelligent Machine Utopia. Bernardo Gonçalves - 2023 - Philosophy and Technology 36 (3):1-31.
    Turing made strong statements about the future of machines in society. This article asks how they can be interpreted to advance our understanding of Turing’s philosophy. His irony has been largely caricatured or minimized by historians, philosophers, scientists, and others. Turing is often portrayed as an irresponsible scientist, or associated with childlike manners and polite humor. While these representations of Turing have been widely disseminated, another image suggested by one of his contemporaries, that of a nonconformist, utopian, and radically progressive (...)
  • Large language models in medical ethics: useful but not expert. Andrea Ferrario & Nikola Biller-Andorno - forthcoming - Journal of Medical Ethics.
    Large language models (LLMs) have now entered the realm of medical ethics. In a recent study, Balas et al. examined GPT-4, a commercially available LLM, assessing its performance in generating responses to diverse medical ethics cases. Their findings reveal that GPT-4 demonstrates an ability to identify and articulate complex medical ethical issues, although its proficiency in encoding the depth of real-world ethical dilemmas remains an avenue for improvement. Investigating the integration of LLMs into medical ethics decision-making appears to be (...)
    1 citation
  • The Hidden Costs of ChatGPT: A Call for Greater Transparency. Matthew Elmore - 2023 - American Journal of Bioethics 23 (10):47-49.
    For decades, healthcare has relied on data-driven algorithms to guide clinical practice. Recent advances in machine learning have opened up new possibilities in the field, enabling detailed analyse...
    1 citation
  • Artificial Intelligence Implications for Academic Cheating: Expanding the Dimensions of Responsible Human-AI Collaboration with ChatGPT. Jo Ann Oravec - 2023 - Journal of Interactive Learning Research 34 (2).
    Cheating is a growing academic and ethical concern in higher education. This article examines the rise of artificial intelligence (AI) generative chatbots for use in education and provides a review of research literature and relevant scholarship concerning the cheating-related issues involved and their implications for pedagogy. The technological “arms race” that involves cheating-detection system developers versus technology savvy students is attracting increased attention to cheating. AI has added new dimensions to academic cheating challenges as students (as well as faculty and (...)