About this topic
Summary Ethical issues associated with AI are proliferating and rising to popular attention as machines engineered to perform tasks traditionally requiring biological intelligence become ubiquitous. Civil infrastructure, including energy grids and mass-transit systems, is increasingly moderated by increasingly intelligent machines. Ethical issues include the responsibility and blameworthiness of such systems, with implications for the engineers who must design them responsibly and for the philosophers who must interpret their impacts, both potential and actual, in order to advise ethical designers. For example, who or what is responsible when an accident results from an AI system error, from a design flaw, or from proper operation outside of anticipated constraints, as with a semi-autonomous automobile or an actuarial algorithm? These issues fall under the heading of Ethics of AI, as well as under related categories such as those dedicated to autonomous vehicles, algorithmic fairness, or artificial system safety. Finally, as AIs become increasingly intelligent, there is legitimate concern over the potential for AIs to manage human systems according to AI values, rather than as directly programmed by human designers. These concerns call into question the long-term safety of intelligent systems, not only for individual human beings but for the human race and life on Earth as a whole. These issues and many others are central to Ethics of AI, and works focusing on them can be found here.
Key works Bostrom manuscript, Müller 2014, Müller 2016, Etzioni & Etzioni 2017, Dubber et al. 2020, Tasioulas 2019, Müller 2021
Introductions Müller 2013, Gunkel 2012, Coeckelbergh 2020, Gordon et al. 2021, Müller 2022, Jecker & Nakazawa 2022, Mao & Shi-Kupfer 2023, Dietrich et al. 2021; see also https://plato.stanford.edu/entries/ethics-ai/
Contents
3140 found
Material to categorize
  1. ChatGPT i przyszłość nauczania: jak AI zmienia krajobraz edukacyjny.Karolina Tytko, Mirosław Roszkowski, Marek Malucha, Łukasz Walusiak, Natalia Rylko & Karlygash Nurtazina - 2023 - Ebiś (Edukacja Biologiczna I Środowiskowa) 80 (2):182-195.
    The launch of ChatGPT in November 2022, and the possibility of Internet users accessing it for free, was undoubtedly a groundbreaking event that had a huge impact on various areas of life. Focusing on education, and especially on the Prussian model of teaching in Poland, one can be tempted to compare this phenomenon to the Big Bang. Almost a year after the debut of ChatGPT, as public awareness of this tool grows, a new model of education is slowly (...)
  2. Going Whole Hog: A Philosophical Defense of AI Cognition.Herman Cappelen & Josh Dever - manuscript
  3. The Collapse of Predictive Compression: Why Probabilistic Intelligence Fails Without Prime-Chiral Resonance.Devin Bostick - manuscript
    The current paradigm in artificial intelligence relies on probabilistic compression and entropy optimization. While powerful in reactive domains, these models fundamentally fail to produce coherent, deterministic intelligence. They approximate output without encoding the structural causes of cognition, leading to instability across recursion, contradiction, and long-range coherence. This paper introduces prime-chiral resonance (PCR) as the lawful substrate underpinning structured emergence. PCR replaces probability with phase-aligned intelligence, where signals are selected not by likelihood but by resonance with deterministic coherence (...)
  4. Security through Unity: Europe's Challenges after Ukraine Crisis.Paul Ertl (ed.) - 2024 - Vienna: Ministry of Defence, Republic of Austria.
  5. A taxonomy of epistemic injustice in the context of AI and the case for generative hermeneutical erasure.Warmhold Jan Thomas Mollema - manuscript
    Whether related to machine learning models’ epistemic opacity, algorithmic classification systems’ discriminatory automation of testimonial prejudice, the distortion of human beliefs via the hallucinations of generative AI, the inclusion of the global South in global AI governance, the execution of bureaucratic violence via algorithmic systems, or located in the interaction with conversational artificial agents, epistemic injustice related to AI is a growing concern. Based on a proposed general taxonomy of epistemic injustice, this paper first sketches a taxonomy of the types (...)
  6. The Global Brain Argument: Nodes, Computroniums and the AI Megasystem (Target Paper for Special Issue).Susan Schneider - forthcoming - Disputatio.
    The Global Brain Argument contends that many of us are, or will be, part of a global brain network that includes both biological and artificial intelligences (AIs), such as generative AIs with increasing levels of sophistication. Today’s internet ecosystem is but a hodgepodge of fairly unintegrated programs, but it is evolving by the minute. Over time, technological improvements will facilitate smarter AIs and faster, higher-bandwidth information transfer and greater integration between devices in the internet-of-things. The Global Brain (GB) Argument says (...)
  7. Homo Cyberneticus vs. Homo Economicus. Эволюция человека в эпоху технологий (In Russian) // Homo Cyberneticus vs. Homo Economicus. Human Evolution in the Age of Technology.Oleg N. Gurov - 2025 - Artificial Societies 20 (1).
    The paper explores the dialectic of human nature in the age of digital technologies. To this end, the paper contrasts the classical model of Homo Economicus, a rational agent for whom existence is reduced to a set of factors of economic nature, and the concept of Homo Cyberneticus. This concept arises from the symbiosis between humans and algorithms, in which the products of technology turn from external tools into a component of cognitive processes. The author proves that neurointerfaces, AI and (...)
  8. La Estética y Ética Del Arte Creado Por Inteligencia Artificial.Carlos Vera Hoyos - manuscript
    This essay analyzes whether art created by artificial intelligence can truly be considered art, and the ethical considerations of using it, partially or totally, to produce works of art. Drawing on the institutional theory of art, whose precursor was George T. Dickie, it argues that AI-generated products can indeed be considered art if they are recognized as such by cultural institutions; by contrast, from the standpoint of the theory of art for art's sake, defended by Théophile Gautier, Victor Cousin, Benjamin (...)
  9. A Novel Type of Precautionary Argument for Situations of Severe Uncertainty in Science and Policy.Roberto Fumagalli - forthcoming - Philosophy of the Social Sciences.
    This paper articulates and defends a novel type of precautionary argument for situations of severe uncertainty in science and policy, which I term precautionary slippery slope argument. The paper explicates the structure of precautionary slippery slope arguments, identifies the main factors that bear on the strength of these arguments, and illustrates how the proponents of such arguments can address several influential objections put forward against standard slippery slope arguments and other prominent forms of precautionary reasoning.
  10. Learning alone: Language models, overreliance, and the goals of education.Leonard Dung & Dominik Balg - manuscript
    The development and ubiquitous availability of systems based on large language models (LLMs) poses a plurality of potentials and risks for education in schools and universities. In this paper, we provide an analysis and discussion of the overreliance concern as one specific risk: that students might fail to acquire important capacities, or be inhibited in the acquisition of these capacities, because they overly rely on LLMs. We use the distinction between global and local goals of education to guide our investigation. In (...)
  11. A Case Study in Acceleration AI Ethics: The Telus GenAI Conversational Agent.James Brusseau - manuscript
    Acceleration ethics addresses the tension between innovation and safety in artificial intelligence. The acceleration argument is that risks raised by innovation should be answered with still more innovating. This paper summarizes the theoretical position, and then shows how acceleration ethics works in a real case. To begin, the paper summarizes acceleration ethics as composed of five elements: innovation solves innovation problems, innovation is intrinsically valuable, the unknown is encouraging, governance is decentralized, ethics is embedded. Subsequently, the paper illustrates the acceleration (...)
  12. AI Ethics' Institutional Turn.Jocelyn Maclure & Alexis Morin-Martel - 2025 - Digital Society 4.
    Over the last few years, various public, private, and NGO entities have adopted a staggering number of non-binding ethical codes to guide the development of artificial intelligence. However, this seemingly failed to drive better ethical practices within AI organizations. In light of this observation, this paper aims to reevaluate the roles the ethics of AI can play to have a meaningful impact on the development and implementation of AI systems. In doing so, we challenge the notion that AI ethics should (...)
  13. Uncovering the gap: challenging the agential nature of AI responsibility problems.Joan Llorca Albareda - 2025 - AI and Ethics:1-14.
    In this paper, I will argue that the responsibility gap arising from new AI systems is reducible to the problem of many hands and collective agency. Systematic analysis of the agential dimension of AI will lead me to outline a disjunctive between the two problems. Either we reduce individual responsibility gaps to the many hands, or we abandon the individual dimension and accept the possibility of responsible collective agencies. Depending on which conception of AI agency we begin with, the responsibility (...)
  14. An Algorithmic Perpetrator, or Why We Need to Acknowledge the Many Things We Do Not (Yet) Know.Kristina Khutsishvili - 2024 - Depictions.
    Rapid technological developments may exacerbate the victimhood already experienced by vulnerable individuals and communities. At the same time, broad societal anxieties induced by technology lead to the perception of algorithms, these entities of the unknown, as perpetrators. In this essay, I argue that these tendencies can be addressed by a nuanced process of technological co-creation and by the fostering of a public discourse in which “experts” and “public” are united in the acknowledgment of a shared vulnerability before the unknown, whether (...)
  15. Are current AI systems capable of well-being?James Fanciullo - 2025 - Asian Journal of Philosophy 4 (1):1-10.
    Recently, Simon Goldstein and Cameron Domenico Kirk-Giannini have argued that certain existing AI systems are capable of well-being. They consider the three leading approaches to well-being—hedonism, desire satisfactionism, and the objective list approach—and argue that theories of these kinds plausibly imply that some current AI systems are capable of welfare. In this paper, I argue that the leading versions of each of these theories do not imply this. I conclude that we have strong reason to doubt that current AI systems (...)
  16. No right to an explanation.Brett Karlan & Henrik Kugelberg - forthcoming - Philosophy and Phenomenological Research.
    An increasing number of complex and important decisions are now being made with the aid of opaque algorithms. This has led to calls from both theorists and legislators for the implementation of a right to an explanation for algorithmic decisions. In this paper, we argue that, in most cases and for most kinds of explanations, there is no such right. After differentiating a number of different things that might be meant by a ‘right to an explanation,’ we argue that, for (...)
  17. 道枢论(Daoshulun)-The theory of the Pivot of the Dao.Kefan Jiang - manuscript
    This paper proposes the theory of the Pivot of the Dao (Daoshulun, DSL), aiming to investigate certain issues through the lens of recursivity and non-recursivity. The paper is divided into two main sections. The first section systematically expounds the theoretical foundation of the D-P framework, defining the dialectical relationship between recursivity (P) and non-recursivity (D). Through five core propositions, DSL asserts that the essence of hierarchical evolution lies in the eternal game of recursive chains. The second section explores DSL’s (...)
  18. Authentic Artificial Love.Ariela Tubert & Justin Tiehen - forthcoming - In Henry Shevlin, AI in Society: Relationships (Oxford Intersections). Oxford University Press.
    Often in romantic relationships, people want partners whose moral, political, and religious values align with their own. This article connects this point to the project of value alignment in AI research, in which researchers aim to design artificial intelligence systems that behave in ways that are in accordance with human values. There are good reasons to pursue the project of value alignment in connection with romantic chatbots—artificial intelligence systems designed to allow human users to simulate or replicate elements of romantic (...)
  19. On Role-Reversible Judgments and Related Democratic Objections to AI Judges.Amin Ebrahimi Afrouzi - 2023 - Journal of Criminology and Criminal Law 114.
    In a recent article, Kiel Brennan-Marquez and Stephen E. Henderson argue that replacing human judges with AI would violate the role-reversibility ideal of democratic governance. Unlike human judges, they argue, AI judges are not reciprocally vulnerable to the process and effects of their own decisions. I argue that role-reversibility, though a formal ideal of democratic governance, is in the service of substantive ends that may be independently achieved under AI judges. Thus, although role-reversibility is necessary for democratic governance when human (...)
  20. The Testimony Gap: Machines and Reasons.Robert Sparrow & Gene Flenady - 2025 - Minds and Machines 35 (1):1-16.
    Most people who have considered the matter have concluded that machines cannot be moral agents. Responsibility for acting on the outputs of machines must always rest with a human being. A key problem for the ethical use of AI, then, is to ensure that it does not block the attribution of responsibility to humans or lead to individuals being unfairly held responsible for things over which they had no control. This is the “responsibility gap”. In this paper, we argue that (...)
  21. Biopolitical Control (In Russian) // Воплощенная власть: киборгизация как механизм биополитического контроля.Oleg N. Gurov - 2024 - Political Conceptology 1 (4):23-33.
    The article explores the links between cyborgisation and biopolitics, examining the impact of technology on the mechanisms of power and control in society. The author analyses how the integration of digital technologies into the human body and consciousness transforms traditional forms of biopolitical control. The paper uses the concepts of Michel Foucault and Gilles Deleuze to analyse the transformation of power and its forms in the context of blurring boundaries between man and machine. At the same time, the author analyses (...)
  22. Grenzen Künstlicher Intelligenz.Markus Maier & Benjamin Rathgeber (eds.) - 2025 - Stuttgart: Kohlhammer.
    Current developments in artificial intelligence convey an impression of limitless possibilities. They radiate into all areas of society and substantially shape our everyday practices. As with all major technological transformations, the question arises as to their future potential, but conversely also as to their principled limits. In this tension, philosophical reflection can provide orientation through conceptual clarity and make a clarifying contribution that helps to sensibly distinguish serious technical possibilities from unreasonable speculation. The present volume contributes (...)
  23. Chatbots of the dead.Amy Kurzweil & Daniel Story - 2025 - Aeon.
    We can now create compelling experiences of talking with our dead. Is this ghoulish, therapeutic or something else again?
  24. Construct Validity in Automated Counterterrorism Analysis.Adrian K. Yee - 2025 - Philosophy of Science 92 (1):1-18.
    Governments and social scientists are increasingly developing machine learning methods to automate the process of identifying terrorists in real time and predict future attacks. However, current operationalizations of “terrorist” in artificial intelligence are difficult to justify given three issues that remain neglected: insufficient construct legitimacy, insufficient criterion validity, and insufficient construct validity. I conclude that machine learning methods should be at most used for the identification of singular individuals deemed terrorists and not for identifying possible terrorists from some more general (...)
  25. "Responsibility" Plus "Gap" Equals "Problem".Marc Champagne - 2025 - In Johanna Seibt, Peter Fazekas & Oliver Santiago Quick, Social Robots with AI: Prospects, Risks, and Responsible Methods. Amsterdam: IOS Press. pp. 244–252.
    Peter Königs recently argued that, while autonomous robots generate responsibility gaps, such gaps need not be considered problematic. I argue that Königs’ compromise dissolves under analysis since, on a proper understanding of what “responsibility” is and what “gap” (metaphorically) means, their joint endorsement must repel an attitude of indifference. So, just as “calamities that happen but don’t bother anyone” makes no sense, the idea of “responsibility gaps that exist but leave citizens and ethicists unmoved” makes no sense.
  26. Social Robots with AI: Prospects, Risks, and Responsible Methods.Johanna Seibt, Peter Fazekas & Oliver Santiago Quick (eds.) - 2025 - Amsterdam: IOS Press.
  27. Explicability as an AI Principle: Technology and Ethics in Cooperation.Moto Kamiura - forthcoming - Proceedings of the 39th Annual Conference of the Japanese Society for Artificial Intelligence, 2025.
    This paper categorizes current approaches to AI ethics into four perspectives and briefly summarizes them: (1) Case studies and technical trend surveys, (2) AI governance, (3) Technologies for AI alignment, (4) Philosophy. In the second half, we focus on the fourth perspective, the philosophical approach, within the context of applied ethics. In particular, the explicability of AI may be an area in which scientists, engineers, and AI developers are expected to engage more actively relative to other ethical issues in AI.
  28. Explainability Is Necessary for AI’s Trustworthiness.Ning Fan - 2025 - Philosophy and Technology 38 (1):1-5.
    In a recent article in this journal, Baron (2025) argues that we can appropriately trust unexplainable artificial intelligence (AI) systems, so explainability is not necessary for AI’s trustworthiness. In this commentary, I argue that Baron is wrong. I first offer a positive argument for the claim that explainability is necessary for trustworthiness. Drawing on this argument, I then show that Baron’s argument for thinking otherwise fails.
  29. Beauty Filters in Self-Perception: The Distorted Mirror Gazing Hypothesis.Gloria Andrada - 2025 - Topoi:1-12.
    Beauty filters are automated photo editing tools that use artificial intelligence and computer vision to detect facial features and modify them, allegedly improving a face’s physical appearance and attractiveness. Widespread use of these filters has raised concern due to their potentially damaging psychological effects. In this paper, I offer an account that examines the effect that interacting with such filters has on self-perception. I argue that when looking at digitally-beautified versions of themselves, individuals are looking at AI-curated distorted mirrors. This (...)
  30. Preservation or Transformation: A Daoist Guide to Griefbots.Pengbo Liu - forthcoming - In Henry Shevlin, AI in Society: Relationships (Oxford Intersections). Oxford University Press.
    Griefbots are chatbots modeled on the personalities of deceased individuals, designed to assist with the grieving process and, according to some, to continue relationships with loved ones after their physical passing. The essay examines the promises and perils of griefbots from a Daoist perspective. According to the Daoist philosopher Zhuangzi, death is a natural and inevitable phenomenon, a manifestation of the constant changes and transformations in the world. This approach emphasizes adaptability, flexibility, and openness to alternative ways of relating to (...)
  31. Artificially sentient beings: Moral, political, and legal issues.Fırat Akova - 2023 - New Techno-Humanities 3 (1):41-48.
    The emergence of artificially sentient beings raises moral, political, and legal issues that deserve scrutiny. First, it may be difficult to understand the well-being elements of artificially sentient beings and theories of well-being may have to be reconsidered. For instance, as a theory of well-being, hedonism may need to expand the meaning of happiness and suffering or it may run the risk of being irrelevant. Second, we may have to compare the claims of artificially sentient beings with the claims of (...)
  32. A Roadmap for Governing AI: Technology Governance and Power-Sharing Liberalism.Danielle Allen, Woojin Lim, Sarah Hubbard, Allison Stanger, Shlomit Wagman, Kinney Zalesne & Omoaholo Omoakhalen - 2025 - AI and Ethics 4 (4).
    This paper aims to provide a roadmap for governing AI. In contrast to the reigning paradigms, we argue that AI governance should be not merely a reactive, punitive, status-quo-defending enterprise, but rather the expression of an expansive, proactive vision for technology—to advance human flourishing. Advancing human flourishing in turn requires democratic/political stability and economic empowerment. To accomplish this, we build on a new normative framework that will give humanity its best chance to reap the full benefits, while avoiding the dangers, (...)
  33. Distribution, Recognition, and Just Medical AI.Zachary Daus - 2025 - Philosophy and Technology 38 (1):1-17.
    Medical artificial intelligence (AI) systems are value-laden technologies that can simultaneously encourage and discourage conflicting values that may all be relevant for the pursuit of justice. I argue that the predominant theory of healthcare justice, the Rawls-inspired approach of Norman Daniels, neither adequately acknowledges such conflicts nor explains if and how they can be resolved. By juxtaposing Daniels’s theory of healthcare justice with Axel Honneth’s and Nancy Fraser’s respective theories of justice, I draw attention to one such conflict. Medical AI may (...)
  34. (1 other version)Attention, Moral Skill, and Algorithmic Recommendation.Nick Schuster & Seth Lazar - 2025 - Philosophical Studies 182 (1):159-184.
    Recommender systems are artificial intelligence technologies, deployed by online platforms, that model our individual preferences and direct our attention to content we’re likely to engage with. As the digital world has become increasingly saturated with information, we’ve become ever more reliant on these tools to efficiently allocate our attention. And our reliance on algorithmic recommendation may, in turn, reshape us as moral agents. While recommender systems could in principle enhance our moral agency by enabling us to cut through the information (...)
  35. Deception and manipulation in generative AI.Christian Tarsney - forthcoming - Philosophical Studies.
    Large language models now possess human-level linguistic abilities in many contexts. This raises the concern that they can be used to deceive and manipulate on unprecedented scales, for instance spreading political misinformation on social media. In future, agentic AI systems might also deceive and manipulate humans for their own purposes. In this paper, first, I argue that AI-generated content should be subject to stricter standards against deception and manipulation than we ordinarily apply to humans. Second, I offer new characterizations of (...)
  36. Technology, Liberty, and Guardrails.Kevin Mills - 2025 - AI and Ethics 5:39-46.
    Technology companies are increasingly being asked to take responsibility for the technologies they create. Many of them are rising to the challenge. One way they do this is by implementing “guardrails”: restrictions on functionality that prevent people from misusing their technologies (per some standard of misuse). While there can be excellent reasons for implementing guardrails (and doing so is sometimes morally obligatory), I argue that the unrestricted authority to implement guardrails is incompatible with proper respect for user freedom, and is (...)
  37. Health AI Poses Distinct Harms and Potential Benefits for Disabled People.Charles Binkley, Joel Michael Reynolds & Andrew Schuman - 2025 - Nature Medicine 1.
    This piece in Nature Medicine notes the risks that incorporation of AI systems into health care poses to disabled patients and proposes ways to avoid them and instead create benefit.
  38. 50 preguntas sobre tecnologías para un envejecimiento activo y saludable. Edición española.Francisco Florez-Revuelta, Alin Ake-Kob, Pau Climent-Perez, Paulo Coelho, Liane Colonna, Laila Dahabiyeh, Carina Dantas, Esra Dogru-Huzmeli, Hazım Kemal Ekenel, Aleksandar Jevremovic, Nina Hosseini-Kivanani, Aysegul Ilgaz, Mladjan Jovanovic, Andrzej Klimczuk, Maksymilian M. Kuźmicz, Petre Lameski, Ferlanda Luna, Natália Machado, Tamara Mujirishvili, Zada Pajalic, Galidiya Petrova, Nathalie G. S. Puaschitz, Maria Jose Santofimia, Agusti Solanas, Wilhelmina van Staalduinen & Ziya Ata Yazici - 2024 - Alicante: University of Alicante.
    This handbook on technologies for active and healthy ageing, also known as Active Assisted Living (AAL), was created as part of the COST Action GoodBrother, which ran from 2020 to 2024. COST Actions are European research programmes that promote international collaboration, bringing together researchers, practitioners, and institutions to address important societal challenges. GoodBrother has focused on the ethical and privacy issues (...)
  39. A Capability Approach to AI Ethics.Emanuele Ratti & Mark Graves - 2025 - American Philosophical Quarterly 62 (1):1-16.
    We propose a conceptualization and implementation of AI ethics via the capability approach. We aim to show that conceptualizing AI ethics through the capability approach has two main advantages for AI ethics as a discipline. First, it helps clarify the ethical dimension of AI tools. Second, it provides guidance to implementing ethical considerations within the design of AI tools. We illustrate these advantages in the context of AI tools in medicine, by showing how ethics-based auditing of AI tools in medicine (...)
  40. Do It Yourself Content and the Wisdom of the Crowds.Dallas Amico-Korby, Maralee Harrell & David Danks - 2025 - Erkenntnis:1-29.
    Many social media platforms enable (nearly) anyone to post (nearly) anything. One clear downside of this permissiveness is that many people appear bad at determining who to trust online. Hacks, quacks, climate change deniers, vaccine skeptics, and election deniers have all gained massive followings in these free markets of ideas, and many of their followers seem to genuinely trust them. At the same time, there are many cases in which people seem to reliably determine who to trust online. Consider, for (...)
  41. Can Chatbots Preserve Our Relationships with the Dead?Stephen M. Campbell, Pengbo Liu & Sven Nyholm - forthcoming - Journal of the American Philosophical Association.
    Imagine that you are given access to an AI chatbot that compellingly mimics the personality and speech of a deceased loved one. If you start having regular interactions with this “thanabot,” could this new relationship be a continuation of the relationship you had with your loved one? And could a relationship with a thanabot preserve or replicate the value of a close human relationship? To the first question, we argue that a relationship with a thanabot cannot be a true continuation (...)
  42. ¿Cómo integrar la ética aplicada a la inteligencia artificial en el currículo? Análisis y recomendaciones desde el feminismo de la ciencia y de datos.G. Arriagada Bruneau & Javiera Arias - 2024 - Revista de filosofía (Chile) 81:137-160.
    Abstract:This article examines the incorporation of applied ethics into artificial intelligence (AI) within Chilean university curricula, emphasizing the urgent need to implement an integrated framework of action. Through a documentary analysis, it becomes evident that most higher education programs do not explicitly include AI ethics courses in their curricula, highlighting the need for institutionalizing this integration systematically. In response, we propose an approach grounded in feminist science and data feminism, advocating for the inclusion of diverse perspectives and experiences in the (...)
  43. Moral parallax: challenges between dignity, AI, and virtual violence.Pablo De la Vega - 2024 - Trayectorias Humanas Trascontinentales 18:116-128.
    Virtual reality is not only a prowess of technological advancement and AI, but also an element that extends the horizons of human existence and complicates the way of approaching various phenomena of the physical world, for example, violence. Its practice in virtuality leads to a series of challenges, especially when virtual reality is considered as genuine reality. This text delves into virtual violence, the influence of AI on it and the problems that its conception implies. To analyze this phenomenon, parallax (...)
  44. A hybrid marketplace of ideas.Tomer Jordi Chaffer, Dontrail Cotlage & Justin Goldston - manuscript
    The convergence of humans and artificial intelligence (AI) systems introduces new dynamics into the cultural and intellectual landscape. Complementing emerging cultural evolution concepts such as machine culture, AI agents represent a significant techno-sociological development, particularly within the anthropological study of Web3 as a community focused on decentralization through blockchain. Despite their growing presence, the cultural significance of AI agents remains largely unexplored in academic literature. Toward this end, we conceived hybrid netnography, a novel interdisciplinary approach that examines the cultural and (...)
  45. The Better Choice? The Status Quo versus Radical Human Enhancement.Madeleine Hayenhjelm - 2024 - The Journal of Ethics 2024:1-19.
    Can it be rational to favour the status quo when the alternatives to the status quo promise considerable increases in overall value? For instance, can it be rational to favour the status quo over radical human enhancement? A reasonable response to these questions would be to say that it can only be rational if the status quo is indeed the better choice on some measure. In this paper, I argue that it can be rational to favour the status quo over (...)
  46. AI responsibility gap: not new, inevitable, unproblematic.Huzeyfe Demirtas - 2025 - Ethics and Information Technology 27 (1):1-10.
    Who is responsible for a harm caused by AI, or a machine or system that relies on artificial intelligence? Given that current AI is neither conscious nor sentient, it’s unclear that AI itself is responsible for it. But given that AI acts independently of its developer or user, it’s also unclear that the developer or user is responsible for the harm. This gives rise to the so-called responsibility gap: cases where AI causes a harm, but no one is responsible for (...)
  47. AI through the looking glass: an empirical study of structural social and ethical challenges in AI.Mark Ryan, Nina De Roo, Hao Wang, Vincent Blok & Can Atik - 2024 - AI and Society 1 (1):1-17.
    This paper examines how professionals (N = 32) working on artificial intelligence (AI) view structural AI ethics challenges like injustices and inequalities beyond individual agents' direct intention and control. This paper answers the research question: What are professionals’ perceptions of the structural challenges of AI (in the agri-food sector)? This empirical paper shows that it is essential to broaden the scope of ethics of AI beyond micro- and meso-levels. While ethics guidelines and AI ethics often focus on the responsibility of (...)
  48. A Bias Network Approach (BNA) to Encourage Ethical Reflection Among AI Developers.Gabriela Arriagada-Bruneau, Claudia López & Alexandra Davidoff - 2025 - Science and Engineering Ethics 31 (1):1-29.
    We introduce the Bias Network Approach (BNA) as a sociotechnical method for AI developers to identify, map, and relate biases across the AI development process. This approach addresses the limitations of what we call the "isolationist approach to AI bias," a trend in AI literature where biases are seen as separate occurrences linked to specific stages in an AI pipeline. Dealing with these multiple biases can trigger a sense of excessive overload in managing each potential bias individually or promote the (...)
  49. Book review: Nyholm, Sven (2023): This is technology ethics. An introduction. [REVIEW]Michael W. Schmidt - 2024 - TATuP - Zeitschrift Für Technikfolgenabschätzung in Theorie Und Praxis 33 (3):80–81.
    Have you been surprised by the recent development and diffusion of generative artificial intelligence (AI)? Many institutions of civil society have been caught off guard, which provides them with motivation to think ahead. And as many new plausible pathways of socio-technical development are opening up, a growing interest in technology ethics that addresses our corresponding moral uncertainties is warranted. In Sven Nyholm’s words, “[t]he field of technology ethics is absolutely exploding at the moment” (p. 262), and so the publication of (...)
  50. AI Romance and Misogyny: A Speech Act Analysis.A. G. Holdier & Kelly Weirich - 2025 - Oxford Intersections.
    Through the lens of feminist speech act theory, this paper argues that artificial intelligence romance systems objectify and subordinate nonvirtual women. AI romance systems treat their users as consumers, offering them relational invulnerability and control over their (usually feminized) digital romantic partner. This paper argues that, though the output of AI chatbots may not generally constitute speech, the framework offered by an AI romance system communicates an unjust perspective on intimate relationships. Through normalizing controlling one’s intimate partner, these systems operate (...)