Robot Ethics

Edited by Vincent C. Müller (Universität Erlangen-Nürnberg)
About this topic
Summary: Robot ethics concerns the ethical problems raised by the use of robots, the ethical status of robots themselves, and the attempt to make robots behave ethically (the latter is often called "machine ethics"). On PhilPapers, the long-term risk to humanity from AI and robotics is covered under "Ethics of Artificial Intelligence" and "Artificial Intelligence Safety".
Key works: A classic discussion is Wallach & Allen 2008, and a recent textbook is Tzafestas 2016. Collections of papers include Lin et al 2011 and Veruggio et al 2011 (and, earlier, Capurro & Nagenborg 2009). Classic problems include the use of robots in war (see Di Nucci & Santoni de Sio 2016) and in healthcare, responsibility for robots' actions, the need to adjust human ethical and legal norms to robotics, and the overall impact on humanity. Further sources on the field are collected at http://www.pt-ai.org/TG-ELS/
Introductions: Consult the systematic survey Müller 2020 (written for the Stanford Encyclopedia of Philosophy). A fine short introduction is Asaro 2006; see also the introductions in Lin et al 2011, Veruggio et al 2011, and Capurro & Nagenborg 2009, as well as the collection Capurro manuscript.

Contents (showing 1–50 of 492 entries)
  1. Can a robot lie? Markus Kneer - manuscript
    The potential capacity for robots to deceive has received considerable attention recently. Many papers focus on the technical possibility for a robot to engage in deception for beneficial purposes (e.g. in education or health). In this short experimental paper, I focus on a more paradigmatic case: Robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment with 399 participants which explores the following three (...)
  2. Virtues, robots, and the enactive self. Anco Peeters - manuscript
    Virtue ethics enjoys new-found attention in philosophy of technology and philosophical psychology. This attention informs the growing realization that virtue has an important role to play in the ethical evaluation of human–technology relations. But it remains unclear which cognitive processes ground such interactions in both their regular and virtuous forms. This paper proposes that an embodied, enactive cognition approach aptly captures the various ways persons and artefacts interact, while at the same time avoiding the explanatory problems its functionalist alternative faces. (...)
  3. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”. Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  4. If robots are people, can they be made for profit? Commercial implications of robot personhood. Bartek Chomanski - forthcoming - AI and Ethics.
    It could become technologically possible to build artificial agents instantiating whatever properties are sufficient for personhood. It is also possible, if not likely, that such beings could be built for commercial purposes. This paper asks whether such commercialization can be handled in a way that is not morally reprehensible, and answers in the affirmative. There exists a morally acceptable institutional framework that could allow for building artificial persons for commercial gain. The paper first considers the minimal ethical requirements that any (...)
  5. Deontology and Safe Artificial Intelligence. William D’Alessandro - forthcoming - Philosophical Studies:1-24.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance principles. I (...)
  6. Commentary: Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure. Geoff Keeling - forthcoming - Frontiers in Behavioral Neuroscience.
  7. Regulatory challenges of robotics: some guidelines for addressing legal and ethical issues. Ronald Leenes, Erica Palmerini, Bert-Jaap Koops, Andrea Bertolini, Pericle Salvini & Federica Lucivero - forthcoming - Law, Innovation and Technology.
    Robots are slowly, but certainly, entering people's professional and private lives. They require the attention of regulators due to the challenges they present to existing legal frameworks and the new legal and ethical questions they raise. This paper discusses four major regulatory dilemmas in the field of robotics: how to keep up with technological advances; how to strike a balance between stimulating innovation and the protection of fundamental rights and values; whether to affirm prevalent social norms or nudge social norms (...)
  8. The Point of Blaming AI Systems. Hannah Altehenger & Leonhard Menges - 2024 - Journal of Ethics and Social Philosophy 27 (2).
    As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes sense (...)
  9. How AI Systems Can Be Blameworthy. Hannah Altehenger, Leonhard Menges & Peter Schulte - 2024 - Philosophia:1-24.
    AI systems, like self-driving cars, healthcare robots, or Autonomous Weapon Systems, already play an increasingly important role in our lives and will do so to an even greater extent in the near future. This raises a fundamental philosophical question: who is morally responsible when such systems cause unjustified harm? In the paper, we argue for the admittedly surprising claim that some of these systems can themselves be morally responsible for their conduct in an important and everyday sense of the term—the (...)
  10. Tailoring responsible research and innovation to the translational context: the case of AI-supported exergaming. Sabrina Blank, Celeste Mason, Frank Steinicke & Christian Herzog - 2024 - Ethics and Information Technology 26 (2):1-16.
    We discuss the implementation of Responsible Research and Innovation (RRI) within a project for the development of an AI-supported exergame for assisted movement training, outline outcomes and reflect on methodological opportunities and limitations. We adopted the responsibility-by-design (RbD) standard (CEN CWA 17796:2021) supplemented by methods for collaborative, ethical reflection to foster and support a shift towards a culture of trustworthiness inherent to the entire development process. An embedded ethicist organised the procedure to instantiate a collaborative learning effort and implement RRI (...)
  11. Impossibility of Artificial Inventors. Matt Blaszczyk - 2024 - Intellectual Property Forum 137:39-48.
    Recently, the United Kingdom Supreme Court decided that only natural persons can be considered inventors. A year before, the United States Court of Appeals for the Federal Circuit issued a similar decision. In fact, so have many courts around the world. This Article analyses these decisions, argues that the courts got it right, and finds that artificial inventorship is at odds with patent law doctrine, theory, and philosophy. The Article challenges the intellectual property (IP) post-humanists, exposing the analytical (...)
  12. The Ethics of Automating Therapy. Jake Burley, James J. Hughes, Alec Stubbs & Nir Eisikovits - 2024 - IEET White Papers.
    The mental health crisis and loneliness epidemic have sparked a growing interest in leveraging artificial intelligence (AI) and chatbots as a potential solution. This report examines the benefits and risks of incorporating chatbots in mental health treatment. AI is used for mental health diagnosis and treatment decision-making and to train therapists on virtual patients. Chatbots are employed as always-available intermediaries with therapists, flagging symptoms for human intervention. But chatbots are also sold as stand-alone virtual therapists or as friends and lovers. (...)
  13. ChatGPT: towards AI subjectivity. Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
  14. Artificial Intelligence and Universal Values. Jay Friedenberg - 2024 - UK: Ethics Press.
    The field of value alignment, or more broadly machine ethics, is becoming increasingly important as artificial intelligence developments accelerate. By ‘alignment’ we mean giving a generally intelligent software system the capability to act in ways that are beneficial, or at least minimally harmful, to humans. There are a large number of techniques that are being experimented with, but this work often fails to specify what values exactly we should be aligning. When making a decision, an agent is supposed to maximize (...)
  15. Understanding Sophia? On human interaction with artificial agents. Thomas Fuchs - 2024 - Phenomenology and the Cognitive Sciences 23 (1):21-42.
    Advances in artificial intelligence (AI) create an increasing similarity between the performance of AI systems or AI-based robots and human communication. They raise the questions of whether it is possible to communicate with, understand, and even empathically perceive artificial agents; whether we should ascribe actual subjectivity and thus quasi-personal status to them beyond a certain level of simulation; and what the impact of an increasing dissolution of the distinction between simulated and real encounters will be. (1) To answer these questions, the paper (...)
  16. Engineered Wisdom for Learning Machines. Brett Karlan & Colin Allen - 2024 - Journal of Experimental and Theoretical Artificial Intelligence 36 (2):257-272.
    We argue that the concept of practical wisdom is particularly useful for organizing, understanding, and improving human-machine interactions. We consider the relationship between philosophical analysis of wisdom and psychological research into the development of wisdom. We adopt a practical orientation that suggests a conceptual engineering approach is needed, where philosophical work involves refinement of the concept in response to contributions by engineers and behavioral scientists. The former are tasked with encoding as much wise design as possible into machines themselves, as (...)
  17. Fiduciary requirements for virtual assistants. Leonie Koessler - 2024 - Ethics and Information Technology 26 (2):1-18.
    Virtual assistants (VAs), like Amazon’s Alexa, Google’s Assistant, and Apple’s Siri, are on the rise. However, despite allegedly being ‘assistants’ to users, they ultimately help firms to maximise profits. With more and more tasks and leeway bestowed upon VAs, the severity as well as the extent of conflicts of interest between firms and users increase. This article builds on the common law field of fiduciary law to argue why and how regulators should address this phenomenon. First, the functions of VAs (...)
  18. A NAO Robot Performing Religious Practices. Anna Puzio - 2024 - ET-Studies 15 (1):129-140.
    In Sect. 2, I will introduce what religious robots are and present examples of such robots. Then, in Sect. 3, I will discuss my project with a NAO robot at the Katholikentag. In Sect. 4, I will discuss anthropological and ethical questions related to religious robots. Thus, I will outline the direction in which research on religious robots can go, where the challenges lie, and highlight two key advantages. Finally, in Sect. 5, I conclude with an outlook for future research (...)
  19. Towards an Eco-Relational Approach: Relational Approaches Must Be Applied in Ethics and Law. Anna Puzio - 2024 - Philosophy and Technology 37 (67):1-5.
    Relational approaches are gaining more and more importance in the philosophy of technology. This raises the critical question of how they can be implemented in applied ethics, law, and practice. In “Extremely Relational Robots: Implications for Law and Ethics”, Nancy S. Jecker (2024) comments on my article “Not Relational Enough? Towards an Eco-Relational Approach in Robot Ethics” (Puzio, 2024), in which I present a deep relational, “eco-relational approach”. In this reply, I address two of Jecker’s criticisms: in Section 3, I (...)
  20. The entangled human being – a new materialist approach to anthropology of technology. Anna Puzio - 2024 - AI Ethics.
    Technological advancements raise anthropological questions: How do humans differ from technology? Which human capabilities are unique? Is it possible for robots to exhibit consciousness or intelligence, capacities once taken to be exclusively human? Despite the evident need for an anthropological lens in both societal and research contexts, the philosophical anthropology of technology has not been established as a set discipline with a defined set of theories, especially concerning emerging technologies. In this paper, I will utilize a New Materialist approach, focusing (...)
  21. A Case for 'Killer Robots': Why in the Long Run Martial AI May Be Good for Peace. Ognjen Arandjelović - 2023 - Journal of Ethics, Entrepreneurship and Technology 3 (1).
    Purpose: The remarkable increase of sophistication of artificial intelligence in recent years has already led to its widespread use in martial applications, the potential of so-called 'killer robots' ceasing to be a subject of fiction. -/- Approach: Virtually without exception, this potential has generated fear, as evidenced by a mounting number of academic articles calling for the ban on the development and deployment of lethal autonomous robots (LARs). In the present paper I start with an analysis of the existing ethical (...)
  22. Mental time-travel, semantic flexibility, and A.I. ethics. Marcus Arvan - 2023 - AI and Society 38 (6):2577-2596.
    This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed ‘general ethical dilemma analyzer,’ GenEth. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I (...)
  23. Mitigating emotional risks in human-social robot interactions through virtual interactive environment indication. Aorigele Bao, Yi Zeng & Enmeng Lu - 2023 - Humanities and Social Sciences Communications 2023.
    Humans often unconsciously perceive social robots involved in their lives as partners rather than mere tools, imbuing them with qualities of companionship. This anthropomorphization can lead to a spectrum of emotional risks, such as deception, disappointment, and reverse manipulation, that existing approaches struggle to address effectively. In this paper, we argue that a Virtual Interactive Environment (VIE) exists between humans and social robots, which plays a crucial role and demands necessary consideration and clarification in order to mitigate potential emotional risks. (...)
  24. Robot Ethics. Mark Coeckelbergh (2022). Cambridge, MIT Press. vii + 191 pp, $16.95 (pb). [REVIEW] Nicholas Barrow - 2023 - Journal of Applied Philosophy (5):970-972.
  25. Artificial Dispositions: Investigating Ethical and Metaphysical Issues. William A. Bauer & Anna Marmodoro (eds.) - 2023 - New York: Bloomsbury.
    We inhabit a world not only full of natural dispositions independent of human design, but also artificial dispositions created by our technological prowess. How do these dispositions, found in automation, computation, and artificial intelligence applications, differ metaphysically from their natural counterparts? This collection investigates artificial dispositions: what they are, the roles they play in artificial systems, and how they impact our understanding of the nature of reality, the structure of minds, and the ethics of emerging technologies. It is divided into (...)
  26. Knowledge representation and acquisition for ethical AI: challenges and opportunities. Vaishak Belle - 2023 - Ethics and Information Technology 25 (1):1-12.
    Machine learning (ML) techniques have become pervasive across a range of different applications, and are now widely used in areas as disparate as recidivism prediction, consumer credit-risk analysis, and insurance pricing. Likewise, in the physical world, ML models are critical components in autonomous agents such as robotic surgeons and self-driving cars. Among the many ethical dimensions that arise in the use of ML technology in such applications, analyzing morally permissible actions is both immediate and profound. For example, there is the (...)
  27. Should the State Prohibit the Production of Artificial Persons? Bartek Chomanski - 2023 - Journal of Libertarian Studies 27.
    This article argues that criminal law should not, in general, prevent the creation of artificially intelligent servants who achieve humanlike moral status, even though it may well be immoral to construct such beings. In defending this claim, a series of thought experiments intended to evoke clear intuitions is proposed, and presuppositions about any particular theory of criminalization or any particular moral theory are kept to a minimum.
  28. The seven troubles with norm-compliant robots. Tom N. Coggins & Steffen Steinert - 2023 - Ethics and Information Technology 25 (2):1-15.
    Many researchers from robotics, machine ethics, and adjacent fields seem to assume that norms represent good behavior that social robots should learn to benefit their users and society. We would like to complicate this view and present seven key troubles with norm-compliant robots: (1) norm biases, (2) paternalism, (3) tyrannies of the majority, (4) pluralistic ignorance, (5) paths of least resistance, (6) outdated norms, and (7) technologically-induced norm change. Because discussions of why norm-compliant robots can be problematic are noticeably absent (...)
  29. Reasons to Punish Autonomous Robots. Zac Cogley - 2023 - The Gradient 14.
    I here consider the reasonableness of punishing future autonomous military robots. I argue that it is an engineering desideratum that these devices be responsive to moral considerations as well as human criticism and blame. Additionally, I argue that someday it will be possible to build such machines. I use these claims to respond to the no subject of punishment objection to deploying autonomous military robots, the worry being that an “accountability gap” could result if the robot committed a war crime. (...)
  30. Les revendications de droits pour les robots : constructions et conflits autour d’une éthique de la robotique. Charles Corval - 2023 - Implications Philosophiques.
    This work examines contemporary claims of rights for robots. It presents the main forms of argument that have been developed in favour of ethical consideration or positive rights for these machines. It relates these arguments to an action-research project in order to offer a critical assessment of the idea of robot rights. Finally, it shows the complex relationship between narratives of modernity and the claim of rights for robots. (...)
  31. The Weaponization of Artificial Intelligence: What The Public Needs to be Aware of. Birgitta Dresp-Langley - 2023 - Frontiers in Artificial Intelligence 6 (1154184):1-6.
    Technological progress has brought about the emergence of machines that have the capacity to take human lives without human control. These represent an unprecedented threat to humankind. This paper starts from the example of chemical weapons, now banned worldwide by the Geneva protocol, to illustrate how technological development initially aimed at the benefit of humankind has, ultimately, produced what is now called the “Weaponization of Artificial Intelligence (AI)”. Autonomous Weapon Systems (AWS) fail the so-called discrimination principle, yet, the wider public (...)
  32. AI and the Law: Can Legal Systems Help Us Maximize Paperclips while Minimizing Deaths? Mihailis E. Diamantis, Rebekah Cochran & Miranda Dam - 2023 - In Gregory Robson & Jonathan Y. Tsou (eds.), Technology Ethics: A Philosophical Introduction and Readings. New York, NY, USA: Routledge.
    This Chapter provides a short undergraduate introduction to ethical and philosophical complexities surrounding the law’s attempt (or lack thereof) to regulate artificial intelligence. -/- Swedish philosopher Nick Bostrom proposed a simple thought experiment known as the paperclip maximizer. What would happen if a machine (the “PCM”) were given the sole goal of manufacturing as many paperclips as possible? It might learn how to transact money, source metal, or even build factories. The machine might also eventually realize that humans pose a (...)
  33. (1 other version) Robots, Rebukes, and Relationships: Confucian Ethics and the Study of Human-Robot Interactions. Alexis Elder - 2023 - Res Philosophica 100 (1):43-62.
    The status and functioning of shame is contested in moral psychology. In much of anglophone philosophy and psychology, it is presumed to be largely destructive, while in Confucian philosophy and many East Asian communities, it is positively associated with moral development. Recent work in human-robot interaction offers a unique opportunity to investigate how shame functions while controlling for confounding variables of interpersonal interaction. One research program suggests a Confucian strategy for using robots to rebuke participants, but results from experiments with (...)
  34. What Confucian Ethics Can Teach Us About Designing Caregiving Robots for Geriatric Patients. Alexis Elder - 2023 - Digital Society 2 (1).
    Caregiving robots are often lauded for their potential to assist with geriatric care. While seniors can be wise and mature, possessing valuable life experience, they can also present a variety of ethical challenges, from prevalence of racism and sexism, to troubled relationships, histories of abusive behavior, and aggression, mood swings and impulsive behavior associated with cognitive decline. I draw on Confucian ethics, especially the concept of filial piety, to address these issues. Confucian scholars have developed a rich set of theoretical (...)
  35. The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights. Tobias Flattery - 2023 - AI and Ethics.
    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or (...)
  36. A principlist-based study of the ethical design and acceptability of artificial social agents. Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI software-driven entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical principles (...)
  37. Connected and Automated Vehicles: Integrating Engineering and Ethics. Fabio Fossa & Federico Cheli (eds.) - 2023 - Cham: Springer.
    This book reports on theoretical and practical analyses of the ethical challenges connected to driving automation. It also aims at discussing issues that have arisen from the European Commission 2020 report “Ethics of Connected and Automated Vehicles. Recommendations on Road Safety, Privacy, Fairness, Explainability and Responsibility”. Gathering contributions by philosophers, social scientists, mechanical engineers, and UI designers, the book discusses key ethical concerns relating to responsibility and personal autonomy, privacy, safety, and cybersecurity, as well as explainability and human-machine interaction. On (...)
  38. Granting negative rights to humanoid robots. Cindy Friedman - 2023 - Frontiers in Artificial Intelligence and Applications 366:145-154.
    The paper argues that we should grant negative rights to humanoid robots. These are rights that relate to non-interference e.g., freedom from violence, or freedom from discrimination. Doing so will prevent moral degradation to our human society. The consideration of robot moral status has seen a progression towards the consideration of robot rights. This is a controversial debate, with many scholars seeing the consideration of robot rights in black and white. It is, however, valuable to take a nuanced approach. This (...)
  39. Artificial intelligence ELSI score for science and technology: a comparison between Japan and the US. Tilman Hartwig, Yuko Ikkatai, Naohiro Takanashi & Hiromi M. Yokoyama - 2023 - AI and Society 38 (4):1609-1626.
    Artificial intelligence (AI) has become indispensable in our lives. The development of a quantitative scale for AI ethics is necessary for a better understanding of public attitudes toward AI research ethics and to advance the discussion on using AI within society. For this study, we developed an AI ethics scale based on AI-specific scenarios. We investigated public attitudes toward AI ethics in Japan and the US using online questionnaires. We designed a test set using four dilemma scenarios and questionnaire items (...)
  40. Make Them Rare or Make Them Care: Artificial Intelligence and Moral Cost-Sharing. Blake Hereth & Nicholas Evans - 2023 - In Daniel Schoeni, Tobias Vestner & Kevin Govern (eds.), Ethical Dilemmas in the Global Defense Industry. Oxford University Press.
    The use of autonomous weaponry in warfare has increased substantially over the last twenty years and shows no sign of slowing. Our chapter raises a novel objection to the implementation of autonomous weapons, namely, that they eliminate moral cost-sharing. To grasp the basics of our argument, consider the case of uninhabited aerial vehicles that act autonomously (i.e., LAWS). Imagine that a LAWS terminates a military target and that five civilians die as a side effect of the LAWS bombing. Because LAWS (...)
  41. Implementing AI Ethics in the Design of AI-assisted Rescue Robots. Désirée Martin, Michael W. Schmidt & Rafaela Hillerbrand - 2023 - IEEE International Symposium on Ethics in Engineering, Science, and Technology (Ethics).
    For implementing ethics in AI technology, there are at least two major ethical challenges. First, there are various competing AI ethics guidelines, and consequently there is a need for a systematic overview of the relevant values that should be considered. Second, once the relevant values have been identified, there is a need for an indicator system that helps to assess whether certain design features are positively or negatively affecting their implementation. This indicator system will vary with regard to specific forms of (...)
  42. (1 other version) African Reasons Why Artificial Intelligence Should Not Maximize Utility (Repr.). Thaddeus Metz - 2023 - In Aribiah Attoe, Samuel Segun, Victor Nweke & John-Bosco Umezurike (eds.), Conversations on African Philosophy of Mind, Consciousness and AI. Springer. pp. 139-152.
    Reprint of a chapter first appearing in African Values, Ethics, and Technology: Questions, Issues, and Approaches (2021).
  43. Review of Sven Nyholm’s "Humans and Robots: Ethics, Agency, and Anthropomorphism". London: Rowman and Littlefield International, 2020. [REVIEW] Diego Morales - 2023 - Journal of Ethics and Emerging Technologies 33 (1):1-5.
    Book review of Sven Nyholm's "Humans and Robots: Ethics, Agency and Anthropomorphism".
  44. Accountability in Artificial Intelligence: What It Is and How It Works. Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 1:1-12.
    Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, process, (...)
  45. This is technology ethics: an introduction. Sven Nyholm - 2023 - Hoboken: Wiley-Blackwell.
    In the Technology Age, innovations in medical, communications, and weapons technologies have given rise to many new ethical questions: Are technologies always value-neutral tools? Are human values and human prejudices sometimes embedded in technologies? Should we merge with the technologies we use? Is it ethical to use autonomous weapons systems in warfare? What should a self-driving car do if it detects an unavoidable crash? Can robots have morally relevant properties? -/- This is Technology Ethics: An Introduction provides an accessible overview (...)
  46. Social Robots and Society. Sven Nyholm, Cindy Friedman, Michael T. Dale, Anna Puzio, Dina Babushkina, Guido Lohr, Bart Kamphorst, Arthur Gwagwa & Wijnand IJsselsteijn - 2023 - In Ibo van de Poel (ed.), Ethics of Socially Disruptive Technologies: An Introduction. Cambridge, UK: Open Book Publishers. pp. 53-82.
    Advancements in artificial intelligence and (social) robotics raise pertinent questions as to how these technologies may help shape the society of the future. The main aim of the chapter is to consider the social and conceptual disruptions that might be associated with social robots, and humanoid social robots in particular. This chapter starts by comparing the concepts of robots and artificial intelligence and briefly explores the origins of these expressions. It then explains the definition of a social robot, as well (...)
  47. Ethical Issues with Artificial Ethics Assistants. Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
  48. Challenges for ‘Community’ in Science and Values: Cases from Robotics Research. Charles H. Pence & Daniel J. Hicks - 2023 - Humana.Mente Journal of Philosophical Studies 16 (44):1-32.
    Philosophers of science often make reference — whether tacitly or explicitly — to the notion of a scientific community. Sometimes, such references are useful to make our object of analysis tractable in the philosophy of science. For others, tracking or understanding particular features of the development of science proves to be tied to notions of a scientific community either as a target of theoretical or social intervention. We argue that the structure of contemporary scientific research poses two unappreciated, or at (...)
  49. Authenticity and co-design: On responsibly creating relational robots for children. Milo Phillips-Brown, Marion Boulicault, Jacqueline Kory-Westland, Stephanie Nguyen & Cynthia Breazeal - 2023 - In Mizuko Ito, Remy Cross, Karthik Dinakar & Candice Odgers (eds.), Algorithmic Rights and Protections for Children. MIT Press. pp. 85-121.
    Meet Tega. Blue, fluffy, and AI-enabled, Tega is a relational robot: a robot designed to form relationships with humans. Created to aid in early childhood education, Tega talks with children, plays educational games with them, solves puzzles, and helps in creative activities like making up stories and drawing. Children are drawn to Tega, describing him as a friend, and attributing thoughts and feelings to him ("he's kind," "if you just left him here and nobody came to play with him, he (...)
  50. When the Digital Continues After Death: Ethical Perspectives on Death Tech and the Digital Afterlife. Anna Puzio - 2023 - Communicatio Socialis 56 (3):427-436.
    Nothing seems as certain as death. However, what if life continues digitally after death? Companies and initiatives such as Amazon, Storyfile, Here After AI, Forever Identity and LifeNaut are dedicated to precisely this objective: using avatars, records, and other digital content of the deceased, they strive to enable a digital continuation of life. The deceased live on digitally and, at times, can even appear very much alive, perhaps too alive. This article explores the ethical implications of these technologies, commonly known (...)