Results for 'Robot morality'

998 found
  1. Robot Morals and Human Ethics. Wendell Wallach - 2010 - Teaching Ethics 11 (1):87-92.
    Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, (...)
    25 citations
  2. Robotic Morals. Stephen R. L. Clark - 1988 - Cogito 2 (2):20-22.
  3. Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism. John Danaher - 2020 - Science and Engineering Ethics 26 (4):2023-2049.
    Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory – ‘ethical behaviourism’ – which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position onboard, it is argued that the performative threshold that robots need (...)
    72 citations
  4. Moral Machines: Teaching Robots Right From Wrong. Wendell Wallach & Colin Allen - 2008 - New York, US: Oxford University Press.
    Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast paced tour through the latest thinking about philosophical ethics and artificial intelligence, the authors argue (...)
    191 citations
  5. The Moral Standing of Social Robots: Untapped Insights from Africa. Nancy S. Jecker, Caesar A. Atiure & Martin Odei Ajei - 2022 - Philosophy and Technology 35 (2):1-22.
    This paper presents an African relational view of social robots’ moral standing which draws on the philosophy of ubuntu. The introduction places the question of moral standing in historical and cultural contexts. Section 2 demonstrates an ubuntu framework by applying it to the fictional case of a social robot named Klara, taken from Ishiguro’s novel, Klara and the Sun. We argue that an ubuntu ethic assigns moral standing to Klara, based on her relational qualities and pro-social virtues. Section 3 (...)
    11 citations
  6. Robot rights? Towards a social-relational justification of moral consideration. Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):209-221.
    Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to be able to grant some degree of moral consideration (...)
    101 citations
  7. When is a robot a moral agent? John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The (...)
    72 citations
  8. Robotic Nudges for Moral Improvement through Stoic Practice. Michał Klincewicz - 2019 - Techné: Research in Philosophy and Technology 23 (3):425-455.
    This paper offers a theoretical framework that can be used to derive viable engineering strategies for the design and development of robots that can nudge people towards moral improvement. The framework relies on research in developmental psychology and insights from Stoic ethics. Stoicism recommends contemplative practices that over time help one develop dispositions to behave in ways that improve the functioning of mechanisms that are constitutive of moral cognition. Robots can nudge individuals towards these practices and can therefore help develop (...)
    9 citations
  9. Robots as Malevolent Moral Agents: Harmful Behavior Results in Dehumanization, Not Anthropomorphism. Aleksandra Swiderska & Dennis Küster - 2020 - Cognitive Science 44 (7):e12872.
    A robot's decision to harm a person is sometimes considered to be the ultimate proof of it gaining a human‐like mind. Here, we contrasted predictions about attribution of mental capacities from moral typecasting theory, with the denial of agency from dehumanization literature. Experiments 1 and 2 investigated mind perception for intentionally and accidentally harmful robotic agents based on text and image vignettes. Experiment 3 disambiguated agent intention (malevolent and benevolent), and additionally varied the type of agent (robotic and human) (...)
    1 citation
  10. Moral Responsibility of Robots and Hybrid Agents. Raul Hakli & Pekka Mäkelä - 2019 - The Monist 102 (2):259-275.
    We study whether robots can satisfy the conditions of an agent fit to be held morally responsible, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. We employ Mele’s history-sensitive account of autonomy and responsibility to argue that even if robots were to have all the capacities required of moral agency, their history would deprive them from autonomy in a responsibility-undermining (...)
    28 citations
  11. Children-Robot Friendship, Moral Agency, and Aristotelian Virtue Development. Mihaela Constantinescu, Radu Uszkai, Constantin Vica & Cristina Voinea - 2022 - Frontiers in Robotics and AI 9.
    Social robots are increasingly developed for the companionship of children. In this article we explore the moral implications of children-robot friendships using the Aristotelian framework of virtue ethics. We adopt a moderate position and argue that, although robots cannot be virtue friends, they can nonetheless enable children to exercise ethical and intellectual virtues. The Aristotelian requirements for true friendship apply only partly to children: unlike adults, children relate to friendship as an educational play of exploration, which is constitutive of (...)
  12. Integrating robot ethics and machine morality: the study and design of moral competence in robots. Bertram F. Malle - 2016 - Ethics and Information Technology 18 (4):243-256.
    Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like (...)
    20 citations
  13. Robot Lies in Health Care: When Is Deception Morally Permissible? Andreas Matthias - 2015 - Kennedy Institute of Ethics Journal 25 (2):169-192.
    Autonomous robots are increasingly interacting with users who have limited knowledge of robotics and are likely to have an erroneous mental model of the robot’s workings, capabilities, and internal structure. The robot’s real capabilities may diverge from this mental model to the extent that one might accuse the robot’s manufacturer of deceiving the user, especially in cases where the user naturally tends to ascribe exaggerated capabilities to the machine (e.g. conversational systems in elder-care contexts, or toy robots (...)
    10 citations
  14. Toward safe AI. Andres Morales-Forero, Samuel Bassetto & Eric Coatanea - 2023 - AI and Society 38 (2):685-696.
    Since some AI algorithms with high predictive power have impacted human integrity, safety has become a crucial challenge in adopting and deploying AI. Although it is impossible to prevent an algorithm from failing in complex tasks, it is crucial to ensure that it fails safely, especially if it is a critical system. Moreover, due to AI’s unbridled development, it is imperative to minimize the methodological gaps in these systems’ engineering. This paper uses the well-known Box-Jenkins method for statistical modeling as (...)
    1 citation
  15. Artificial Morality: Virtuous Robots for Virtual Games. Peter Danielson - 1992 - London: Routledge.
    This book explores the role of artificial intelligence in the development of a claim that morality is person-made and rational. Professor Danielson builds moral robots that do better than amoral competitors in a tournament of games like the Prisoner’s Dilemma and Chicken. The book thus engages in current controversies over the adequacy of the received theory of rational choice. It sides with Gauthier and McClennan, who extend the devices of rational choice to include moral constraint. Artificial Morality goes (...)
    32 citations
  16. Moral appearances: emotions, robots, and human morality. [REVIEW] Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):235-241.
    Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue (...)
    46 citations
  17. Artificial Morality: Virtuous Robots for Virtual Games. Peter Danielson - 1992 - Routledge.
    This book explores the role of artificial intelligence in the development of a claim that morality is person-made and rational. Professor Danielson builds moral robots that do better than amoral competitors in a tournament of games like the Prisoner’s Dilemma and Chicken. The book thus engages in current controversies over the adequacy of the received theory of rational choice. It sides with Gauthier and McClennan, who extend the devices of rational choice to include moral constraint. _Artificial Morality_ goes further, (...)
    22 citations
  18. Moral difference between humans and robots: paternalism and human-relative reason. Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct (...)
  19. Building Moral Robots: Ethical Pitfalls and Challenges. John-Stewart Gordon - 2020 - Science and Engineering Ethics 26 (1):141-157.
    This paper examines the ethical pitfalls and challenges that non-ethicists, such as researchers and programmers in the fields of computer science, artificial intelligence and robotics, face when building moral machines. Whether ethics is “computable” depends on how programmers understand ethics in the first place and on the adequacy of their understanding of the ethical problems and methodological challenges in these fields. Researchers and programmers face at least two types of problems due to their general lack of ethical knowledge or expertise. (...)
    5 citations
  20. The rise of the robots and the crisis of moral patiency. John Danaher - 2019 - AI and Society 34 (1):129-136.
    This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, (...)
    30 citations
  21. Sharing Moral Responsibility with Robots: A Pragmatic Approach. Gordana Dodig Crnkovic & Daniel Persson - 2008 - In Holst, Per Kreuger & Peter Funk (eds.), Frontiers in Artificial Intelligence and Applications Volume 173. IOS Press Books.
    Roboethics is a recently developed field of applied ethics which deals with the ethical aspects of technologies such as robots, ambient intelligence, direct neural interfaces and invasive nano-devices and intelligent soft bots. In this article we look specifically at the issue of (moral) responsibility in artificial intelligent systems. We argue for a pragmatic approach, where responsibility is seen as a social regulatory mechanism. We claim that having a system which takes care of certain tasks intelligently, learning from experience and making (...)
    5 citations
  22. The morality of autonomous robots. Aaron M. Johnson & Sidney Axinn - 2013 - Journal of Military Ethics 12 (2):129-141.
    While there are many issues to be raised in using lethal autonomous robotic weapons (beyond those of remotely operated drones), we argue that the most important question is: should the decision to take a human life be relinquished to a machine? This question is often overlooked in favor of technical questions of sensor capability, operational questions of chain of command, or legal questions of sovereign borders. We further argue that the answer must be ‘no’ and offer several reasons for banning (...)
    22 citations
  23. Social Robotics as Moral Education? Fighting Discrimination Through the Design of Social Robots. Fabio Fossa - 2022 - In Pekka Mäkelä, Raul Hakli & Joanna Seibt (eds.), Social Robots in Social Institutions. Proceedings of Robophilosophy’22. Amsterdam: IOS Press. pp. 184-193.
    Recent research in the field of social robotics has shed light on the considerable role played by biases in the design of social robots. Cues that trigger widespread biased expectations are implemented in the design of social robots to increase their familiarity and boost interaction quality. Ethical discussion has focused on the question concerning the permissibility of leveraging social biases to meet the design goals of social robotics. As a result, integrating ethically problematic social biases in the design of robots-such (...)
    1 citation
  24. The Moral Status of Social Robots: A Pragmatic Approach. Paul Showler - 2024 - Philosophy and Technology 37 (2):1-22.
    Debates about the moral status of social robots (SRs) currently face a second-order, or metatheoretical impasse. On the one hand, moral individualists argue that the moral status of SRs depends on their possession of morally relevant properties. On the other hand, moral relationalists deny that we ought to attribute moral status on the basis of the properties that SRs instantiate, opting instead for other modes of reflection and critique. This paper develops and defends a pragmatic approach which aims to reconcile (...)
    2 citations
  25. Robots as moral environments. Tomislav Furlanis, Takayuki Kanda & Dražen Brščić - forthcoming - AI and Society:1-19.
    In this philosophical exploration, we investigate the concept of robotic moral environment interaction. The common view understands moral interaction to occur between agents endowed with ethical and interactive capacities. However, recent developments in moral philosophy argue that moral interaction also occurs in relation to the environment. Here conditions and situations of the environment contribute to human moral cognition and the formation of our moral experiences. Based on this philosophical position, we imagine robots interacting as moral environments—a novel conceptualization of human–robot moral interaction with an inherent capacity for moral augmentation. To explicate this idea, we first define moral environments as moral systems providing moral affordances. We then intuit and explicate two constitutive conditions of moral environments: the environment’s moral ambiance and its moral atmosphere, and compare them with real-life cases of moral environments—the moral landmarks. Based on these explications, we construct several explanatory cases to illustrate robots interacting as moral environments. Lastly, we set out to evaluate the ethical challenges and wider social ramifications of using moral environment robots in the public space.
  26. On the moral responsibility of military robots. Thomas Hellström - 2013 - Ethics and Information Technology 15 (2):99-107.
    This article discusses mechanisms and principles for assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce the concept autonomous power as a new concept, and use it to identify the type of robots that call for moral considerations. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for assignment of moral responsibility to robots. As technological development will lead to robots with increasing autonomous power, we should be (...)
    36 citations
  27. Can Humanoid Robots be Moral? Sanjit Chakraborty - 2018 - Ethics in Science and Environmental Politics 18:49-60.
    The concept of morality underpins the moral responsibility that not only depends on the outward practices (or ‘output’, in the case of humanoid robots) of the agents but on the internal attitudes (‘input’) that rational and responsible intentioned beings generate. The primary question that has initiated extensive debate, i.e. ‘Can humanoid robots be moral?’, stems from the normative outlook where morality includes human conscience and socio-linguistic background. This paper advances the thesis that the conceptions of morality and (...)
    1 citation
  28. Is it time for robot rights? Moral status in artificial entities. Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we (...)
    20 citations
  29. Implementing moral decision making faculties in computers and robots. Wendell Wallach - 2008 - AI and Society 22 (4):463-475.
    The challenge of designing computer systems and robots with the ability to make moral judgments is stepping out of science fiction and moving into the laboratory. Engineers and scholars, anticipating practical necessities, are writing articles, participating in conference workshops, and initiating a few experiments directed at substantiating rudimentary moral reasoning in hardware and software. The subject has been designated by several names, including machine ethics, machine morality, artificial morality, or computational morality. Most references to the challenge elucidate (...)
    10 citations
  30. When Morals Ain’t Enough: Robots, Ethics, and the Rules of the Law. Ugo Pagallo - 2017 - Minds and Machines 27 (4):625-638.
    No single moral theory can instruct us as to whether and to what extent we are confronted with legal loopholes, e.g. whether or not new legal rules should be added to the system in the criminal law field. This question on the primary rules of the law appears crucial for today’s debate on roboethics and still, goes beyond the expertise of robo-ethicists. On the other hand, attention should be drawn to the secondary rules of the law: The unpredictability of robotic (...)
    5 citations
  31. Sympathy for Dolores: Moral Consideration for Robots Based on Virtue and Recognition. Massimiliano L. Cappuccio, Anco Peeters & William McDonald - 2019 - Philosophy and Technology 33 (1):9-31.
    This paper motivates the idea that social robots should be credited as moral patients, building on an argumentative approach that combines virtue ethics and social recognition theory. Our proposal answers the call for a nuanced ethical evaluation of human-robot interaction that does justice to both the robustness of the social responses solicited in humans by robots and the fact that robots are designed to be used as instruments. On the one hand, we acknowledge that the instrumental nature of robots (...)
    13 citations
  32. Can a Robot Pursue the Good? Exploring Artificial Moral Agency. Amy Michelle DeBaets - 2014 - Journal of Evolution and Technology 24 (3):76-86.
    In this essay I will explore an understanding of the potential moral agency of robots, arguing that the key characteristics of physical embodiment, adaptive learning, empathy in action, and a teleology toward the good are the primary necessary components for a machine to become a moral agent. In this context, other possible options will be rejected as necessary for moral agency, including simplistic notions of intelligence, computational power, and rule-following, complete freedom, a sense of God, and an immaterial soul. I (...)
    5 citations
  33. Ethics for Robots: how to design a moral algorithm. Derek Leben - 2018 - Routledge.
    Ethics for Robots describes and defends a method for designing and evaluating ethics algorithms for autonomous machines, such as self-driving cars and search and rescue drones. Derek Leben argues that such algorithms should be evaluated by how effectively they accomplish the problem of cooperation among self-interested organisms, and therefore, rather than simulating the psychological systems that have evolved to solve this problem, engineers should be tackling the problem itself, taking relevant lessons from our moral psychology. Leben draws on the moral (...)
    11 citations
  34. On the moral permissibility of robot apologies. Makoto Kureha - forthcoming - AI and Society:1-11.
    Robots that incorporate the function of apologizing have emerged in recent years. This paper examines the moral permissibility of making robots apologize. First, I characterize the nature of apology based on analyses conducted in multiple scholarly domains. Next, I present a prima facie argument that robot apologies are not permissible because they may harm human societies by inducing the misattribution of responsibility. Subsequently, I respond to a possible response to the prima facie objection based on the interpretation that attributing (...)
  35. On the moral status of social robots: considering the consciousness criterion. Kestutis Mosakas - 2021 - AI and Society 36 (2):429-443.
    While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with (...)
    17 citations
  36. Not robots: children's perspectives on authenticity, moral agency and stimulant drug treatments. Ilina Singh - 2013 - Journal of Medical Ethics 39 (6):359-366.
    In this article, I examine children's reported experiences with stimulant drug treatments for attention deficit hyperactivity disorder in light of bioethical arguments about the potential threats of psychotropic drugs to authenticity and moral agency. Drawing on a study that involved over 150 families in the USA and the UK, I show that children are able to report threats to authenticity, but that the majority of children are not concerned with such threats. On balance, children report that stimulants improve their capacity (...)
    17 citations
  37. Can humanoid robots be moral? Sanjit Chakraborty - 2018 - Ethics in Science and Environmental Politics 18:49-60.
    The concept of morality underpins the moral responsibility that not only depends on the outward practices (or ‘output,’ in the case of humanoid robots) of the agents but on the internal attitudes (‘input’) that rational and responsible intentioned beings generate. The primary question that has initiated the extensive debate, i.e., ‘Can humanoid robots be moral?’, stems from the normative outlook where morality includes human conscience and socio-linguistic background. This paper advances the thesis that the conceptions of morality (...)
    3 citations
  38. Empathic responses and moral status for social robots: an argument in favor of robot patienthood based on K. E. Løgstrup. Simon N. Balle - 2022 - AI and Society 37 (2):535-548.
    Empirical research on human–robot interaction has demonstrated how humans tend to react to social robots with empathic responses and moral behavior. How should we ethically evaluate such responses to robots? Are people wrong to treat non-sentient artefacts as moral patients since this rests on anthropomorphism and ‘over-identification’—or correct since spontaneous moral intuition and behavior toward nonhumans is indicative for moral patienthood, such that social robots become our ‘Others’? In this research paper, I weave extant HRI studies that demonstrate (...)
    2 citations
  39. Can humanoid robots be moral? Sanjit Chakraborty - 2018 - Ethics in Science and Environmental Politics 18:49-60.
    The concept of morality underpins the moral responsibility that not only depends on the outward practices (or ‘output’, in the case of humanoid robots) of the agents but on the internal attitudes (‘input’) that rational and responsible intentioned beings generate. The primary question that has initiated extensive debate, i.e. ‘Can humanoid robots be moral?’, stems from the normative outlook where morality includes human conscience and socio-linguistic background. This paper advances the thesis that the conceptions of morality and (...)
    3 citations
  40. Robots as moral agents? Catrin Misselhorn - 2013 - In Frank Rövekamp & Friederike Bosse (eds.), Ethics in Science and Society: German and Japanese Views. IUDICIUM Verlag.
     
    6 citations
  41. An explanation space to align user studies with the technical development of Explainable AI. Garrick Cabour, Andrés Morales-Forero, Élise Ledoux & Samuel Bassetto - 2023 - AI and Society 38 (2):869-887.
    Providing meaningful and actionable explanations for end-users is a situated problem requiring the intersection of multiple disciplines to address social, operational, and technical challenges. However, the explainable artificial intelligence community has not commonly adopted or created tangible design tools that allow interdisciplinary work to develop reliable AI-powered solutions. This paper proposes a formative architecture that defines the explanation space from a user-inspired perspective. The architecture comprises five intertwined components to outline explanation requirements for a task: (1) the end-users’ mental models, (...)
  42. Moral Status and Intelligent Robots. John-Stewart Gordon & David J. Gunkel - 2021 - Southern Journal of Philosophy 60 (1):88-117.
    8 citations
  43. Robots with Moral Status? David DeGrazia - 2022 - Perspectives in Biology and Medicine 65 (1):73-88.
  44. How to do robots with words: a performative view of the moral status of humans and nonhumans. Mark Coeckelbergh - 2023 - Ethics and Information Technology 25 (3):1-9.
    Moral status arguments are typically formulated as descriptive statements that tell us something about the world. But philosophy of language teaches us that language can also be used performatively: we do things with words and use words to try to get others to do things. Does and should this theory extend to what we say about moral status, and what does it mean? Drawing on Austin, Searle, and Butler and further developing relational views of moral status, this article explores what (...)
    1 citation
  45. Robot minds and human ethics: the need for a comprehensive model of moral decision making. [REVIEW] Wendell Wallach - 2010 - Ethics and Information Technology 12 (3):243-250.
    Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, (...)
    24 citations
  46. Blame-Laden Moral Rebukes and the Morally Competent Robot: A Confucian Ethical Perspective. Qin Zhu, Tom Williams, Blake Jackson & Ruchen Wen - 2020 - Science and Engineering Ethics 26 (5):2511-2526.
    Empirical studies have suggested that language-capable robots have the persuasive power to shape the shared moral norms based on how they respond to human norm violations. This persuasive power presents cause for concern, but also the opportunity to persuade humans to cultivate their own moral development. We argue that a truly socially integrated and morally competent robot must be willing to communicate its objection to humans’ proposed violations of shared norms by using strategies such as blame-laden rebukes, even if (...)
    3 citations
  47. The Moral Status of AGI-enabled Robots: A Functionality-Based Analysis. Mubarak Hussain - 2023 - Symposion: Theoretical and Applied Inquiries in Philosophy and Social Sciences 10 (1):105-127.
  48. Can robots be moral? Laszlo Versenyi - 1974 - Ethics 84 (3):248-259.
  49. ‘How could you even ask that?’ Moral considerability, uncertainty and vulnerability in social robotics. Alexis Elder - 2020 - Journal of Sociotechnical Critique 1 (1):1-23.
    When it comes to social robotics (robots that engage human social responses via “eyes” and other facial features, voice-based natural-language interactions, and even evocative movements), ethicists, particularly in European and North American traditions, are divided over whether and why they might be morally considerable. Some argue that moral considerability is based on internal psychological states like consciousness and sentience, and debate about thresholds of such features sufficient for ethical consideration, a move sometimes criticized for being overly dualistic in its framing (...)
  50. Why Care About Robots? Empathy, Moral Standing, and the Language of Suffering. Mark Coeckelbergh - 2018 - Kairos 20 (1):141-158.
    This paper tries to understand the phenomenon that humans are able to empathize with robots and the intuition that there might be something wrong with “abusing” robots by discussing the question regarding the moral standing of robots. After a review of some relevant work in empirical psychology and a discussion of the ethics of empathizing with robots, a philosophical argument concerning the moral standing of robots is made that questions distant and uncritical moral reasoning about entities’ properties and that recommends (...)
    16 citations
1 — 50 / 998