Machine Ethics

Edited by Jeffrey White (Okinawa Institute of Science and Technology, Universidade Nova de Lisboa)
About this topic
Summary: In the early 2000s, James Moor set out four classes of ethical machine, advising that the near-term focus of machine ethics research should be on "explicit ethical agents": agents designed from an understanding of human theoretical ethics to operate in accordance with those theoretical principles. Beyond this class, the ultimate aim of inquiry into machine ethics is to understand human morality and natural science well enough to engineer a fully autonomous, moral machine. This sub-category focuses on supporting this inquiry. Work on other sorts of computer applications and their ethical impacts appears in different categories, including Ethics of Artificial Intelligence, Moral Status of Artificial Systems, Robot Ethics, Algorithmic Fairness, Computer Ethics, and others. Machine ethics is ethics, and it is also a study of machines. Machine ethicists wonder why people, human beings, and other organisms do what they do when they do it, and what makes these things the right things to do; in this, they are ethicists. In addition, machine ethicists work out how to articulate such processes in an independent artificial system (rather than by parenting a biological child or training a human minion, the traditional alternatives). So, machine ethics researchers engage directly with rapidly advancing work in cognitive science and psychology alongside that in robotics and AI, applied ethics such as medical ethics and philosophy of mind, computer modeling and data science, and so on. Drawing from so many disciplines, each advancing rapidly and with its own impacts, machine ethics sits in the middle of a maelstrom of current research activity. Advances in materials science and physical chemistry leverage advances in cognitive science and neurology, which feed advances in AI and robotics, including, for example, with regard to interpretability. Putting this all together is the challenge for the machine ethics researcher.
This sub-category is intended to support efforts to meet this challenge.  
Key works: Allen et al. 2005, Wallach et al. 2008, Tonkens 2012, Tonkens 2009, Müller & Bostrom 2014, White 2013, White 2015
Introductions: Anderson & Anderson 2007, Segun 2021, Powers 2011, Moor 2006

Contents
532 found. Showing 1–50.
  1. Ethics and Safety of Artificial Intelligence: practical tools for creating "good" models. Nicholas Kluge Corrêa - manuscript
    The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui with the aim of promoting awareness of the importance of the ethical implementation and regulation of AI. AIRES is today an organization with chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil). AIRES at PUCRS is (...)
  2. Can a robot lie? Markus Kneer - manuscript
    The potential capacity for robots to deceive has received considerable attention recently. Many papers focus on the technical possibility for a robot to engage in deception for beneficial purposes (e.g. in education or health). In this short experimental paper, I focus on a more paradigmatic case: Robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment with 399 participants which explores the following three (...)
  3. Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures through AI Systems. Alex John London & Hoda Heidari - manuscript
    The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals. Drawing on Sen and Nussbaum's capability approach, we present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders. Such systems enhance stakeholders' ability to advance their life plans and well-being while upholding their fundamental rights. We characterize two necessary conditions for morally (...)
  4. A Talking Cure for Autonomy Traps: How to share our social world with chatbots. Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves accountable (...)
  5. First human upload as AI Nanny. Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, but it is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  6. Literature Review: What Artificial General Intelligence Safety Researchers Have Written About the Nature of Human Values. Alexey Turchin & David Denkenberger - manuscript
    Abstract: The field of artificial general intelligence (AGI) safety is quickly growing. However, the nature of human values, with which future AGI should be aligned, is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, but there are contradictions. This article presents an overview of what AGI safety researchers have written about the nature of human values, up to the beginning of 2019. 21 authors were overviewed, and some of them have several theories. A (...)
  7. Autonomous Reboot: the challenges of artificial moral agency and the ends of Machine Ethics. Jeffrey White - manuscript
    Ryan Tonkens (2009) has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian inspired recipe - both "rational" and "free" - while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical nihilism" due to (...)
  8. Artificial Intelligence Ethics and Safety: practical tools for creating "good" models. Nicholas Kluge Corrêa -
    The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui to promote awareness and the importance of ethical implementation and regulation of AI. AIRES is now an organization with chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil). AIRES at PUCRS is the first international chapter of AIRES, and (...)
  9. Does Predictive Sentencing Make Sense? Clinton Castro, Alan Rubel & Lindsey Schwartz - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper examines the practice of using predictive systems to lengthen the prison sentences of convicted persons when the systems forecast a higher likelihood of re-offense or re-arrest. There has been much critical discussion of technologies used for sentencing, including questions of bias and opacity. However, there hasn’t been a discussion of whether this use of predictive systems makes sense in the first place. We argue that it does not by showing that there is no plausible theory of punishment that (...)
  10. A qualified defense of top-down approaches in machine ethics. Tyler Cook - forthcoming - AI and Society:1-15.
    This paper concerns top-down approaches in machine ethics. It is divided into three main parts. First, I briefly describe top-down design approaches, and in doing so I make clear what those approaches are committed to and what they involve when it comes to training an AI to behave ethically. In the second part, I formulate two underappreciated motivations for endorsing them, one relating to predictability of machine behavior and the other relating to scrutability of machine decision-making. Finally, I present three (...)
  11. Norms and Causation in Artificial Morality. Laura Fearnley - forthcoming - Joint Proceedings of ACM IUI:1-4.
    There has been increasing interest in how to build Artificial Moral Agents (AMAs) that make moral decisions on the basis of causation rather than mere correlation. One promising avenue for achieving this is to use a causal modelling approach. This paper explores an open and important problem with such an approach; namely, the problem of what makes a causal model an appropriate model. I explore why we need to establish criteria for what makes a model appropriate, and offer up such (...)
  12. What makes full artificial agents morally different. Erez Firt - forthcoming - AI and Society:1-10.
    In the research field of machine ethics, we commonly categorize artificial moral agents into four types, with the most advanced referred to as a full ethical agent, or sometimes a full-blown Artificial Moral Agent (AMA). This type has three main characteristics: autonomy, moral understanding and a certain level of consciousness, including intentional mental states, moral emotions such as compassion, the ability to praise and condemn, and a conscience. This paper aims to discuss various aspects of full-blown AMAs and presents the (...)
  13. Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
  14. Machine morality, moral progress, and the looming environmental disaster. Ben Kenward & Thomas Sinclair - forthcoming - Cognitive Computation and Systems.
    The creation of artificial moral systems requires us to make difficult choices about which of varying human value sets should be instantiated. The industry-standard approach is to seek and encode moral consensus. Here we argue, based on evidence from empirical psychology, that encoding current moral consensus risks reinforcing current norms, and thus inhibiting moral progress. However, so do efforts to encode progressive norms. Machine ethics is thus caught between a rock and a hard place. The problem is particularly acute when (...)
  15. Machine Ethics in Care: Could a Moral Avatar Enhance the Autonomy of Care-Dependent Persons? Catrin Misselhorn - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-14.
    It is a common view that artificial systems could play an important role in dealing with the shortage of caregivers due to demographic change. One argument to show that this is also in the interest of care-dependent persons is that artificial systems might significantly enhance user autonomy since they might stay longer in their homes. This argument presupposes that the artificial systems in question do not require permanent supervision and control by human caregivers. For this reason, they need the capacity (...)
  16. Moral Disagreement and Artificial Intelligence. Pamela Robinson - forthcoming - AI and Society:1-14.
    Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without universal agreement about the relevant moral facts. For other kinds of disagreement, it is at least usually obvious what kind of solution is called for. What makes moral disagreement especially challenging is that there are three different ways of handling it. Moral solutions apply a moral theory or related principles and largely ignore the details of the disagreement. (...)
  17. Digital suffering: why it's a problem and how to prevent it. Bradford Saad & Adam Bradley - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    As ever more advanced digital systems are created, it becomes increasingly likely that some of these systems will be digital minds, i.e. digital subjects of experience. With digital minds comes the risk of digital suffering. The problem of digital suffering is that of mitigating this risk. We argue that the problem of digital suffering is a high stakes moral problem and that formidable epistemic obstacles stand in the way of solving it. We then propose a strategy for solving it: Access (...)
  18. Handbook of Research on Machine Ethics and Morality. Steven John Thompson (ed.) - forthcoming - Hershey, PA: IGI-Global.
    This book is dedicated to expert research topics, and analyses of ethics-related inquiry, at the machine ethics and morality level: key players, benefits, problems, policies, and strategies. Gathering some of the leading voices that recognize and understand the complexities and intricacies of human-machine ethics provides a resourceful compendium to be accessed by decision-makers and theorists concerned with identification and adoption of human-machine ethics initiatives, leading to needed policy adoption and reform for human-machine entities, their technologies, and their societal and legal (...)
  19. Augustine and an artificial soul. Jeffrey White - forthcoming - Embodied Intelligence 2023.
    Prior work proposes a view of development of purpose and source of meaning in life as a more or less temporally distal project ideal self-situation in terms of which intermediate situations are experienced and prospects evaluated. This work considers Augustine on ensoulment alongside current work into self as adapted routines to common social regularities of the sort that Augustine found deficient. How can we account for such diversity of self-reported value orientation in terms of common structural dynamics differently developed, embodied (...)
  20. And Then the Hammer Broke: Reflections on Machine Ethics from Feminist Philosophy of Science. Andre Ye - forthcoming - Pacific University Philosophy Conference.
    Vision is an important metaphor in ethical and political questions of knowledge. The feminist philosopher Donna Haraway points out the “perverse” nature of an intrusive, alienating, all-seeing vision (to which we might cry out “stop looking at me!”), but also encourages us to embrace the embodied nature of sight and its promises for genuinely situated knowledge. Current technologies of machine vision – surveillance cameras, drones (for war or recreation), iPhone cameras – are usually construed as instances of the former rather (...)
  21. Can’t Bottom-up Artificial Moral Agents Make Moral Judgements? Robert James M. Boyles - 2024 - Filosofija. Sociologija 35 (1).
    This article examines if bottom-up artificial moral agents are capable of making genuine moral judgements, specifically in light of David Hume’s is-ought problem. The latter underscores the notion that evaluative assertions could never be derived from purely factual propositions. Bottom-up technologies, on the other hand, are those designed via evolutionary, developmental, or learning techniques. In this paper, the nature of these systems is looked into with the aim of preliminarily assessing if there are good reasons to suspect that, on the (...)
  22. Dubito Ergo Sum: Exploring AI Ethics. Viktor Dörfler & Giles Cuthbert - 2024 - HICSS 57: Hawaii International Conference on System Sciences, Honolulu, HI.
    We paraphrase Descartes’ famous dictum in the area of AI ethics where the “I doubt and therefore I am” is suggested as a necessary aspect of morality. Therefore AI, which cannot doubt itself, cannot possess moral agency. Of course, this is not the end of the story. We explore various aspects of the human mind that substantially differ from AI, which includes the sensory grounding of our knowing, the act of understanding, and the significance of being able to doubt ourselves. (...)
  23. ChatGPT: towards AI subjectivity. Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
  24. Moral sensitivity and the limits of artificial moral agents. Joris Graff - 2024 - Ethics and Information Technology 26 (1):1-12.
    Machine ethics is the field that strives to develop ‘artificial moral agents’ (AMAs), artificial systems that can autonomously make moral decisions. Some authors have questioned the feasibility of machine ethics, by questioning whether artificial systems can possess moral competence, or the capacity to reach morally right decisions in various situations. This paper explores this question by drawing on the work of several moral philosophers (McDowell, Wiggins, Hampshire, and Nussbaum) who have characterised moral competence in a manner inspired by Aristotle. Although (...)
  25. ‘Virtue gone nuts’: Machine Ethics in Ian McEwan’s Machines Like Me (2019). Anna Margaretha Horatschek - 2024 - In Prem Saran Satsangi, Anna Margaretha Horatschek & Anand Srivastav (eds.), Consciousness Studies in Sciences and Humanities: Eastern and Western Perspectives. Springer Verlag. pp. 125-132.
    According to a 2016 survey conducted by Müller and Bostrom, 30% of top experts on Artificial Intelligence (AI) expect bad or very bad consequences for humanity, if super-intelligent High/Human-Level Machine Intelligence (HLMI) can be developed. As a counterpoint, machine and robot ethics are being developed in the European Parliament and in international organisations like the Association for the Advancement of Artificial Intelligence (AAAI) and the Institute of Electrical and Electronics Engineers (IEEE) to ensure that AI will benefit mankind. Ian McEwan’s (...)
  26. Exploring Affinity-Based Reinforcement Learning for Designing Artificial Virtuous Agents in Stochastic Environments. Ajay Vishwanath & Christian Omlin - 2024 - In Mina Farmanbar, Maria Tzamtzi, Ajit Kumar Verma & Antorweep Chakravorty (eds.), Frontiers of Artificial Intelligence, Ethics, and Multidisciplinary Applications: 1st International Conference on Frontiers of AI, Ethics, and Multidisciplinary Applications (FAIEMA), Greece, 2023. Springer Nature Singapore. pp. 25-38.
    Artificial virtuous agents are artificial intelligence agents capable of virtuous behavior. Virtues are defined as an excellence in moral character, for example, compassion, honesty, etc. Developing virtues in AI comes under the umbrella of machine ethics research, which aims to embed ethical theories into artificial intelligence systems. We have recently suggested the use of affinity-based reinforcement learning to impart virtuous behavior. Such a technique uses policy regularization on reinforcement learning algorithms, and it has advantages such as interpretability and convergence properties. (...)
  27. Ethical Preferences in the Digital World: The EXOSOUL Questionnaire. Costanza Alfieri, Donatella Donati, Simone Gozzano, Lorenzo Greco & Marco Segala - 2023 - In Paul Lukowicz, Sven Mayer, Janin Koch, John Shawe-Taylor & Ilaria Tiddi (eds.), Ebook: HHAI 2023: Augmenting Human Intellect. IOS Press. pp. 290-99.
  28. Mental time-travel, semantic flexibility, and A.I. ethics. Marcus Arvan - 2023 - AI and Society 38 (6):2577-2596.
    This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed ‘general ethical dilemma analyzer,’ GenEth. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I (...)
  29. Deep Learning Opacity, and the Ethical Accountability of AI Systems. A New Perspective. Gianfranco Basti & Giuseppe Vitiello - 2023 - In Raffaela Giovagnoli & Robert Lowe (eds.), The Logic of Social Practices II. Springer Nature Switzerland. pp. 21-73.
    In this paper we analyse the conditions for attributing to AI autonomous systems the ontological status of “artificial moral agents”, in the context of the “distributed responsibility” between humans and machines in Machine Ethics (ME). In order to address the fundamental issue in ME of the unavoidable “opacity” of their decisions with ethical/legal relevance, we start from the neuroethical evidence in cognitive science. In humans, the “transparency” and then the “ethical accountability” of their actions as responsible moral agents is not (...)
  30. Artificial Dispositions: Investigating Ethical and Metaphysical Issues. William A. Bauer & Anna Marmodoro (eds.) - 2023 - Bloomsbury.
    We inhabit a world not only full of natural dispositions independent of human design, but also artificial dispositions created by our technological prowess. How do these dispositions, found in automation, computation, and artificial intelligence applications, differ metaphysically from their natural counterparts? This collection investigates artificial dispositions: what they are, the roles they play in artificial systems, and how they impact our understanding of the nature of reality, the structure of minds, and the ethics of emerging technologies. It is divided into (...)
  31. Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence. Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  32. When Something Goes Wrong: Who is Responsible for Errors in ML Decision-making? Andrea Berber & Sanja Srećković - 2023 - AI and Society 38 (2):1-13.
    Because of its practical advantages, machine learning (ML) is increasingly used for decision-making in numerous sectors. This paper demonstrates that the integral characteristics of ML, such as semi-autonomy, complexity, and non-deterministic modeling have important ethical implications. In particular, these characteristics lead to a lack of insight and lack of comprehensibility, and ultimately to the loss of human control over decision-making. Errors, which are bound to occur in any decision-making process, may lead to great harm and human rights violations. It is (...)
  33. A Comparative Defense of Self-initiated Prospective Moral Answerability for Autonomous Robot Harm. Marc Champagne & Ryan Tonkens - 2023 - Science and Engineering Ethics 29 (4):1-26.
    As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could (...)
  34. The seven troubles with norm-compliant robots. Tom N. Coggins & Steffen Steinert - 2023 - Ethics and Information Technology 25 (2):1-15.
    Many researchers from robotics, machine ethics, and adjacent fields seem to assume that norms represent good behavior that social robots should learn to benefit their users and society. We would like to complicate this view and present seven key troubles with norm-compliant robots: (1) norm biases, (2) paternalism (3) tyrannies of the majority, (4) pluralistic ignorance, (5) paths of least resistance, (6) outdated norms, and (7) technologically-induced norm change. Because discussions of why norm-compliant robots can be problematic are noticeably absent (...)
  35. Current cases of AI misalignment and their implications for future risks. Leonard Dung - 2023 - Synthese 202 (5):1-23.
    How can one build AI systems such that they pursue the goals their designers want them to pursue? This is the alignment problem. Numerous authors have raised concerns that, as research advances and systems become more powerful over time, misalignment might lead to catastrophic outcomes, perhaps even to the extinction or permanent disempowerment of humanity. In this paper, I analyze the severity of this risk based on current instances of misalignment. More specifically, I argue that contemporary large language models and (...)
  36. Encountering Artificial Intelligence: Ethical and Anthropological Reflections. Matthew J. Gaudet, Paul Scherz, Noreen Herzfeld, Jordan Joseph Wales, Nathan Colaner, Jeremiah Coogan, Mariele Courtois, Brian Cutter, David E. DeCosse, Justin Charles Gable, Brian Green, James Kintz, Cory Andrew Labrecque, Catherine Moon, Anselm Ramelow, John P. Slattery, Ana Margarita Vega, Luis G. Vera, Andrea Vicini & Warren von Eschenbach - 2023 - Eugene, OR: Pickwick Press.
    What does it mean to consider the world of AI through a Christian lens? Rapid developments in AI continue to reshape society, raising new ethical questions and challenging our understanding of the human person. Encountering Artificial Intelligence draws on Pope Francis’ discussion of a culture of encounter and broader themes in Catholic social thought in order to examine how current AI applications affect human relationships in various social spheres and offers concrete recommendations for better implementation. The document also explores questions (...)
  37. Machine Ethics: Do Androids Dream of Being Good People? Gonzalo Génova, Valentín Moreno & M. Rosario González - 2023 - Science and Engineering Ethics 29 (2):1-17.
    Is ethics a computable function? Can machines learn ethics like humans do? If teaching consists in no more than programming, training, indoctrinating… and if ethics is merely following a code of conduct, then yes, we can teach ethics to algorithmic machines. But if ethics is not merely about following a code of conduct or about imitating the behavior of others, then an approach based on computing outcomes, and on the reduction of ethics to the compilation and application of a set (...)
  38. Embodied Experience in Socially Participatory Artificial Intelligence. Mark Graves - 2023 - Zygon (4):928-951.
    As artificial intelligence (AI) becomes progressively more engaged with society, its shift from technical tool to participating in society raises questions about AI personhood. Drawing upon developmental psychology and systems theory, a mediating structure for AI proto-personhood is defined analogous to an early stage of human development. The proposed AI bridges technical, psychological, and theological perspectives on near-future AI and is structured by its hardware, software, computational, and sociotechnical systems through which it experiences its world as embodied (even for putatively (...)
  39. Make Them Rare or Make Them Care: Artificial Intelligence and Moral Cost-Sharing. Blake Hereth & Nicholas Evans - 2023 - In Daniel Schoeni, Tobias Vestner & Kevin Govern (eds.), Ethical Dilemmas in the Global Defense Industry. Oxford University Press.
    The use of autonomous weaponry in warfare has increased substantially over the last twenty years and shows no sign of slowing. Our chapter raises a novel objection to the implementation of autonomous weapons, namely, that they eliminate moral cost-sharing. To grasp the basics of our argument, consider the case of uninhabited aerial vehicles that act autonomously (i.e., LAWS). Imagine that a LAWS terminates a military target and that five civilians die as a side effect of the LAWS bombing. Because LAWS (...)
  40. The Moral Status of AGI-enabled Robots: A Functionality-Based Analysis. Mubarak Hussain - 2023 - Symposion: Theoretical and Applied Inquiries in Philosophy and Social Sciences 10 (1):105-127.
  41. Moral Attribution in Moral Turing Test. Mubarak Hussain - 2023 - International Conference on Computer Ethics: Philosophical Enquiry, May 16-18, 2023, Illinois Institute of Technology, Chicago, USA.
    This paper argues that the Moral Turing Test (MTT), developed by Allen et al. for evaluating morality in AI systems, is inaptly designed. Different versions of the MTT focus on the conversational ability of an agent rather than on the performance of morally significant actions. Arnold and Scheutz also argue against the MTT, stating that without a focus on the performance of morally significant actions, the MTT is insufficient. Morality is mainly about morally relevant actions because it does not matter how good a (...)
  42. Implementing AI Ethics in the Design of AI-assisted Rescue Robots. Désirée Martin, Michael W. Schmidt & Rafaela Hillerbrand - 2023 - IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS).
    For implementing ethics in AI technology, there are at least two major ethical challenges. First, there are various competing AI ethics guidelines and consequently there is a need for a systematic overview of the relevant values that should be considered. Second, if the relevant values have been identified, there is a need for an indicator system that helps assessing if certain design features are positively or negatively affecting their implementation. This indicator system will vary with regard to specific forms of (...)
  43. The case for virtuous robots. Martin Gibert - 2023 - AI and Ethics 3:135-144.
    Is it possible to build virtuous robots? And is it a good idea? In this paper in machine ethics, I offer a positive answer to both questions. Although moral architectures based on deontology and utilitarianism have been most often considered, I argue that a virtue ethics approach may ultimately be more promising to program artificial moral agents (AMA). The basic idea is that a robot should behave as a virtuous person would (or recommend). Now, with the help of machine learning (...)
  44. AI, alignment, and the categorical imperative. Fritz McDonald - 2023 - AI and Ethics 3:337-344.
    Tae Wan Kim, John Hooker, and Thomas Donaldson make an attempt, in recent articles, to solve the alignment problem. As they define the alignment problem, it is the issue of how to give AI systems moral intelligence. They contend that one might program machines with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if such machines obey principles of universalization and autonomy, as well as a deontic utilitarian principle. (...)
  45. Accountability in Artificial Intelligence: What It Is and How It Works. Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 1:1-12.
    Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, process, (...)
  46. Ethical Issues with Artificial Ethics Assistants. Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
  47. The Possibilities of Machine Morality. Jonathan Pengelly - 2023 - Dissertation, Victoria University of Wellington
    This thesis shows morality to be broader and more diverse than its human instantiation. It uses the idea of machine morality to argue for this position. Specifically, it contrasts the possibilities open to humans with those open to machines to meaningfully engage with the moral domain. This contrast identifies distinctive characteristics of human morality, which are not fundamental to morality itself, but constrain our thinking about morality and its possibilities. It also highlights the inherent potential of machine morality to (...)
  48. Authenticity and co-design: On responsibly creating relational robots for children. Milo Phillips-Brown, Marion Boulicault, Jacqueline Kory-Westland, Stephanie Nguyen & Cynthia Breazeal - 2023 - In Mizuko Ito, Remy Cross, Karthik Dinakar & Candice Odgers (eds.), Algorithmic Rights and Protections for Children. MIT Press. pp. 85-121.
    Meet Tega. Blue, fluffy, and AI-enabled, Tega is a relational robot: a robot designed to form relationships with humans. Created to aid in early childhood education, Tega talks with children, plays educational games with them, solves puzzles, and helps in creative activities like making up stories and drawing. Children are drawn to Tega, describing him as a friend, and attributing thoughts and feelings to him ("he's kind," "if you just left him here and nobody came to play with him, he (...)
  49. Immune moral models? Pro-social rule breaking as a moral enhancement approach for ethical AI. Rajitha Ramanayake, Philipp Wicke & Vivek Nallur - 2023 - AI and Society 38 (2):801-813.
    We are moving towards a future where Artificial Intelligence (AI) based agents make many decisions on behalf of humans. From healthcare decision-making to social media censoring, these agents face problems, and make decisions with ethical and societal implications. Ethical behaviour is a critical characteristic that we would like in a human-centric AI. A common observation in human-centric industries, like the service industry and healthcare, is that their professionals tend to break rules, if necessary, for pro-social reasons. This behaviour among humans (...)
  50. Moral machines: an impossible challenge? (Macchine morali: una sfida impossibile?). Luca Alberto Rappuoli - 2023 - Scintille 1 (1):71-74.
    This short essay offers an overview of the philosophical difficulties involved in answering the question 'Is it possible to develop an AI system capable of moral action?'.