About this topic
Summary Machine ethics concerns artificial moral agency. Machine ethicists ask why agents, human beings and other organisms among them, do what they do when they do it, what makes those actions the right ones, and how this process might (ideally) be articulated in an independent artificial system rather than in a biological child. This category therefore includes entries on agency in general and on moral agency in particular. On the empirical side, machine ethicists interpret rapidly advancing work in cognitive science and psychology, alongside work in robotics and AI, through traditional ethical frameworks, helping to frame robotics research in terms of ethical theory. For example, intelligent machines are most often modeled after biological systems, and in any event are often "made sense of" in biological terms, so there is interpretive and integrative work to be done here. More theoretical work asks what relative status should be afforded artificial agents given differences in autonomy, origin, complexity, and corporate-institutional and legal standing, and inquires into the essence of consciousness and of moral agency regardless of natural or artificial instantiation. So understood, machine ethics sits in the middle of a maelstrom of current research activity, with direct bearing on traditional ethics and extensive popular implications as well.
Key works Allen et al. 2005; Wallach et al. 2008; Tonkens 2012; Tonkens 2009; Müller & Bostrom 2014; White 2013; White 2015
Related categories

347 entries found; showing 1–50.
  1. added 2020-05-18
    Applying a Principle of Explicability to AI Research in Africa: Should We Do It?Mary Carman & Benjamin Rosman - forthcoming - Ethics and Information Technology.
  2. added 2020-05-01
    Incorporating Ethics Into Artificial Intelligence.Amitai Etzioni & Oren Etzioni - 2017 - The Journal of Ethics 21 (4):403-418.
    This article reviews the reasons scholars hold that driverless cars and many other AI equipped machines must be able to make ethical decisions, and the difficulties this approach faces. It then shows that cars have no moral agency, and that the term ‘autonomous’, commonly applied to these machines, is misleading, and leads to invalid conclusions about the ways these machines can be kept ethical. The article’s most important claim is that a significant part of the challenge posed by AI-equipped machines (...)
  3. added 2020-04-24
    Ethics of Artificial Intelligence.Vincent C. Müller - forthcoming - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 1-20.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  4. added 2020-04-13
    Shared Moral Foundations of Embodied Artificial Intelligence.Joe Cruz - 2019 - In Vincent Conitzer, Gillian Hadfield & Shannon Vallor (eds.), AIES '19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. pp. 139-146.
    Sophisticated AI's will make decisions about how to respond to complex situations, and we may wonder whether those decisions will align with the moral values of human beings. I argue that pessimistic worries about this value alignment problem are overstated. In order to achieve intelligence in its full generality and adaptiveness, cognition in AI's will need to be embodied in the sense of the Embodied Cognition research program. That embodiment will yield AI's that share our moral foundations, namely coordination, sociality, (...)
  5. added 2020-04-06
    Artificial Beings Worthy of Moral Consideration in Virtual Environments: An Analysis of Ethical Viability.Stefano Gualeni - 2020 - Journal of Virtual Worlds Research 13 (1).
    This article explores whether and under which circumstances it is ethically viable to include artificial beings worthy of moral consideration in virtual environments. In particular, the article focuses on virtual environments such as those in digital games and training simulations – interactive and persistent digital artifacts designed to fulfill specific purposes, such as entertainment, education, training, or persuasion. The article introduces the criteria for moral consideration that serve as a framework for this analysis. Adopting this framework, the article tackles the (...)
  6. added 2020-04-04
    Artificial Wisdom: A Philosophical Framework.Cheng-Hung Tsai - forthcoming - AI and Society.
    Human excellences such as intelligence, morality, and consciousness are investigated by philosophers as well as artificial intelligence researchers. One excellence that has not been widely discussed by AI researchers is practical wisdom, the highest human excellence, or the highest, seventh, stage in Dreyfus’s model of skill acquisition. In this paper, I explain why artificial wisdom matters and how artificial wisdom is possible (in principle and in practice) by responding to two philosophical challenges to building artificial wisdom systems. The result is (...)
  7. added 2020-03-17
    Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents.Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents a kind of ethical theory that is the most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Thirdly, an alternative Deontological approach will be evaluated and the problem of moral conflict discussed. Finally, two bottom-up approaches to (...)
  8. added 2020-03-11
    Artificial Moral Agents: A Survey of the Current Status. [REVIEW]José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents in their environment as (...)
  9. added 2020-03-11
    The Bright Line of Ethical Agency.Stevens F. Wandmacher - 2016 - Techné: Research in Philosophy and Technology 20 (3):240-257.
    In his article "The Nature, Importance, and Difficulty of Machine Ethics," James H. Moor distinguishes two lines of argument for those who wish to draw a "bright line" between full ethical agents, such as human beings, and "weaker" ethical agents, such as machines whose actions have significant moral ramifications. The first line of argument is that only full ethical agents are agents at all. The second is that no machine could have the presumed features necessary for ethical agency. This paper (...)
  10. added 2020-03-11
    Robots and Moral Agency.Linda Johansson - 2011 - Dissertation, Stockholm University
    Machine ethics is a field of applied ethics that has grown rapidly in the last decade. Increasingly advanced autonomous robots have expanded the focus of machine ethics from issues regarding the ethical development and use of technology by humans to a focus on ethical dimensions of the machines themselves. This thesis contains two essays, both about robots in some sense, representing these different perspectives of machine ethics. The first essay, “Is it Morally Right to use UAVs in War?” concerns an (...)
  11. added 2020-03-07
    Digital Well-Being and Manipulation Online.Michael Klenk - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach. Springer.
    Social media use is soaring globally. Existing research of its ethical implications predominantly focuses on the relationships amongst human users online, and their effects. The nature of the software-to-human relationship and its impact on digital well-being, however, has not been sufficiently addressed yet. This paper aims to close the gap. I argue that some intelligent software agents, such as newsfeed curator algorithms in social media, manipulate human users because they do not intend their means of influence to reveal the user’s (...)
  12. added 2020-02-07
    From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles Into Practices.Jessica Morley, Luciano Floridi, Libby Kinsey & Anat Elhalal - forthcoming - Science and Engineering Ethics:1-28.
    The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science 132:741–742, 1960; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the (...)
  13. added 2020-02-04
    Critiquing the Reasons for Making Artificial Moral Agents.Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  14. added 2020-02-03
    Handbook of Research on Machine Ethics and Morality.Steven John Thompson (ed.) - forthcoming - Hershey, PA: IGI-Global.
    This book is dedicated to expert research topics, and analyses of ethics-related inquiry, at the machine ethics and morality level: key players, benefits, problems, policies, and strategies. Gathering some of the leading voices that recognize and understand the complexities and intricacies of human-machine ethics provides a resourceful compendium to be accessed by decision-makers and theorists concerned with identification and adoption of human-machine ethics initiatives, leading to needed policy adoption and reform for human-machine entities, their technologies, and their societal and legal (...)
  15. added 2020-01-22
    How Should Autonomous Vehicles Redistribute the Risks of the Road?Brian Berkey - 2019 - Wharton Public Policy Initiative Issue Brief 7 (9):1-6.
  16. added 2020-01-22
    A Software Module for an Ethical Elder Care Robot. Design and Implementation.Catrin Misselhorn - 2019 - Ethics in Progress 10 (2):68-81.
    The development of increasingly intelligent and autonomous technologies will eventually lead to these systems having to face morally problematic situations. This is particularly true of artificial systems that are used in geriatric care environments. The goal of this article is to describe how one can approach the design of an elder care robot which is capable of moral decision-making and moral learning. A conceptual design for the development of such a system is provided and the steps that are necessary to (...)
  17. added 2020-01-22
    The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence.David Watson - 2019 - Minds and Machines 29 (3):417-440.
    Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning (...)
  18. added 2020-01-22
    Remarks on the Possibility of Ethical Reasoning in an Artificial Intelligence System by Means of Abductive Models.David Casacuberta & Alger Sans - 2019 - In Matthieu Fontaine, Cristina Barés-Gómez, Francisco Salguero-Lamillar, Lorenzo Magnani & Ángel Nepomuceno-Fernández (eds.), Model-Based Reasoning in Science and Technology. Springer Verlag.
  19. added 2020-01-22
    Moral Orthoses: A New Approach to Human and Machine Ethics.Marius Dorobantu & Yorick Wilks - 2019 - Zygon 54 (4):1004-1021.
  20. added 2020-01-22
    The HeartMath Coherence Model: Implications and Challenges for Artificial Intelligence and Robotics.Stephen D. Edwards - 2019 - AI and Society 34 (4):899-905.
    HeartMath is a contemporary, scientific, coherent model of heart intelligence. The aim of this paper is to review this coherence model with special reference to its implications for artificial intelligence and robotics. Various conceptual issues, implications and challenges for AI and robotics are discussed. In view of seemingly infinite human capacity for creative, destructive and incoherent behaviour, it is highly recommended that designers and operators be persons of heart intelligence, optimal moral integrity, vision and mission. This implies that AI and (...)
  21. added 2020-01-22
    The Picture of Artificial Intelligence and the Secularization of Thought.King-Ho Leung - 2019 - Political Theology 20 (6):457-471.
    This article offers a critical interpretation of Artificial Intelligence (AI) as a philosophical notion which exemplifies a secular conception of thinking. One way in which AI notably differs from the conventional understanding of “thinking” is that, according to AI, “intelligence” or “thinking” does not necessarily require “life” as a precondition: that it is possible to have “thinking without life.” Building on Charles Taylor’s critical account of secularity as well as Hubert Dreyfus’ influential critique of AI, this article offers a theological (...)
  22. added 2020-01-22
    Reviewing Tests for Machine Consciousness.A. Elamrani & R. V. Yampolskiy - 2019 - Journal of Consciousness Studies 26 (5-6):35-64.
    The accelerating advances in the fields of neuroscience, artificial intelligence, and robotics have been garnering interest and raising new philosophical, ethical, or practical questions that depend on whether or not there may exist a scientific method of probing consciousness in machines. This paper provides an analytic review of the existing tests for machine consciousness proposed in the academic literature over the past decade, and an overview of the diverse scientific communities involved in this enterprise. The tests put forward in their (...)
  23. added 2020-01-22
    Robots Like Me: Challenges and Ethical Issues in Aged Care.Ipke Wachsmuth - 2018 - Frontiers in Psychology 9 (432).
    This paper addresses the issue of whether robots could substitute for human care, given the challenges in aged care induced by the demographic change. The use of robots to provide emotional care has raised ethical concerns, e.g., that people may be deceived and deprived of dignity. In this paper it is argued that these concerns might be mitigated and that it may be sufficient for robots to take part in caring when they behave *as if* they care.
  24. added 2020-01-22
    Superintelligence as Moral Philosopher.J. Corabi - 2017 - Journal of Consciousness Studies 24 (5-6):128-149.
    Non-biological superintelligent artificial minds are scary things. Some theorists believe that if they came to exist, they might easily destroy human civilization, even if destroying human civilization was not a high priority for them. Consequently, philosophers are increasingly worried about the future of human beings and much of the rest of the biological world in the face of the potential development of superintelligent AI. This paper explores whether the increased attention philosophers have paid to the dangers of superintelligent AI is (...)
  25. added 2020-01-22
    Outline of a Sensory-Motor Perspective on Intrinsically Moral Agents.Christian Balkenius, Lola Cañamero, Philip Pärnamets, Birger Johansson, Martin Butz & Andreas Olsson - 2016 - Adaptive Behavior 24 (5):306-319.
    We propose that moral behaviour of artificial agents could be intrinsically grounded in their own sensory-motor experiences. Such an ability depends critically on seven types of competencies. First, intrinsic morality should be grounded in the internal values of the robot arising from its physiology and embodiment. Second, the moral principles of robots should develop through their interactions with the environment and with other agents. Third, we claim that the dynamics of moral emotions closely follows that of other non-social emotions used (...)
  26. added 2020-01-22
    Moral Agency, Moral Responsibility, and Artifacts: What Existing Artifacts Fail to Achieve , and Why They, Nevertheless, Can Make Moral Claims Upon Us.Joel Parthemore & Blay Whitby - 2014 - International Journal of Machine Consciousness 6 (2):141-161.
    This paper follows directly from an earlier paper where we discussed the requirements for an artifact to be a moral agent and concluded that the artifactual question is ultimately a red herring. As...
  27. added 2020-01-07
    Autonomous Vehicles, Trolley Problems, and the Law.Stephen S. Wu - 2020 - Ethics and Information Technology 22 (1):1-13.
    Autonomous vehicles have the potential to save tens of thousands of lives, but legal and social barriers may delay or even deter manufacturers from offering fully automated vehicles and thereby cost lives that otherwise could be saved. Moral philosophers use “thought experiments” to teach us about what ethics might say about the ethical behavior of AVs. If a manufacturer designing an AV decided to make what it believes is an ethical choice to save a large group of lives by steering (...)
  28. added 2019-10-25
    Robot Betrayal: A Guide to the Ethics of Robotic Deception.John Danaher - 2020 - Ethics and Information Technology 22 (2):117-128.
    If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in (...)
  29. added 2019-10-20
    Artificial Pain May Induce Empathy, Morality, and Ethics in the Conscious Mind of Robots.Minoru Asada - 2019 - Philosophies 4 (3):38.
    In this paper, a working hypothesis is proposed that a nervous system for pain sensation is a key component for shaping the conscious minds of robots. In this article, this hypothesis is argued from several viewpoints towards its verification. A developmental process of empathy, morality, and ethics based on the mirror neuron system that promotes the emergence of the concept of self scaffolds the emergence of artificial minds. Firstly, an outline of the ideological background on issues of the mind in (...)
  30. added 2019-10-20
    Rule Based Fuzzy Cognitive Maps and Natural Language Processing in Machine Ethics.Rollin M. Omari & Masoud Mohammadian - 2016 - Journal of Information, Communication and Ethics in Society 14 (3):231-253.
  31. added 2019-10-04
    Patiency is Not a Virtue: The Design of Intelligent Systems and Systems of Ethics.Joanna J. Bryson - 2018 - Ethics and Information Technology 20 (1):15-26.
    The question of whether AI systems such as robots can or should be afforded moral agency or patiency is not one amenable either to discovery or simple reasoning, because we as societies constantly reconstruct our artefacts, including our ethical systems. Consequently, the place of AI systems in society is a matter of normative, not descriptive ethics. Here I start from a functionalist assumption, that ethics is the set of behaviour that maintains a society. This assumption allows me to exploit the (...)
  32. added 2019-10-04
    Ethics and Social Robotics.Raffaele Rodogno - 2016 - Ethics and Information Technology 18 (4):241-242.
  33. added 2019-10-04
    Against the Moral Turing Test: Accountable Design and the Moral Reasoning of Autonomous Systems.Thomas Arnold & Matthias Scheutz - 2016 - Ethics and Information Technology 18 (2):103-115.
    This paper argues against the moral Turing test as a framework for evaluating the moral performance of autonomous systems. Though the term has been carefully introduced, considered, and cautioned about in previous discussions (Allen et al. in J Exp Theor Artif Intell 12:251–261, 2000; Allen and Wallach 2009), it has lingered on as a touchstone for developing computational approaches to moral reasoning. While these efforts have not led to the detailed development of an MTT, they nonetheless retain the idea to discuss what kinds of action and reasoning (...)
  34. added 2019-10-04
    Privacy, Deontic Epistemic Action Logic and Software Agents.V. Wiegel, M. Hoven & G. Lokhorst - 2006 - Ethics and Information Technology 7 (4):251-264.
  35. added 2019-09-26
    Refining the Ethics of Computer-Made Decisions: A Classification of Moral Mediation by Ubiquitous Machines.Marlies Van de Voort, Wolter Pieters & Luca Consoli - 2015 - Ethics and Information Technology 17 (1):41-56.
    In the past decades, computers have become more and more involved in society by the rise of ubiquitous systems, increasing the number of interactions between humans and IT systems. At the same time, the technology itself is getting more complex, enabling devices to act in a way that previously only humans could, based on developments in the fields of both robotics and artificial intelligence. This results in a situation in which many autonomous, intelligent and context-aware systems are involved in decisions (...)
  36. added 2019-09-09
    Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations.Johannes Himmelreich - 2018 - Ethical Theory and Moral Practice 21 (3):669-684.
    Trolley cases are widely considered central to the ethics of autonomous vehicles. We caution against this by identifying four problems. Trolley cases, given technical limitations, rest on assumptions that are in tension with one another. Furthermore, trolley cases illuminate only a limited range of ethical issues insofar as they cohere with a certain design framework. Furthermore, trolley cases seem to demand a moral answer when a political answer is called for. Finally, trolley cases might be epistemically problematic in several ways. (...)
  37. added 2019-08-10
    Distributive Justice as an Ethical Principle for Autonomous Vehicle Behavior Beyond Hazard Scenarios.Manuel Dietrich & Thomas H. Weisswange - 2019 - Ethics and Information Technology 21 (3):227-239.
    Through modern driver assistant systems, algorithmic decisions already have a significant impact on the behavior of vehicles in everyday traffic. This will become even more prominent in the near future considering the development of autonomous driving functionality. The need to consider ethical principles in the design of such systems is generally acknowledged. However, scope, principles and strategies for their implementations are not yet clear. Most of the current discussions concentrate on situations of unavoidable crashes in which the life of human (...)
  38. added 2019-08-10
    Society-in-the-Loop: Programming the Algorithmic Social Contract.Iyad Rahwan - 2018 - Ethics and Information Technology 20 (1):5-14.
    Recent rapid advances in Artificial Intelligence and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve (...)
  39. added 2019-08-10
    No Such Thing as Killer Robots.Michael Robillard - 2018 - Journal of Applied Philosophy 35 (4):705-717.
    There have been two recent strands of argument arguing for the pro tanto impermissibility of fully autonomous weapon systems. On Sparrow's view, AWS are impermissible because they generate a morally problematic ‘responsibility gap’. According to Purves et al., AWS are impermissible because moral reasoning is not codifiable and because AWS are incapable of acting for the ‘right’ reasons. I contend that these arguments are flawed and that AWS are not morally problematic in principle. Specifically, I contend that these arguments presuppose (...)
  40. added 2019-08-10
    A Rawlsian Algorithm for Autonomous Vehicles.Derek Leben - 2017 - Ethics and Information Technology 19 (2):107-115.
    Autonomous vehicles must be programmed with procedures for dealing with trolley-style dilemmas where actions result in harm to either pedestrians or passengers. This paper outlines a Rawlsian algorithm as an alternative to the Utilitarian solution. The algorithm will gather the vehicle’s estimation of probability of survival for each person in each action, then calculate which action a self-interested person would agree to if he or she were in an original bargaining position of fairness. I will employ Rawls’ assumption that the (...)
  41. added 2019-08-10
    Irresponsibilities, Inequalities and Injustice for Autonomous Vehicles.Hin-Yan Liu - 2017 - Ethics and Information Technology 19 (3):193-207.
    With their prospect for causing both novel and known forms of damage, harm and injury, the issue of responsibility has been a recurring theme in the debate concerning autonomous vehicles. Yet, the discussion of responsibility has obscured the finer details both between the underlying concepts of responsibility, and their application to the interaction between human beings and artificial decision-making entities. By developing meaningful distinctions and examining their ramifications, this article contributes to this debate by refining the underlying concepts that together (...)
  42. added 2019-08-10
    Moral Machines. Teaching Robots Right From Wrong. A Book Review.Dawid Lubiszewski - 2011 - Avant: Trends in Interdisciplinary Studies 2 (1).
  43. added 2019-08-09
    When AI Meets PC: Exploring the Implications of Workplace Social Robots and a Human-Robot Psychological Contract.Sarah Bankins & Paul Formosa - 2019 - European Journal of Work and Organizational Psychology 2019.
    The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees’ increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely (...)
  44. added 2019-07-24
    Why Friendly AIs Won’t Be That Friendly: A Friendly Reply to Muehlhauser and Bostrom.Robert James M. Boyles & Jeremiah Joven Joaquin - 2019 - AI and Society:1–3.
    In “Why We Need Friendly AI”, Luke Muehlhauser and Nick Bostrom propose that for our species to survive the impending rise of superintelligent AIs, we need to ensure that they would be human-friendly. This discussion note offers a more natural but bleaker outlook: that in the end, if these AIs do arise, they won’t be that friendly.
  45. added 2019-06-06
    What's So Bad About Killer Robots?Alex Leveringhaus - 2018 - Journal of Applied Philosophy 35 (2):341-358.
    Robotic warfare has now become a real prospect. One issue that has generated heated debate concerns the development of ‘Killer Robots’. These are weapons that, once programmed, are capable of finding and engaging a target without supervision by a human operator. From a conceptual perspective, the debate on Killer Robots has been rather confused, not least because it is unclear how central elements of these weapons can be defined. Offering a precise take on the relevant conceptual issues, the article contends (...)
  46. added 2019-06-06
    Brain-Machine Interfaces and Personal Responsibility for Action – Maybe Not As Complicated After All.Søren Holm & Teck Chuan Voo - 2010 - Studies in Ethics, Law, and Technology 4 (3).
    This comment responds to Kevin Warwick’s article on predictability and responsibility with respect to brain-machine interfaces in action. It compares conventional responsibility for device use with the potential consequences of phenomenological human-machine integration which obscures the causal chain of an act. It explores two senses of “responsibility”: 1) when it is attributed to a person, suggesting the morally important way in which the person is a causal agent, and 2) when a person is accountable and, on the basis of fairness (...)
  47. added 2019-06-06
    Robot Morals and Human Ethics: The Seminar.Wendell Wallach - 2010 - Teaching Ethics 11 (1):87-92.
    Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, (...)
  48. added 2019-06-06
    The Ethics of Designing Artificial Agents.Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):115-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question (...)
  49. added 2019-06-06
    Nano-Enabled AI: Some Philosophical Issues.J. Storrs Hall - 2006 - International Journal of Applied Philosophy 20 (2):247-261.
    Improvements in computational hardware enabled by nanotechnology promise a dual revolution in coming decades: machines which are both more intelligent and more numerous than human beings. This possibility raises substantial concern over the moral nature of such intelligent machines. An analysis of the prospects involves at least two key philosophical issues. The first, intentionality in formal systems, turns on whether a “mere machine” can be a mind whose thoughts have true meaning and understanding. Second, what is the moral nature of (...)
  50. added 2019-06-06
    Artificial Intelligence Modeling of Spontaneous Self Learning: An Application of Dialectical Philosophy.Karina Stokes - 1996 - International Journal of Applied Philosophy 10 (2):1-6.