About this topic
Summary: Machine ethics is the study of artificial moral agency. Machine ethicists ask why agents (people and other organisms) do what they do when they do it, what makes such actions the right ones to perform, and how this process might (ideally) be articulated in an independent artificial system rather than, as an alternative, in a biological child. This category accordingly includes entries on agency in general and on moral agency in particular. On the empirical side, machine ethicists interpret rapidly advancing work in cognitive science and psychology, alongside work in robotics and AI, through traditional ethical frameworks, helping to frame robotics research in terms of ethical theory. For example, intelligent machines are often modeled after biological systems, and in any event are often "made sense of" in terms of biological systems, so this process of interpretation and integration requires real work. More theoretical work asks what status should be afforded artificial agents given their degree of autonomy, origin, level of complexity, corporate-institutional and legal standing, and so on, and investigates the nature of consciousness and of moral agency regardless of natural or artificial instantiation. So understood, machine ethics sits in the middle of a maelstrom of current research activity, with direct bearing on traditional ethics and extensive popular implications as well.
Key works: Allen et al. 2005; Wallach et al. 2008; Tonkens 2012; Tonkens 2009; Müller & Bostrom 2014; White 2013; White 2015
299 entries found (showing 1–50)
  1. added 2019-01-17
    Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - forthcoming - AI and Society:1-17.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
  2. added 2018-12-31
    HRI Ethics and Type-Token Ambiguity: What Kind of Robotic Identity is Most Responsible?Thomas Arnold & Matthias Scheutz - forthcoming - Ethics and Information Technology.
  3. added 2018-12-31
    Special Operations Remote Advise and Assist: An Ethics Assessment.Deane-Peter Baker - forthcoming - Ethics and Information Technology.
  4. added 2018-12-31
    Autonomous Weapons Systems, Killer Robots and Human Dignity.Amanda Sharkey - forthcoming - Ethics and Information Technology.
  5. added 2018-12-31
    Algorithmic Paranoia: The Temporal Governmentality of Predictive Policing.Bonnie Sheehey - forthcoming - Ethics and Information Technology.
  6. added 2018-12-31
    Using Value Sensitive Design to Understand Transportation Choices and Envision a Future Transportation System.Kari Edison Watkins - forthcoming - Ethics and Information Technology.
  7. added 2018-12-31
    What has the Trolley Dilemma Ever Done for Us? On Some Recent Debates About the Ethics of Self-Driving Cars.Andreas Wolkenstein - 2018 - Ethics and Information Technology 20 (3):163-173.
  8. added 2018-12-27
    Formalisation and Evaluation of Alan Gewirth's Proof for the Principle of Generic Consistency in Isabelle/HOL.David Fuenmayor & Christoph Benzmüller - unknown
    An ambitious ethical theory, Alan Gewirth's "Principle of Generic Consistency", is encoded and analysed in Isabelle/HOL. Gewirth's theory has stirred much attention in philosophy and ethics and has been proposed as a potential means to bound the impact of artificial general intelligence.
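    The general technique at work here, a shallow semantic embedding of a modal (deontic) logic, can be illustrated in miniature. The Python sketch below models propositions as sets of possible worlds and defines an obligation operator over a deontic accessibility relation; it is a toy illustration of the embedding idea under invented assumptions, not the authors' Isabelle/HOL development of Gewirth's principle.

        # Miniature shallow semantic embedding of deontic logic.
        # Illustrative only: the worlds and accessibility relation are invented,
        # and this is not the authors' Isabelle/HOL encoding of Gewirth's PGC.
        worlds = {"w1", "w2", "w3"}
        # ideal[w] = the worlds that count as morally ideal as seen from w
        ideal = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": {"w3"}}

        def mnot(p):            # world-lifted negation
            return worlds - p

        def mand(p, q):         # world-lifted conjunction
            return p & q

        def O(p):               # "obligatory that p": p holds at all ideal worlds
            return {w for w in worlds if ideal[w] <= p}

        def valid(p):           # valid if the proposition holds at every world
            return p == worlds

        # The deontic D axiom, O(p) -> not O(not p), here stated equivalently as
        # not(O(p) and O(not p)), holds because every world can see an ideal world.
        p = {"w2", "w3"}
        print(valid(mnot(mand(O(p), O(mnot(p))))))  # True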
  9. added 2018-12-20
    Fare e funzionare. Sull'analogia di robot e organismo [Doing and Functioning: On the Analogy between Robot and Organism].Fabio Fossa - 2018 - InCircolo - Rivista di Filosofia E Culture 6:73-88.
    In this essay I try to determine the extent to which it is possible to conceive of robots and organisms as analogous entities. After a cursory preamble on the long history of epistemological connections between machines and organisms, I focus on Norbert Wiener’s cybernetics, where the analogy between modern machines and organisms is introduced most explicitly. The analysis of issues pertaining to the cybernetic interpretation of the analogy then serves as a basis for a critical assessment of its reprise in contemporary (...)
  10. added 2018-12-19
    Nihilism and Technology. [REVIEW]Steven Umbrello - forthcoming - Prometheus: Critical Studies in Innovation.
  11. added 2018-12-18
    Beneficial Artificial Intelligence Coordination by Means of a Value Sensitive Design Approach.Steven Umbrello - 2019 - Big Data and Cognitive Computing 3 (1):5.
    This paper argues that the Value Sensitive Design (VSD) methodology provides a principled approach to embedding common values into AI systems both early in and throughout the design process. To do so, it draws on an important case study: the evidence and final report of the UK Select Committee on Artificial Intelligence. This empirical investigation shows that the different and often disparate stakeholder groups that are implicated in AI design and use share some common values that can be used to (...)
  12. added 2018-12-13
    Virtuous Vs. Utilitarian Artificial Moral Agents.William A. Bauer - forthcoming - AI and Society:1-9.
    Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated financial trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an artificial moral agent based on (...)
  13. added 2018-12-12
    The Problem of Superintelligence: Political, Not Technological.Wolfhart Totschnig - forthcoming - AI and Society:1-14.
    The thinkers who have reflected on the problem of a coming superintelligence have generally seen the issue as a technological problem, a problem of how to control what the superintelligence will do. I argue that this approach is probably mistaken because it is based on questionable assumptions about the behavior of intelligent agents and, moreover, potentially counterproductive because it might, in the end, bring about the existential catastrophe that it is meant to prevent. I contend that the problem posed by (...)
  14. added 2018-11-26
    Making Metaethics Work for AI: Realism and Anti-Realism.Michal Klincewicz & Lily E. Frank - 2018 - In Mark Coeckelbergh, M. Loh, J. Funk, M. Seibt & J. Nørskov (eds.), Envisioning Robots in Society – Power, Politics, and Public Space. Amsterdam, Netherlands: IOS Press. pp. 311-318.
    Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce meta-ethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses greater risks, but, on the other hand, anti-realism undermines the motivation for engineering a moral AI in the first place.
  15. added 2018-11-16
    Machine Metaphors and Ethics in Synthetic Biology.Joachim Boldt - 2018 - Life Sciences, Society and Policy 14 (1):1-13.
    The extent to which machine metaphors are used in synthetic biology is striking. These metaphors contain a specific perspective on organisms as well as on scientific and technological progress. Expressions such as “genetically engineered machine”, “genetic circuit”, and “platform organism”, taken from the realms of electronic engineering, car manufacturing, and information technology, highlight specific aspects of the functioning of living beings while at the same time hiding others, such as evolutionary change and interdependencies in ecosystems. Since these latter aspects are (...)
  16. added 2018-11-16
    Machine Ethics: Eight Concerns.Andreas Matthias - unknown
  17. added 2018-11-16
    Machine Medical Ethics.Gary Comstock - 2015 - Springer.
  18. added 2018-11-16
    Machine Medical Ethics.Simon Van Rysewyk & Matthijs Pontier (eds.) - 2015 - Springer.
  19. added 2018-11-16
    Ethics and Artificial Life: From Modeling to Moral Agents. [REVIEW]John P. Sullins - 2005 - Ethics and Information Technology 7 (3):139-148.
    Artificial Life has two goals. The first attempts to describe fundamental qualities of living systems through agent-based computer models. The second studies whether or not we can artificially create living things in computational media, realized either virtually in software or through biotechnology. The study of ALife has recently branched into two further subdivisions: one is “dry” ALife, the study of living systems “in silico” through the use of computer simulations, and the other is “wet” (...)
  20. added 2018-11-07
    The Motivations and Risks of Machine Ethics.Karina Vold, Stephen Cave, Rune Nyrup & Adrian Weller - forthcoming - Proceedings of the IEEE.
    Many authors have proposed constraining the behaviour of intelligent systems with ‘machine ethics’ to ensure positive social outcomes from the development of such systems. This paper critically analyses the prospects for machine ethics, identifying several inherent limitations. While machine ethics may increase the probability of ethical behaviour in some situations, it cannot guarantee it due to the nature of ethics, the computational limitations of computational agents and the complexity of the world. In addition, machine ethics, even if it were to (...)
  21. added 2018-11-07
    Can Humanoid Robots Be Moral?Sanjit Chakraborty - 2018 - Ethics in Science, Environment and Politics 18:49-60.
    The concept of morality underpins the moral responsibility that depends not only on the outward practices (or ‘output’, in the case of humanoid robots) of the agents but also on the internal attitudes (‘input’) that rational and responsibly intentioned beings generate. The primary question that has initiated extensive debate, i.e. ‘Can humanoid robots be moral?’, stems from the normative outlook where morality includes human conscience and socio-linguistic background. This paper advances the thesis that the conceptions of morality and creativity interplay with (...)
  22. added 2018-11-07
    Ethical Machines?Ariela Tubert - 2018 - Seattle University Law Review 41 (4).
    This Article explores the possibility of having ethical artificial intelligence. It argues that we face a dilemma in trying to develop artificial intelligence that is ethical: either we have to be able to codify ethics as a set of rules or we have to value a machine’s ability to make ethical mistakes so that it can learn ethics like children do. Neither path seems very promising, though perhaps by thinking about the difficulties with each we may come to a better (...)
  23. added 2018-10-13
    Against Leben's Rawlsian Collision Algorithm for Autonomous Vehicles.Geoff Keeling - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Springer. pp. 259-272.
    Suppose that an autonomous vehicle encounters a situation where (i) imposing a risk of harm on at least one person is unavoidable; and (ii) a choice about how to allocate risks of harm between different persons is required. What does morality require in these cases? Derek Leben defends a Rawlsian answer to this question. I argue that we have reason to reject Leben’s answer.
  24. added 2018-08-21
    Challenges to Engineering Moral Reasoners : Time and Context.Michal Klincewicz - 2017 - In Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. pp. 244-259.
    Programming computers to engage in moral reasoning is not a new idea (Anderson and Anderson 2011a). Work on the subject has yielded concrete examples of computable linguistic structures for a moral grammar (Mikhail 2007), the ethical governor architecture for autonomous weapon systems (Arkin 2009), rule-based systems that implement deontological principles (Anderson and Anderson 2011b), systems that implement utilitarian principles, and a hybrid approach to programming ethical machines (Wallach and Allen 2008). This chapter considers two philosophically informed strategies for engineering software (...)
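    Two of the strategies surveyed here can be contrasted in a few lines of code. The sketch below shows a rule-based deontological filter alongside a utilitarian scorer; the actions, rules, and utilities are hypothetical, and it is not a reimplementation of any system cited above.

        # Minimal sketch contrasting two strategies surveyed in the chapter: a
        # rule-based (deontological) filter and a utilitarian scorer. Actions,
        # rules, and utilities are hypothetical, not taken from any cited system.
        actions = {
            "divert_trolley":   {"harms": 1, "lies": False, "utility": 4},
            "do_nothing":       {"harms": 5, "lies": False, "utility": 0},
            "lie_to_passenger": {"harms": 0, "lies": True,  "utility": 5},
        }

        def deontological_choice(options):
            """Filter out options violating a hard rule, then minimize harm."""
            permitted = {a: f for a, f in options.items() if not f["lies"]}
            return min(permitted, key=lambda a: permitted[a]["harms"]) if permitted else None

        def utilitarian_choice(options):
            """Pick the option maximizing utility, with no side constraints."""
            return max(options, key=lambda a: options[a]["utility"])

        print(deontological_choice(actions))  # divert_trolley: lying is ruled out
        print(utilitarian_choice(actions))    # lie_to_passenger: highest utility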
  25. added 2018-08-21
    Introduction: Philosophy and Theory of Artificial Intelligence.Vincent C. Müller - 2012 - Minds and Machines 22 (2):67-69.
    The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume of Minds and Machines presents leading invited papers from a conference on the “Philosophy and Theory of Artificial Intelligence” that was held in October 2011 in Thessaloniki. Artificial Intelligence is perhaps unique among engineering subjects in that it has raised very basic questions about the nature of computing, perception, reasoning, learning, language, action, interaction, (...)
  26. added 2018-08-21
    The Construction of 'Reality' in the Robot: Constructivist Perspectives on Situated Artificial Intelligence and Adaptive Robotics. [REVIEW]Tom Ziemke - 2001 - Foundations of Science 6 (1-3):163-233.
    This paper discusses different approaches in cognitive science and artificial intelligence research from the perspective of radical constructivism, addressing especially their relation to the biologically based theories of von Uexküll, Piaget, as well as Maturana and Varela. In particular, recent work in New AI and adaptive robotics on situated and embodied intelligence is examined, and we discuss in detail the role of constructive processes as the basis of situatedness in both robots and living organisms.
  27. added 2018-07-30
    Critiquing the Reasons for Making Artificial Moral Agents.Aimee van Wynsberghe & Scott Robbins - 2018 - Science and Engineering Ethics:1-17.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, the claim that such machines are better moral reasoners than humans, and (...)
  28. added 2018-07-05
    Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial Moral Agents may consider certain philosophical (...)
  29. added 2018-07-03
    Evolution: The Computer Systems Engineer Designing Minds.Aaron Sloman - 2011 - Avant: Trends in Interdisciplinary Studies 2 (2):45–69.
    What we have learnt in the last six or seven decades about virtual machinery, as a result of a great deal of science and technology, enables us to offer Darwin a new defence against critics who argued that only physical form, not mental capabilities and consciousness, could be a product of evolution by natural selection. The defence compares the mental phenomena mentioned by Darwin’s opponents with contents of virtual machinery in computing systems. Objects, states, events, and processes in virtual machinery which (...)
  30. added 2018-07-03
    The Energetic Dimension of Emotions: An Evolution-Based Computer Simulation with General Implications.Luc Ciompi & Martin Baatz - 2008 - Biological Theory 3 (1):42-50.
    Viewed from an evolutionary standpoint, emotions can be understood as situation-specific patterns of energy consumption related to behaviors that have been selected by evolution for their survival value, such as environmental exploration, flight or fight, and socialization. In the present article, the energy linked with emotions is investigated by a strictly energy-based simulation of the evolution of simple autonomous agents provided with random cognitive and motor capacities and operating among food and predators. Emotions are translated into evolving patterns of energy (...)
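    The core idea, an emotion as a situation-specific pattern of energy consumption that gates behavior, can be rendered as a toy simulation. The sketch below is a loose illustration with invented parameters and update rules, not Ciompi and Baatz's actual model.

        import random

        # Toy rendering of the idea that an emotion is a situation-specific
        # pattern of energy consumption gating behaviour. All parameters and
        # update rules are invented; this is not Ciompi and Baatz's model.
        class Agent:
            def __init__(self):
                self.energy = 100.0
                self.fear = 0.0                      # emotion state in [0, 1]

            def step(self, predator_near, food_near):
                # Fear spikes near predators and decays otherwise.
                self.fear = min(1.0, self.fear + 0.6) if predator_near else self.fear * 0.5
                # Energy consumption scales with arousal: fear is metabolically costly.
                self.energy -= 1.0 + 4.0 * self.fear
                # Behaviour selection is gated by the emotion state.
                if self.fear > 0.5:
                    return "flee"                    # fight-or-flight dominates
                if food_near and self.energy < 80.0:
                    self.energy += 10.0
                    return "eat"
                return "explore"

        agent = Agent()
        for t in range(10):
            action = agent.step(predator_near=random.random() < 0.2,
                                food_near=random.random() < 0.5)
            print(t, action, round(agent.energy, 1))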
  31. added 2018-07-02
    A Case for Machine Ethics in Modeling Human-Level Intelligent Agents.Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, and decision-making. (...)
  32. added 2018-06-15
    Giving Robots a Voice: Testimony, Intentionality, and the Law.Billy Wheeler - 2017 - In Steve Thompson (ed.), Androids, Cyborgs, and Robots in Contemporary Society and Culture. Hershey, PA, USA: pp. 1-34.
    Humans are becoming increasingly dependent on the ‘say-so' of machines, such as computers, smartphones, and robots. In epistemology, knowledge based on what you have been told is called ‘testimony' and being able to give and receive testimony is a prerequisite for engaging in many social roles. Should robots and other autonomous intelligent machines be considered epistemic testifiers akin to those of humans? This chapter attempts to answer this question as well as explore the implications of robot testimony for the criminal (...)
  33. added 2018-06-12
    The Problem of Machine Ethics in Artificial Intelligence.Rajakishore Nath & Vineet Sahu - forthcoming - AI and Society:1-9.
    Intelligent robots have come to occupy a significant position in society over the past decades and have given rise to new social issues. As we know, the primary aim of artificial intelligence or robotic research is not only to develop advanced programs to solve our problems but also to reproduce mental qualities in machines. The critical claim of artificial intelligence advocates is that there is no distinction between mind and machines and thus they argue that there are (...)
  34. added 2018-06-12
    An Agent-Oriented Account of Piaget’s Theory of Interactional Morality.Antônio Carlos da Rocha Costa - forthcoming - AI and Society:1-28.
    In this paper, we present a formal interpretive account of Jean Piaget’s theory of the morality that regulates social exchanges, which we call interactional morality. First, we place Piaget’s conception in the context of his epistemological and sociological works. Then, we review the core of that conception: the two types of interactional moralities that Piaget identified to be usual in social exchanges, and the role that the notion of respect-for-the-other plays in their definition. Next, we analyze the main features of (...)
  35. added 2018-06-12
    Implementation of Moral Uncertainty in Intelligent Machines.Kyle Bogosian - 2017 - Minds and Machines 27 (4):591-608.
    The development of artificial intelligence will require systems of ethical decision making to be adapted for automatic computation. However, projects to implement moral reasoning in artificial moral agents so far have failed to satisfactorily address the widespread disagreement between competing approaches to moral philosophy. In this paper I argue that the proper response to this situation is to design machines to be fundamentally uncertain about morality. I describe a computational framework for doing so and show that it efficiently resolves common (...)
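    A common way to make moral uncertainty computable is to maximize expected choiceworthiness across rival theories, weighted by the machine's credence in each. The sketch below illustrates that general idea with hypothetical numbers; Bogosian's own framework may differ in its details.

        # Expected choiceworthiness under moral uncertainty: weight each rival
        # theory's verdict by the machine's credence in that theory. Numbers
        # here are hypothetical illustrations only.
        credences = {"utilitarianism": 0.5, "deontology": 0.3, "virtue_ethics": 0.2}

        # choiceworthiness[theory][action]: how strongly a theory endorses an action
        choiceworthiness = {
            "utilitarianism": {"tell_truth": 0.4, "white_lie": 0.9},
            "deontology":     {"tell_truth": 1.0, "white_lie": 0.0},
            "virtue_ethics":  {"tell_truth": 0.8, "white_lie": 0.3},
        }

        def expected_choiceworthiness(action):
            return sum(credences[t] * choiceworthiness[t][action] for t in credences)

        actions = ["tell_truth", "white_lie"]
        best = max(actions, key=expected_choiceworthiness)
        # tell_truth scores 0.66, white_lie scores 0.51: the agent tells the truth
        print({a: round(expected_choiceworthiness(a), 2) for a in actions}, "->", best)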
  36. added 2018-06-12
    Sopholab: Experimental Computational Philosophy.V. Wiegel - 2007 - Dissertation,
    In this book, the extent to which we can equip artificial agents with moral reasoning capacity is investigated. Attempting to create artificial agents with moral reasoning capabilities challenges our understanding of morality and moral reasoning to its utmost. It also helps philosophers deal with the inherent complexity of modern organizations. Modern society, with large multi-national organizations and extensive information infrastructures, provides a backdrop for moral theories that is hard to encompass through mere theorising. Computerized support for theorising is needed to (...)
  37. added 2018-06-06
    Designing in Ethics. [REVIEW]Steven Umbrello - forthcoming - Prometheus: Critical Studies in Innovation 36 (1).
    Designing in Ethics provides a compilation of well-curated essays that tackle the ethical issues surrounding technological design and argue that ethics must form a constitutive part of the designing process and a foundation in our institutions and practices. Adopting a design approach to applied ethics is argued to be a means by which ethical issues implicated in technological artifacts may be addressed.
  38. added 2018-05-19
    Mental Time-Travel, Semantic Flexibility, and A.I. Ethics.Marcus Arvan - forthcoming - AI and Society:1-20.
    This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed ‘general ethical dilemma analyzer,’ _GenEth_. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I (...)
  39. added 2018-04-16
    Do Machines Have Prima Facie Duties?Gary Comstock - 2015 - In Machine Medical Ethics. London: Springer. pp. 79-92.
    A properly programmed artificially intelligent agent may eventually have one duty, the duty to satisfice expected welfare. We explain this claim and defend it against objections.
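    The satisficing idea is easy to state computationally: rather than maximizing expected welfare, the agent accepts the first option that clears an aspiration threshold. The sketch below is a hypothetical illustration of that decision rule, not the authors' formulation; the options, probabilities, and threshold are invented.

        # Satisficing expected welfare: accept the first option whose expected
        # welfare clears an aspiration threshold, instead of maximizing.
        # Options, probabilities, and the threshold are hypothetical.
        options = [
            {"name": "treatment_A", "outcomes": [(0.9, 6.0), (0.1, -2.0)]},
            {"name": "treatment_B", "outcomes": [(0.5, 10.0), (0.5, 0.0)]},
            {"name": "wait",        "outcomes": [(1.0, 1.0)]},
        ]

        def expected_welfare(option):
            return sum(p * w for p, w in option["outcomes"])

        def satisfice(options, threshold=5.0):
            """Return the first option whose expected welfare meets the threshold."""
            for opt in options:
                if expected_welfare(opt) >= threshold:
                    return opt["name"]
            # Fall back to the best available if nothing satisfices.
            return max(options, key=expected_welfare)["name"]

        print(satisfice(options))  # treatment_A: 0.9*6.0 + 0.1*(-2.0) = 5.2 >= 5.0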
  40. added 2018-03-19
    A Real‐World Rational Agent: Unifying Old and New AI.Paul F. M. J. Verschure & Philipp Althaus - 2003 - Cognitive Science 27 (4):561-590.
  41. added 2018-03-17
    Artificial Moral Agents: Moral Mentors or Sensible Tools?Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take (...)
  42. added 2018-03-08
    Formally Stating the AI Alignment Problem.G. Gordon Worley III - manuscript
  43. added 2018-03-06
    Cognition in Context: Phenomenology, Situated Robotics and the Frame Problem.Michael Wheeler - 2008 - International Journal of Philosophical Studies 16 (3):323 – 349.
    The frame problem is the difficulty of explaining how non-magical systems think and act in ways that are adaptively sensitive to context-dependent relevance. Influenced centrally by Heideggerian phenomenology, Hubert Dreyfus has argued that the frame problem is, in part, a consequence of the assumption (made by mainstream cognitive science and artificial intelligence) that intelligent behaviour is representation-guided behaviour. Dreyfus' Heideggerian analysis suggests that the frame problem dissolves if we reject representationalism about intelligence and recognize that human agents realize the property (...)
  44. added 2018-02-17
    Oversold, Unregulated, and Unethical: Why We Need to Respond to Robot Nannies.Blay Whitby - 2010 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 11 (2):290-294.
  45. added 2018-02-17
    Robot Nannies Get a Wheel in the Door: A Response to the Commentaries.Noel Sharkey & Amanda Sharkey - 2010 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 11 (2):302-313.
  46. added 2018-01-13
    Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One way to make such predictions is to analyze the convergent drives of any future AI, an approach begun by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
  47. added 2017-12-30
    Two Challenges for CI Trustworthiness and How to Address Them.Kevin Baum, Eva Schmidt & Maximilian A. Köhl - 2017 - Proceedings of the 1st Workshop on Explainable Computational Intelligence (XCI 2017).
    We argue that, to be trustworthy, Computational Intelligence (CI) has to do what it is entrusted to do for permissible reasons and to be able to give rationalizing explanations of its behavior which are accurate and graspable. We support this claim by drawing parallels with trustworthy human persons, and we show what difference this makes in a hypothetical CI hiring system. Finally, we point out two challenges for trustworthy CI and sketch a mechanism which could be (...)
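    The desideratum of accurate, graspable rationalizing explanations can be illustrated with a toy screening function that returns its decision together with the reasons that actually produced it. The sketch below is hypothetical: the criteria are invented stand-ins for permissible, job-related ones, and this is not the mechanism the authors propose.

        # Toy illustration of a decision returned together with an accurate,
        # graspable rationalizing explanation: the reasons reported are exactly
        # the tests that produced the outcome. Hypothetical criteria; not the
        # mechanism proposed in the paper.
        PERMISSIBLE_CRITERIA = {
            "years_experience": lambda v: v is not None and v >= 3,
            "has_certification": lambda v: v is True,
        }

        def screen(candidate):
            reasons = []
            for criterion, test in PERMISSIBLE_CRITERIA.items():
                passed = test(candidate.get(criterion))
                reasons.append(f"{criterion}={candidate.get(criterion)!r}: "
                               f"{'meets' if passed else 'fails'} requirement")
                if not passed:
                    return {"decision": "reject", "because": reasons}
            return {"decision": "shortlist", "because": reasons}

        print(screen({"years_experience": 5, "has_certification": True}))
        print(screen({"years_experience": 1, "has_certification": True}))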
  48. added 2017-12-01
    Transparent, Explainable, and Accountable AI for Robotics.Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science (Robotics) 2 (6):eaan6080.
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
  49. added 2017-10-10
    What Makes Any Agent a Moral Agent? Reflections on Machine Consciousness and Moral Agency.Joel Parthemore & Blay Whitby - 2013 - International Journal of Machine Consciousness 5 (2):105-129.
    In this paper, we take moral agency to be that context in which a particular agent can, appropriately, be held responsible for her actions and their consequences. In order to understand moral agency, we will discuss what it would take for an artifact to be a moral agent. For reasons that will become clear over the course of the paper, we take the artifactual question to be a useful way into discussion but ultimately misleading. We set out a number of (...)
  50. added 2017-10-10
    Danielson, Peter, Artificial Morality: Virtuous Robots for Virtual Games (London: Routledge, 1992), pp. xiv + 240, A$32.95 (paper). [REVIEW]Scott Shalkowski & Robert Pargetter - 1994 - Australasian Journal of Philosophy 72 (1).