Critiquing the Reasons for Making Artificial Moral Agents
Science and Engineering Ethics 25 (3):719-735 (2019)

Authors
Scott Robbins
Universität Bonn
Abstract
Many industry leaders and academics in the field of machine ethics would have us believe that, because robots will inevitably come to play a larger role in our lives, they must be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents (AMAs). The reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, the claim that such machines are better moral reasoners than humans, and the claim that building them would lead to a better understanding of human morality. Although some scholars have challenged the very initiative to develop AMAs, what is currently missing from the debate is a closer examination of the reasons machine ethicists offer to justify their development. This closer examination is especially needed given the amount of funding currently allocated to the development of AMAs, coupled with the media attention researchers and industry leaders receive for their efforts in this direction. The stakes in this debate are high because moral robots would make demands on society, requiring answers to a host of pending questions about what counts as an AMA and whether such machines are morally responsible for their behavior. This paper shifts the burden of proof back to the machine ethicists, demanding that they give good reasons to build AMAs. The paper argues that until this is done, the development of commercially available AMAs should not proceed further.
DOI 10.1007/s11948-018-0030-8


Similar books and articles

A Challenge for Machine Ethics.Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
Moral Machines?Michael S. Pritchard - 2012 - Science and Engineering Ethics 18 (2):411-417.
Artificial Moral Agents: Moral Mentors or Sensible Tools?Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
Robot Morals and Human Ethics: The Seminar.Wendell Wallach - 2010 - Teaching Ethics 11 (1):87-92.
Artificial Moral Agents: An Intercultural Perspective.Michael Nagenborg - 2007 - International Review of Information Ethics 7 (9):129-133.
What Do We Owe to Intelligent Robots?John-Stewart Gordon - 2020 - AI and Society 35 (1):209-223.
A Dilemma for Moral Deliberation in AI.Ryan Jenkins & Duncan Purves - forthcoming - International Journal of Applied Philosophy.
When Is a Robot a Moral Agent?John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.