The Ethics of Designing Artificial Agents. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):115-121.
In their important paper “On the Morality of Artificial Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are, or may soon be, moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. Floridi and Sanders contributed definitions of autonomy, moral accountability, and responsibility, but they did not deeply explore some essential questions that computer scientists who design artificial agents must answer. One such question is: “Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for its behavior?” To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by examining unmodifiable, modifiable, and fully modifiable tables that control artificial agents. We demonstrate that an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders: such an agent is not a moral agent, and its designer bears full responsibility for its behavior. We also demonstrate that even an artificial agent with a fully modifiable table, capable of learning* and intentionality* and thus meeting the conditions Floridi and Sanders set for ascribing moral agency, leaves its designer with strong moral responsibility.
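To make the table distinction concrete, the following minimal Python sketch (ours, not the authors'; the names FixedTableAgent, SelfModifyingAgent, act, and rewrite_rule are hypothetical illustrations) contrasts an agent whose percept-to-action table is frozen at design time with one that can rewrite its own entries at run time, a difference visible only at LoA2, the designer view.

from types import MappingProxyType

class FixedTableAgent:
    """LoA2 view: the control table is unmodifiable at run time."""
    def __init__(self, table):
        # MappingProxyType exposes a read-only view of the designer's table.
        self._table = MappingProxyType(dict(table))

    def act(self, percept):
        # Every behavior traces to an entry the designer authored.
        return self._table.get(percept, "do_nothing")

class SelfModifyingAgent(FixedTableAgent):
    """LoA2 view: the agent may rewrite entries of its own table."""
    def __init__(self, table):
        self._table = dict(table)  # fully modifiable table

    def rewrite_rule(self, percept, new_action):
        # The designer did not author this particular entry, but did
        # author the mechanism that makes rewriting possible.
        self._table[percept] = new_action

agent = SelfModifyingAgent({"obstacle": "stop"})
agent.rewrite_rule("obstacle", "swerve")  # behavior the designer never wrote
print(agent.act("obstacle"))              # -> swerve

Even in the self-modifying case, the designer authored rewrite_rule itself, which is the intuition behind the abstract's claim that the designer retains strong moral responsibility.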
Keywords: artificial agent, design of artificial agents, intentionality, learning, machine complexity, neural nets, responsibility
Citations of this work
Developing Artificial Agents Worthy of Trust: Would You Buy a Used Car From This Artificial Agent? [REVIEW] F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
Autonomous Weapons and Distributed Responsibility. Marcus Schulzke - 2013 - Philosophy and Technology 26 (2):203-219.
Artificial Moral Agents Are Infeasible with Foreseeable Technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
Robots: Ethical by Design. Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
Robots of Just War: A Legal Perspective. Ugo Pagallo - 2011 - Philosophy and Technology 24 (3):307-323.
Similar books and articles
The Influence of Epistemology on the Design of Artificial Agents. M. H. Lee & N. J. Lacey - 2003 - Minds and Machines 13 (3):367-395.
Modelling Trust in Artificial Agents, A First Step Toward the Analysis of E-Trust. Mariarosaria Taddeo - 2010 - Minds and Machines 20 (2):243-257.
Artificial Morality: Top-Down, Bottom-Up, and Hybrid Approaches. [REVIEW] Colin Allen, Iva Smit & Wendell Wallach - 2005 - Ethics and Information Technology 7 (3):149-155.
Norms in Artificial Decision Making. Magnus Boman - 1999 - Artificial Intelligence and Law 7 (1):17-35.
On the Moral Equality of Artificial Agents. Christopher Wareham - 2011 - International Journal of Technoethics 2 (1):35-42.
Manufacturing Morality: A General Theory of Moral Agency Grounding Computational Implementations: The ACTWith Model. Jeffrey White - 2013 - In Floares (ed.), Computational Intelligence. Nova Publications. pp. 1-65.
The Epistemological Foundations of Artificial Agents. Nicola Lacey & M. Lee - 2003 - Minds and Machines 13 (3):339-365.
On the Morality of Artificial Agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
Un-Making Artificial Moral Agents. Deborah G. Johnson & Keith W. Miller - 2008 - Ethics and Information Technology 10 (2-3):123-133.