Ethics and Information Technology 10 (2-3):115-121 (2008)
In their important paper "Autonomous Agents", Floridi and Sanders use "levels of abstraction" to argue that computers are, or may soon be, moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. Floridi and Sanders contributed definitions of autonomy, moral accountability, and responsibility, but they did not deeply explore some essential questions that computer scientists who design artificial agents need to answer. One such question is: "Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for its behavior?" To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by examining the concepts of unmodifiable, modifiable, and fully modifiable tables that control artificial agents. We demonstrate that an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders: such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even for an artificial agent with a fully modifiable table, capable of learning* and intentionality*, that meets the conditions Floridi and Sanders set for ascribing moral agency to an artificial agent, the designer retains strong moral responsibility.
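The contrast between an unmodifiable and a fully modifiable control table can be made concrete. The following is a minimal sketch, not drawn from the paper itself: the class names, percepts, and actions are hypothetical illustrations of the designer-view (LoA2) distinction between an agent whose table is fixed at design time and one that can rewrite its own table at run time.

```python
# Hypothetical sketch of table-driven agents (not the authors' code).
from types import MappingProxyType

# Unmodifiable table: behavior is fixed at design time by the designer.
designer_table = MappingProxyType({"obstacle": "stop", "clear": "advance"})

class FixedTableAgent:
    """Agent whose percept-to-action table cannot be changed after design."""
    def __init__(self, table):
        self.table = table  # read-only mapping; agent cannot rewrite it

    def act(self, percept):
        return self.table[percept]

class SelfModifyingAgent:
    """Agent that may rewrite its own table, i.e. change its 'programming'."""
    def __init__(self, table):
        self.table = dict(table)  # mutable private copy of the design table

    def act(self, percept):
        return self.table[percept]

    def learn(self, percept, new_action):
        # The agent alters its own behavior at run time.
        self.table[percept] = new_action

fixed = FixedTableAgent(designer_table)
adaptive = SelfModifyingAgent(designer_table)
adaptive.learn("obstacle", "reroute")

print(fixed.act("obstacle"))     # → stop    (designer-determined behavior)
print(adaptive.act("obstacle"))  # → reroute (no longer in the original table)
```

At LoA2, every action of `fixed` traces directly to an entry the designer wrote, while `adaptive` can exhibit behavior the designer never specified, which is exactly where the question of residual designer responsibility arises.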
Keywords: artificial agent; design of artificial agents; intentionality; learning machine; complexity; neural nets; responsibility
Citations of this work
F. S. Grodzinsky, K. W. Miller & M. J. Wolf (2011). Developing Artificial Agents Worthy of Trust: Would You Buy a Used Car From This Artificial Agent? Ethics and Information Technology 13 (1):17-27.
Marcus Schulzke (2013). Autonomous Weapons and Distributed Responsibility. Philosophy and Technology 26 (2):203-219.
Ugo Pagallo (2011). Robots of Just War: A Legal Perspective. Philosophy and Technology 24 (3):307-323.
Gordana Dodig Crnkovic & Baran Çürüklü (2012). Robots: Ethical by Design. Ethics and Information Technology 14 (1):61-71.
U. Pagallo (2012). Cracking Down on Autonomy: Three Challenges to Design in IT Law. Ethics and Information Technology 14 (4):319-328.
Similar books and articles
M. H. Lee & N. J. Lacey (2003). The Influence of Epistemology on the Design of Artificial Agents. Minds and Machines 13 (3):367-395.
Mariarosaria Taddeo (2010). Modelling Trust in Artificial Agents, A First Step Toward the Analysis of E-Trust. Minds and Machines 20 (2):243-257.
Colin Allen, Iva Smit & Wendell Wallach (2005). Artificial Morality: Top-Down, Bottom-Up, and Hybrid Approaches. Ethics and Information Technology 7 (3):149-155.
Magnus Boman (1999). Norms in Artificial Decision Making. Artificial Intelligence and Law 7 (1):17-35.
Christopher Wareham (2011). On the Moral Equality of Artificial Agents. International Journal of Technoethics 2 (1):35-42.
Jeffrey White (2013). Manufacturing Morality: A General Theory of Moral Agency Grounding Computational Implementations: The ACTWith Model. In Floares (ed.), Computational Intelligence. Nova Publications, 1-65.
Nicola Lacey & M. Lee (2003). The Epistemological Foundations of Artificial Agents. Minds and Machines 13 (3):339-365.
Luciano Floridi & J. W. Sanders (2004). On the Morality of Artificial Agents. Minds and Machines 14 (3):349-379.
Deborah G. Johnson & Keith W. Miller (2008). Un-Making Artificial Moral Agents. Ethics and Information Technology 10 (2-3):123-133.
Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf (forthcoming). The Ethics of Designing Artificial Agents. Ethics and Information Technology.