Ethics and Information Technology 10 (2-3):123-133 (2008)
Floridi and Sanders' seminal work, “On the Morality of Artificial Agents,” has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that adopting certain levels of abstraction out of context can be dangerous when the level of abstraction obscures the humans who constitute computer systems. We arrive at this critique of Floridi and Sanders by examining the debate over the moral status of computer systems using the notion of interpretive flexibility. We frame the debate as a struggle over the meaning and significance of computer systems that behave independently, not as a debate about the ‘true’ status of autonomous systems. Our analysis leads to the conclusion that while levels of abstraction are useful for particular purposes, when it comes to agency and responsibility, computer systems should be conceptualized and identified in ways that keep them tethered to the humans who create and deploy them.
Keywords: artificial moral agents; autonomy; computer modeling; computers and society; independence; levels of abstraction; sociotechnical systems
Citations of this work
Marcus Schulzke (2013). Autonomous Weapons and Distributed Responsibility. Philosophy and Technology 26 (2):203-219.
Thomas M. Powers (2013). On the Moral Agency of Computers. Topoi 32 (2):227-236.
Similar books and articles
Emma Rooksby (2009). How to Be a Responsible Slave: Managing the Use of Expert Information Systems. Ethics and Information Technology 11 (1):81-90.
Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf (forthcoming). The Ethics of Designing Artificial Agents. Ethics and Information Technology.
Luciano Floridi & J. W. Sanders (2004). On the Morality of Artificial Agents. Minds and Machines 14 (3):349-379.
John P. Sullins (2005). Ethics and Artificial Life: From Modeling to Moral Agents. Ethics and Information Technology 7 (3):139-148.
Deborah G. Johnson (2006). Computer Systems: Moral Entities but Not Moral Agents. Ethics and Information Technology 8 (4):195-204.
Christopher Wareham (2011). On the Moral Equality of Artificial Agents. International Journal of Technoethics 2 (1):35-42.
Colin Allen, Iva Smit & Wendell Wallach (2005). Artificial Morality: Top-Down, Bottom-Up, and Hybrid Approaches. Ethics and Information Technology 7 (3):149-155.
Rafael Capurro (2008). On Floridi's Metaphysical Foundation of Information Ecology. Ethics and Information Technology 10 (2-3):167-173.
Gordana Dodig Crnkovic & Baran Çürüklü (2012). Robots: Ethical by Design. Ethics and Information Technology 14 (1):61-71.
Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf (2008). The Ethics of Designing Artificial Agents. Ethics and Information Technology 10 (2-3):115-121.
Added to index: 2009-01-28