Minds and Machines 14 (3):349-379 (2004)
Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on mind-less morality we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the Method of Abstraction for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The Method of Abstraction is explained in terms of an interface or set of features or observables at a given LoA. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the transition rules by which state is changed) at a given LoA. Morality may be thought of as a threshold defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold; and it is morally evil if some action violates it. 
That view is particularly informative when the agent constitutes a software or digital system and the observables are numerical. Finally, we review the consequences of our approach for Computer Ethics. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without their having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary cost of this facility is the extension of the class of agents and moral agents to embrace AAs.
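The abstract's three criteria for agenthood (interactivity, autonomy, adaptability) and its threshold notion of morality can be read as properties of a state-transition system observed at a fixed level of abstraction. The following sketch is purely illustrative and not the authors' formalism: the names `Agent`, `is_morally_good`, and `THRESHOLD` are hypothetical, and the single numerical observable stands in for the interface of observables determining the LoA.

```python
class Agent:
    """A state-transition system observed at one level of abstraction (LoA).

    Illustrative sketch only; the class and method names are assumptions,
    not the paper's notation. The sole observable here is one number.
    """

    def __init__(self, state):
        self.state = state  # the observable at this LoA

    def react(self, stimulus):
        # Interactivity: the agent responds to a stimulus by changing state.
        self.state += stimulus

    def act(self):
        # Autonomy: the agent changes state without any external stimulus.
        self.state -= 1

    def adapt(self):
        # Adaptability: the agent changes the transition rule by which
        # its state is changed (here, by rebinding its own `react`).
        self.react = lambda stimulus: setattr(
            self, "state", self.state + 2 * stimulus
        )


THRESHOLD = 0  # hypothetical moral threshold defined on the observable


def is_morally_good(observed_states):
    """Good iff every observed state respects the threshold;
    evil if some action violates it (per the abstract's definition)."""
    return all(s >= THRESHOLD for s in observed_states)


a = Agent(state=5)
history = [a.state]
a.react(3)          # interactivity
history.append(a.state)
a.act()             # autonomy
history.append(a.state)
print(is_morally_good(history))
```

The point of the sketch is only that, at a chosen LoA, "morally good" becomes a checkable predicate over the interface's observables, with no appeal to mental states or responsibility.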
Keywords: artificial agents, computer ethics, levels of abstraction, moral responsibility