The ethics of designing artificial agents

Abstract

In their important paper, “On the Morality of Artificial Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are, or may soon be, moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they did not explore deeply some essential questions that must be answered by computer scientists who design artificial agents. One such question is: can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for its behavior? To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by examining the concepts of unmodifiable, modifiable and fully modifiable tables that control artificial agents. We demonstrate that, when viewed at LoA2, an unmodifiable table distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders: such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even if an artificial agent with a fully modifiable table were capable of learning* and intentionality*, thereby meeting the conditions Floridi and Sanders set for ascribing moral agency to an artificial agent, the designer would retain strong moral responsibility.
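The distinction among unmodifiable, modifiable and fully modifiable tables can be made concrete in code. The following sketch is our illustration, not the authors' implementation; the class and method names are hypothetical, and the stimulus–action entries are invented for the example. It contrasts an agent whose mapping table is frozen at design time with one permitted to rewrite its own entries (learning*):

    # Minimal sketch of table-driven agents; all names are illustrative only.

    class UnmodifiableAgent:
        """Agent whose stimulus -> action table is fixed by the designer."""
        def __init__(self, table):
            self._table = dict(table)  # copied once at design time; never altered

        def act(self, stimulus):
            # Every response traces to a designer-authored entry, so the
            # designer bears full responsibility for the behavior (LoA2 view).
            return self._table.get(stimulus, "no-op")

    class FullyModifiableAgent(UnmodifiableAgent):
        """Agent permitted to rewrite its own table (learning*)."""
        def learn(self, stimulus, action):
            # The agent overwrites designer-authored entries at run time, so
            # its table may diverge arbitrarily from the original design.
            self._table[stimulus] = action

    designed = {"obstacle": "turn", "goal": "approach"}
    fixed = UnmodifiableAgent(designed)
    adaptive = FullyModifiableAgent(designed)
    adaptive.learn("obstacle", "approach")  # deviates from the design
    print(fixed.act("obstacle"), adaptive.act("obstacle"))  # turn approach

At LoA1 the two agents are indistinguishable until the adaptive one deviates; only at LoA2, where the table itself is visible, is the difference apparent from the start.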


Abbreviations

LoA1: Level of abstraction 1 refers to the user's view of an autonomous system

LoA2: Level of abstraction 2 refers to the designer's view of an autonomous system

MTP: Mapping table processing refers to a technique for considering the internal workings of an autonomous system
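As an informal illustration of the two levels (again ours, with hypothetical names and invented table entries), LoA1 exposes only an agent's externally visible stimulus–response behavior, while LoA2, via mapping table processing, examines the table itself:

    # Hypothetical sketch: one table-driven agent, two levels of abstraction.

    table = {"ping": "pong", "stop": "halt"}  # invented entries

    def loa1_observe(stimulus):
        # LoA1 (user view): only the response is visible.
        return table.get(stimulus, "no-op")

    def loa2_inspect():
        # LoA2 (designer view): MTP examines the mapping table directly.
        return dict(table)

    print(loa1_observe("ping"))  # pong
    print(loa2_inspect())        # {'ping': 'pong', 'stop': 'halt'}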

References

  • D.C. Dennett. Intentional Systems. In J. Haugeland, editor, Mind Design. Bradford Books, Montgomery, Vermont, 1981. Also see http://www.cs.umu.se/kurser/TDBC12/HT99/dennett2.html

  • D. Fisher and H.F. Lipson. Emergent Algorithms – A New Method for Enhancing Survivability in Unbounded Systems. In Proceedings of the Hawaii International Conference on System Sciences, p. 7043, 1999

  • L. Floridi and J.W. Sanders. On the Morality of Artificial Agents. Minds and Machines, 14(3): 349–379, 2004. http://www.wolfson.ox.ac.uk/~floridi/pdf/omaa.pdf

  • R. Gore, P.F. Reynolds, Jr., L. Tang, and D.C. Brogan. Explanation Exploration: Exploring Emergent Behavior. In Proceedings of the 21st International Workshop on Principles of Advanced and Distributed Simulation, pp. 113–122. IEEE Computer Society, Washington, DC, 2007

  • P. Heck and S. Ghosh. A Study of Synthetic Creativity: Behavior Modeling and Simulation of an Ant Colony. IEEE Intelligent Systems, 15(6): 58–66, 2000

  • P. Jacob. Intentionality. In Stanford Encyclopedia of Philosophy, 2003. http://plato.stanford.edu/entries/intentionality/. Accessed April 9, 2007

  • J.S. Mill. A System of Logic Ratiocinative and Inductive. John W. Parker and Son, London, 1872

  • D.L. Parnas, A.J. van Schouwen, and S.P. Kwan. Evaluation of Safety-Critical Software. Communications of the ACM, 33(6): 636–648, 1990

Acknowledgements

The authors would like to thank the participants of the SCSU Research Symposium on Artificial Agency, who helped us through our conceptual muddles. The useful idea of defining intentionality* and learning* was Dr. Kenneth Himma’s.

Author information

Correspondence to Frances S. Grodzinsky.

Cite this article

Grodzinsky, F.S., Miller, K.W. & Wolf, M.J. The ethics of designing artificial agents. Ethics Inf Technol 10, 115–121 (2008). https://doi.org/10.1007/s10676-008-9163-9
