
Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches

Ethics and Information Technology

Abstract

A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing artificial morality and the differing criteria for success that are appropriate to different strategies.
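To make the contrast among these strategies concrete, the following is a minimal Python sketch, not drawn from the paper; all names here (Action, BottomUpEvaluator, no_harm) are hypothetical. It shows one way a top-down rule filter, a bottom-up learned evaluator, and a hybrid combination of the two might be wired together in a single agent.

```python
# A minimal, hypothetical sketch (not from the paper): names such as
# Action, BottomUpEvaluator, and the no_harm rule are illustrative only.

from dataclasses import dataclass
from typing import Callable, Dict, List, Optional


@dataclass
class Action:
    """A candidate action with features a moral agent might evaluate."""
    name: str
    harms_person: bool       # feature consulted by the top-down rules
    observed_reward: float   # feedback signal used by the bottom-up learner


def top_down_permissible(action: Action,
                         rules: List[Callable[[Action], bool]]) -> bool:
    """Top-down: an action is permissible only if every explicit rule allows it."""
    return all(rule(action) for rule in rules)


class BottomUpEvaluator:
    """Bottom-up: a score learned from experience (a running average of feedback)."""

    def __init__(self) -> None:
        self.scores: Dict[str, float] = {}
        self.counts: Dict[str, int] = {}

    def update(self, action: Action) -> None:
        """Incrementally fold observed feedback into the learned score."""
        n = self.counts.get(action.name, 0) + 1
        prev = self.scores.get(action.name, 0.0)
        self.counts[action.name] = n
        self.scores[action.name] = prev + (action.observed_reward - prev) / n

    def score(self, action: Action) -> float:
        return self.scores.get(action.name, 0.0)


def choose_action(candidates: List[Action],
                  rules: List[Callable[[Action], bool]],
                  learner: BottomUpEvaluator) -> Optional[Action]:
    """Hybrid: explicit rules filter the options, learned scores rank what remains."""
    permitted = [a for a in candidates if top_down_permissible(a, rules)]
    if not permitted:
        return None  # no candidate survives the explicit constraints
    return max(permitted, key=learner.score)


if __name__ == "__main__":
    no_harm = lambda a: not a.harms_person   # stand-in for a deontic constraint
    learner = BottomUpEvaluator()
    options = [Action("assist", False, 1.0), Action("coerce", True, 2.0)]
    for a in options:
        learner.update(a)
    best = choose_action(options, [no_harm], learner)
    print(best.name if best else "no permissible action")  # -> "assist"
```

The hybrid arrangement in this sketch treats the explicit rules as hard constraints and the learned scores as a ranking over what survives; other divisions of labor between the two components are equally consistent with the paper's framing.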



Author information

Corresponding author

Correspondence to Colin Allen.

About this article

Cite this article

Allen, C., Smit, I. & Wallach, W. Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches. Ethics Inf Technol 7, 149–155 (2005). https://doi.org/10.1007/s10676-006-0004-4
