
Robots: ethical by design

Original paper, Ethics and Information Technology

Abstract

Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both questions in the affirmative, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees, depending on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems such as cognitive robots are being developed that are expected to become part of our everyday lives in the coming decades, so it is necessary to ensure that their behaviour is adequate. By analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is the ability of a machine to perform activities that would require morality in humans. The capacities for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions and artificial (synthetic) emotions, come in varying degrees and depend on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. Just as the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to regard artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system. This does not take away the responsibilities of the other stakeholders in the system, but facilitates the understanding and regulation of such networks. It should be pointed out that development must take an evolutionary form, with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through the discussion and analysis of general requirements for the design of ethical robots.


Notes

  1. http://world.honda.com/ASIMO/technology/spec.html, http://hansonrobotics.wordpress.com/about/.

  2. See e.g. the AAAI Fall 2005 Symposium on Machine Ethics, http://www.aaai.org/Press/Reports/Symposia/Fall/fs-05-06.php, and the Machine Ethics Consortium, http://uhaweb.hartford.edu/anderson/machineethicsconsortium.html.

  3. This understanding of the necessary connection between responsibility and blame builds on the underlying supposition that an error is always a problem of an individual agent and not a problem of the system as a whole. It also implies that the system of individual agents is regulated by order and punishment. This is fundamentally different from modern safety culture approaches which, starting from individual responsibility, emphasize global properties of system safety.

  4. Davis (2010), for example, distinguishes nine senses of “responsibility”, one of which is (e) responsibility as a domain of tasks (things that one is supposed to do), the type of responsibility we argue should be ascribed to robots.

  5. http://en.wikipedia.org/wiki/Safety_by_design.

  6. http://patientsafetyed.duhs.duke.edu/module_c/what_do_we_mean.html.

  7. The Fukushima disaster reminds us both how risky the nuclear industry is and how highly reliable it is under normal conditions. It also gives us reason to think about the consequences of rare catastrophic events.

References

  • Adam, A. (2005). Delegating and distributing morality: Can we inscribe privacy protection in a machine? Ethics and Information Technology, 7, 233–242.

  • Adam, A. (2008). Ethics for things. Ethics and Information Technology, 10(2–3), 149–154.

  • Akan, B., Çürüklü, B., Spampinato, G., & Asplund, L. (2010). Towards robust human-robot collaboration in industrial environments. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (pp. 71–72).

  • Allen, C., Smit, I., & Wallach, W. (2005). Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and Information Technology, 7, 149–155.

  • Allen, C., Smit, I., & Wallach, W. (2006). Why machine ethics? IEEE Intelligent Systems, July/August 2006, pp. 12–17.

  • Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12(3), 251–261.

  • Anderson, M., & Anderson, S. L. (2007). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4), 15–25.

  • Arkin, R. C. (1998). Behavior-based robotics. Cambridge: MIT Press.

  • Asaro, P. M. (2007). Robots and responsibility from a legal perspective. Proceedings of the IEEE 2007 International Conference on Robotics and Automation, Workshop on RoboEthics, Rome.

  • Aurum, A., & Wohlin, C. (2003). The fundamental nature of requirements engineering activities as a decision-making process. Information and Software Technology, 45(14), 945–954.

  • Beavers, A. (2011). Moral machines and the threat of ethical nihilism. In Patrick Lin, George Bekey, & Keith Abney (Eds.), Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press.

  • Becker, B. (2006). Social robots—emotional agents: Some remarks on naturalizing man-machine interaction. International Review of Information Ethics (IRIE), 6, 37–45.

  • Brey, P. (2006). Freedom and privacy in ambient intelligence. Ethics and Information Technology, 7(3), 157–166.

  • Brey, P. (2008). Technological design as an evolutionary process. Philosophy and Design, 1, 61–75.

  • Bynum, T. W., & Rogerson, S. (Eds.). (2004). Computer ethics and professional responsibility. Kundli, India: Blackwell.

  • Capurro, R., & Nagenborg, M. (Eds.). (2009). Ethics and robotics. Amsterdam: IOS Press.

  • Clark, A. (2003). Natural-born cyborgs: Minds, technologies, and the future of human intelligence. Oxford: Oxford University Press.

  • Coeckelbergh, M. (2009). Virtual moral agency, virtual moral responsibility: On the moral significance of the appearance, perception, and performance of artificial agents. AI & Society, 24, 188–189.

  • Coeckelbergh, M. (2010). Moral appearances: Emotions, robots, and human morality. Ethics and Information Technology, published online 18 March 2010.

  • Coleman, K. G. (2008). Computing and moral responsibility. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2008 Edition). http://plato.stanford.edu/archives/fall2008/entries/computing-responsibility/.

  • Crutzen, C. K. M. (2006). Invisibility and the meaning of ambient intelligence. International Review of Information Ethics (IRIE), 6 (Ethics in Robotics), 52–60.

  • Çürüklü, B., Dodig-Crnkovic, G., & Akan, B. (2010). Towards industrial robots with human-like moral responsibilities. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (pp. 85–86).

  • Danielson, P. (1992). Artificial morality virtuous robots for virtual games. London: Routledge.

  • Davis, M. (2010). Ain’t no one here but us social forces: Constructing the professional responsibility of engineers. Science and Engineering Ethics, pp. 1–22.

  • Dennett, D. C. (1973). Mechanism and responsibility. In T. Honderich (Ed.), Essays on freedom of action. Boston: Routledge & Kegan Paul.

  • Dennett, D. C. (1994). The myth of original intentionality. In E. Dietrich (Ed.), Thinking computers and virtual persons: Essays on the intentionality of machines (pp. 91–107). San Diego, CA and London: Academic Press.

  • Dodig-Crnkovic, G. (1999). ABB Atom’s criticality safety handbook. ICNC’99, Sixth International Conference on Nuclear Criticality Safety, Versailles, France.

  • Dodig-Crnkovic, G. (2005). On the importance of teaching professional ethics to computer science students. Computing and Philosophy Conference E-CAP 2004, Pavia, Italy. In L. Magnani (Ed.), Computing and philosophy. Associated International Academic Publishers.

  • Dodig-Crnkovic, G. (2006). Professional ethics in computing and intelligent systems. Proceedings of the Ninth Scandinavian Conference on Artificial Intelligence (SCAI 2006), Espoo, Finland, Oct 25–27.

  • Dodig-Crnkovic, G., & Persson, D. (2008). Sharing moral responsibility with robots: A pragmatic approach. In A. Holst, P. Kreuger & P. Funk (Eds.), Tenth Scandinavian Conference on Artificial Intelligence SCAI 2008. Vol. 173, Frontiers in Artificial Intelligence and Applications.

  • Edgar, S. L. (1997). Morality and machines: Perspectives on computer ethics. Sudbury, MA: Jones and Bartlett Publishers.

  • Eshleman, A. (2009). Moral responsibility. In Edward N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2009 Edition). http://plato.stanford.edu/archives/win2009/entries/moral-responsibility/.

  • Fellous, J.-M., & Arbib, M. A. (Eds.). (2005). Who needs emotions?: The brain meets the robot. Oxford: Oxford University Press.

  • Floridi, L. (2007). Distributed morality in multiagent systems. Paper Presented at CEPE 2007, San Diego. http://cepe2007.sandiego.edu/abstractDetail.asp?ID=40.

  • Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.

  • Gates, B. (2007). A robot in every home. Scientific American, 296, 58–65.

  • Grodzinsky, F. S., Miller, K. W., & Wolf, M. J. (2008). The ethics of designing artificial agents. Ethics and Information Technology, 11(1), 115–121.

  • Grodzinsky, F., Miller, K., & Wolf, M. (2011). Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”. Ethics and Information Technology, 13(1), 17–27.

  • Hansson, S. O. (1997). The limits of precaution. Foundations of Science, 2, 293–306.

  • Hansson, S. O. (1999). Adjusting scientific practices to the precautionary principle. Human and Ecological Risk Assessment, 5, 909–921.

  • Huff, C. (2004). Unintentional power in the design of computing systems. In T. W. Bynum & S. Rogerson (Eds.), Computer ethics and professional responsibility (pp. 98–106). Kundli, India: Blackwell Publishing.

  • Huff, C. (2010). “Why a sociotechnical system?” http://computingcases.org/general_tools/sia/socio_tech_system.html.

  • Järvik, M. (2003). How to understand moral responsibility? Trames, No. 3, Teaduste Akadeemia Kirjastus, pp. 147–163.

  • Johnson, D. G. (1994). Computer ethics. Upper Saddle River, NJ: Prentice-Hall, Inc.

  • Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8, 195–204.

  • Johnson, D. G., & Miller, K. W. (2006). A dialogue on responsibility, moral agency, and IT systems. In Proceedings of the 2006 ACM Symposium on Applied Computing (pp. 272–276). Dijon, France.

  • Johnson, D. G., & Powers, T. M. (2005). Computer systems and responsibility: A normative look at technological complexity. Ethics and Information Technology, 7, 99–107.

  • Larsson, M. (2004). Predicting quality attributes in component-based software systems, PhD Thesis. Sweden: Mälardalen University Press. ISBN: 91-88834-33-6.

  • Latour, B. (1992). Where are the missing masses? The sociology of a few mundane artifacts. In Wiebe Bijker & John Law (Eds.), Shaping technology/building society: Studies in sociotechnical change (pp. 225–259). Cambridge, MA: MIT Press.

  • Levy, D. N. L. (2006). Robots unlimited: Life in a virtual age. Natick, Massachusetts: A K Peters, Ltd.

  • Lin, P., Bekey, G., & Abney, K. (2008). Autonomous military robotics: Risk, ethics, and design. http://ethics.calpoly.edu/ONR_report.pdf.

  • Lin, P., Bekey, G., & Abney, K. (2009). Robots in war: Issues of risk and ethics. In Rafael Capurro & Michael Nagenborg (Eds.), Ethics and robotics. Heidelberg, Germany: AKA Verlag/IOS Press.

  • Magnani, L. (2007). Distributed morality and technological artifacts. 4th International Conference on Human being in Contemporary Philosophy, Volgograd. http://volgograd2007.goldenideashome.com/2%20Papers/Magnani%20Lorenzo%20p.pdf.

  • Marino, D., & Tamburrini, G. (2006). Learning robots and human responsibility. International Review of Information Ethics (IRIE), 6, 46–51.

  • Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175–183.

  • McKenna, M. (2009). Compatibilism. In: Edward N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2009 Edition). http://plato.stanford.edu/archives/win2009/entries/compatibilism/.

  • Miller, K. W. (2011). Moral responsibility for computing artifacts: The rules. IT Professional, 13(3), 57–59.

  • Minsky, M. (2006). The emotion machine: Commonsense thinking, artificial intelligence, and the future of the human mind. NY: Simon & Schuster, Inc.

  • Mitcham, C. (1995). Computers, information and ethics: A review of issues and literature. Science and Engineering Ethics, 1(2), 113–132.

  • Montague, P. (1998). The precautionary principle. Rachel’s Environment and Health Weekly, No. 586. http://www.biotech-info.net/rachels_586.html.

  • Moor, J. (1985). What is computer ethics? Metaphilosophy, 16(4), 266–275.

  • Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, July/August 2006, pp. 18–21.

  • Moravec, H. (1999). Robot: Mere machine to transcendent mind. Oxford, New York: Oxford University Press.

  • Nagenborg, M. (2007). Artificial moral agents: An intercultural perspective. International Review of Information Ethics, 7(09), 129–133.

  • Nissenbaum, H. (1994). Computing and accountability. Communications of the ACM, 37(1), 73–80.

  • Nobre, F. S., Tobias, A. M., & Walker, D. S. (2009). Organizational and technological implications of cognitive machines: Designing future information management systems. IGI Global, pp. 1–338. doi:10.4018/978-1-60566-302-9.

  • Nof, S. Y. (Ed.). (1999). Handbook of industrial robotics (2nd ed.). Hoboken, New Jersey: Wiley.

  • Nuseibeh, B., & Easterbrook, S. (2000). Requirements engineering: A roadmap. In Proceedings of the International Conference on Software Engineering (ICSE 2000) (pp. 4–11). Limerick, Ireland: ACM Press.

  • Pimple, K. D. (2011). Surrounded by machines. Communications of the ACM, 54(3), 29–31.

  • Russell, S., & Norvig, P. (2003). Artificial intelligence: A modern approach. Upper Saddle River, NJ: Pearson Education.

  • Scheutz, M. (Ed.). (2002). Computationalism: New directions. Cambridge, MA: MIT Press.

  • Shrader-Frechette, K. (2003). Technology and ethics. In R. C. Scharff & V. Dusek (Eds.), Philosophy of technology—The technological condition (pp. 187–190). Padstow, United Kingdom: Blackwell Publishing.

  • Silver, D. (2005). A Strawsonian defense of corporate moral responsibility. American Philosophical Quarterly, 42, 279–295.

  • Siponen, M. (2004). A pragmatic evaluation of the theory of information ethics. Ethics and Information Technology, 6(4), 279–290.

  • Som, C., Hilty, L. M., & Ruddy, T. F. (2004). The precautionary principle in the information society. Human and Ecological Risk Assessment, 10(5), 787–799.

  • Sommerville, I. (2007). Models for responsibility assignment. In G. Dewsbury & J. Dobson (Eds.), Responsibility and dependable systems. Springer. ISBN 1846286255.

  • Stahl, B. C. (2004). Information, ethics, and computers: The problem of autonomous moral agents. Minds and Machines, 14, 67–83.

  • Strawson, P. F. (1974). Freedom and resentment. In Freedom and resentment and other essays. London: Methuen.

  • Sullins, J. P. (2006). When is a robot a moral agent? International Review of Information Ethics, 6(12), 23–30.

  • Vallverdú, J., & Casacuberta, D. (2009). Handbook of research on synthetic emotions and sociable robotics: New applications in affective computing and artificial intelligence. IGI Global. doi:10.4018/978-1-60566-354-8.

  • Van de Poel, I. R., & Verbeek, P. P. (Eds.) (2006). Special issue on ethics and engineering design. Science, Technology and Human Values 31(3), 223–380.

  • Verbeek, P.-P. (2008). Morality in design: Design ethics and the morality of technological artifacts. In P. E. Vermaas, P. A. Kroes, A. Light, & S. Moore (Eds.), Philosophy and design (pp. 91–103). Berlin, Germany: Springer.

  • Veruggio, G. (2006). The EURON Roboethics Roadmap, Humanoids’06, December 6, 2006, Genoa, Italy.

  • Veruggio, G., & Operto, F. (2008). Roboethics (Chapter 64). In Springer handbook of robotics. Berlin, Heidelberg: Springer.

  • Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.

  • Warwick, K. (2009). Today it’s a cute friend. Tomorrow it could be the dominant life form. The Times (London), 25 February 2009. http://www.timesonline.co.uk/tol/comment/columnists/guest_contributors/article5798625.ece.

Acknowledgments

The authors want to thank Mark Coeckelbergh for the enlightening discussion of several ideas central to this paper—distributed responsibility, artifactual morality and the role of emotional intelligence. We also gratefully acknowledge Keith Miller’s insightful response to an earlier version of this article. Last, but not least, we greatly appreciate the three anonymous reviewers’ valuable comments.

Author information

Correspondence to Gordana Dodig Crnkovic.


About this article

Cite this article

Dodig Crnkovic, G., Çürüklü, B. Robots: ethical by design. Ethics Inf Technol 14, 61–71 (2012). https://doi.org/10.1007/s10676-011-9278-2

