Abstract
Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the affirmative, based on an extensive study of the existing literature. Our contribution consists in bringing together and reinterpreting arguments from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems such as cognitive robots are being developed that are expected to become part of our everyday lives in the coming decades, so it is necessary to ensure that their behaviour is adequate. By analogy with artificial intelligence, understood as the ability of a machine to perform activities that would require intelligence in humans, artificial morality is the ability of a machine to perform activities that would require morality in humans. The capacities for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions and artificial (synthetic) emotions, come in varying degrees and depend on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to view artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system.
This does not take away the responsibilities of the other stakeholders in the system, but rather facilitates the understanding and regulation of such networks. It should be pointed out that the development process must take an evolutionary form, with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through its discussion and analysis of general requirements for the design of ethical robots.
Notes
See e.g. the AAAI Fall 2005 Symposium on Machine Ethics (http://www.aaai.org/Press/Reports/Symposia/Fall/fs-05-06.php) and the Machine Ethics Consortium (http://uhaweb.hartford.edu/anderson/machineethicsconsortium.html).
This understanding of the necessary connection between responsibility and blame builds on the underlying supposition that an error is always a problem of an individual agent and not of the system as a whole. It also implies that the system of individual agents is regulated by order and punishment. This is fundamentally different from modern safety-culture approaches which, starting from individual responsibility, emphasize the global properties of system safety.
Davis (2010), e.g., distinguishes among nine senses of “responsibility”, one of these being (e) responsibility as a domain of tasks (things that one is supposed to do), which is the type of responsibility we argue should be ascribed to robots.
The Fukushima disaster reminds us how risky the nuclear industry is, however highly reliable it may be under normal conditions. It also gives us reason to think about the consequences of rare catastrophic events.
References
Adam, A. (2005). Delegating and distributing morality: Can we inscribe privacy protection in a machine? Ethics and Information Technology, 7, 233–242.
Adam, A. (2008). Ethics for things. Ethics and Information Technology, 10(2–3), 149–154.
Akan, B., Çürüklü, B., Spampinato, G., & Asplund, L. (2010). Towards robust human robot collaboration in industrial environments. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (pp. 71–72).
Allen, C., Smit, I., & Wallach, W. (2005). Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and Information Technology, 7, 149–155.
Allen, C., Smit, I., & Wallach, W. (2006). Why machine ethics? IEEE Intelligent Systems, 21(4), 12–17.
Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12(3), 251–261.
Anderson, M., & Anderson, S. L. (2007). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4), 15–25.
Arkin, R. C. (1998). Behavior-based robotics. Cambridge: MIT Press.
Asaro, P. M. (2007). Robots and responsibility from a legal perspective. Proceedings of the IEEE 2007 International Conference on Robotics and Automation, Workshop on RoboEthics, Rome.
Aurum, A., & Wohlin, C. (2003). The fundamental nature of requirements engineering activities as a decision-making process. Information and Software Technology, 45(14), 945–954.
Beavers, A. (2011). Moral machines and the threat of ethical nihilism. In Patrick Lin, George Bekey, & Keith Abney (Eds.), Robot ethics: The ethical and social implication of robotics. Cambridge, MA: MIT Press.
Becker, B. (2006). Social robots—emotional agents: Some remarks on naturalizing man-machine interaction. International Review of Information Ethics (IRIE), 6, 37–45.
Brey, P. (2006). Freedom and privacy in ambient intelligence. Ethics and Information Technology, 7(3), 157–166.
Brey, P. (2008). Technological design as an evolutionary process. Philosophy and Design, 1, 61–75.
Bynum, T. W., & Rogerson, S. (Eds.). (2004). Computer ethics and professional responsibility. Kundli, India: Blackwell.
Capurro, R., & Nagenborg, M. (Eds.). (2009). Ethics and robotics. Amsterdam: IOS Press.
Clark, A. (2003). Natural-born cyborgs: Minds, technologies, and the future of human intelligence. Oxford: Oxford University Press.
Coeckelbergh, M. (2009). Virtual moral agency, virtual moral responsibility: On the moral significance of the appearance, perception, and performance of artificial agents. AI & Society, 24, 188–189.
Coeckelbergh, M. (2010). Moral appearances: Emotions, robots, and human morality. Ethics and Information Technology, published online 18 March 2010.
Coleman, K. G. (2008). Computing and moral responsibility. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2008 Edition). http://plato.stanford.edu/archives/fall2008/entries/computing-responsibility/.
Crutzen, C. K. M. (2006). Invisibility and the meaning of ambient intelligence. International Journal of Information Ethics, 6(006), 52–60. Ethics in Robotics.
Çürüklü, B., Dodig-Crnkovic, G., & Akan, B. (2010). Towards industrial robots with human like moral responsibilities. In Proceedings 5th ACM/IEEE International Conference on Human-Robot Interaction (pp. 85–86).
Danielson, P. (1992). Artificial morality: Virtuous robots for virtual games. London: Routledge.
Davis, M. (2010). Ain’t no one here but us social forces: Constructing the professional responsibility of engineers. Science and Engineering Ethics, 1–22.
Dennett, D. C. (1973). Mechanism and responsibility. In T. Honderich (Ed.), Essays on freedom of action. Boston: Routledge & Kegan Paul.
Dennett, D. C. (1994). The myth of original intentionality. In E. Dietrich (Ed.), Thinking computers and virtual persons: Essays on the intentionality of machines (pp. 91–107). San Diego, CA and London: Academic Press.
Dodig-Crnkovic, G. (1999). ABB Atom’s criticality safety handbook. ICNC’99 Sixth International Conference on Nuclear Criticality Safety, Versailles, France.
Dodig-Crnkovic, G. (2005). On the importance of teaching professional ethics to computer science students, computing and philosophy conference, E-CAP 2004, Pavia, Italy. In L. Magnani (Ed.), Computing and philosophy. Associated International Academic Publishers.
Dodig-Crnkovic, G. (2006). Professional ethics in computing and intelligent systems. Proceedings of the Ninth Scandinavian Conference on Artificial Intelligence (SCAI 2006), Espoo, Finland, Oct 25–27.
Dodig-Crnkovic, G., & Persson, D. (2008). Sharing moral responsibility with robots: A pragmatic approach. In A. Holst, P. Kreuger & P. Funk (Eds.), Tenth Scandinavian Conference on Artificial Intelligence SCAI 2008. Vol. 173, Frontiers in Artificial Intelligence and Applications.
Edgar, S. L. (1997). Morality and machines: Perspectives on computer ethics. Sudbury, MA: Jones and Bartlett Publishers.
Eshleman, A. (2009). Moral responsibility. In Edward N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2009 Edition). http://plato.stanford.edu/archives/win2009/entries/moral-responsibility/.
Fellous, J.-M., & Arbib, M. A. (Eds.). (2005). Who needs emotions?: The brain meets the robot. Oxford: Oxford University Press.
Floridi, L. (2007). Distributed morality in multiagent systems. Paper Presented at CEPE 2007, San Diego. http://cepe2007.sandiego.edu/abstractDetail.asp?ID=40.
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.
Gates, B. (2007). A robot in every home. Scientific American, 296, 58–65.
Grodzinsky, F. S., Miller, K. W., & Wolf, M. J. (2008). The ethics of designing artificial agents. Ethics and Information Technology, 11(1), 115–121.
Grodzinsky, F., Miller, K., & Wolf, M. (2011). Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”. Ethics and Information Technology, 13(1), 17–27.
Hansson, S. O. (1997). The limits of precaution. Foundations of Science, 2, 293–306.
Hansson, S. O. (1999). Adjusting scientific practices to the precautionary principle. Human and Ecological Risk Assessment, 5, 909–921.
Huff, C. (2004). Unintentional power in the design of computing systems. In T. W. Bynum & S. Rogerson (Eds.), Computer ethics and professional responsibility (pp. 98–106). Kundli, India: Blackwell Publishing.
Huff, C. (2010). “Why a sociotechnical system?” http://computingcases.org/general_tools/sia/socio_tech_system.html.
Järvik, M. (2003). How to understand moral responsibility? Trames, 3, 147–163. Teaduste Akadeemia Kirjastus.
Johnson, D. G. (1994). Computer ethics. Upper Saddle River, NJ: Prentice-Hall, Inc.
Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8, 195–204.
Johnson, D. G., & Miller, K. W. (2006). A dialogue on responsibility, moral agency, and IT systems, Proceedings of the 2006 ACM symposium on Applied computing table of content (pp. 272–276). Dijon, France.
Johnson, D. G., & Powers, T. M. (2005). Computer systems and responsibility: A normative look at technological complexity. Ethics and Information Technology, 7, 99–107.
Larsson, M. (2004). Predicting quality attributes in component-based software systems, PhD Thesis. Sweden: Mälardalen University Press. ISBN: 91-88834-33-6.
Latour, B. (1992). Where are the missing masses? The sociology of a few mundane artefacts. In Wiebe Bijker & John Law (Eds.), Shaping technology/building society: Studies in sociotechnical change (pp. 225–259). Cambridge, MA: MIT Press.
Levy, D. N. L. (2006). Robots unlimited: Life in a virtual age. Natick, Massachusetts: A K Peters, Ltd.
Lin, P., Bekey, G., & Abney, K. (2008). Autonomous military robotics: Risk, ethics, and design. http://ethics.calpoly.edu/ONR_report.pdf.
Lin, P., Bekey, G., & Abney, K. (2009). Robots in war: Issues of risk and ethics. In Rafael Capurro & Michael Nagenborg (Eds.), Ethics and robotics. Heidelberg, Germany: AKA Verlag/IOS Press.
Magnani, L. (2007). Distributed morality and technological artifacts. 4th International Conference on Human being in Contemporary Philosophy, Volgograd. http://volgograd2007.goldenideashome.com/2%20Papers/Magnani%20Lorenzo%20p.pdf.
Marino, D., & Tamburrini, G. (2006). Learning robots and human responsibility. International Review of Information Ethics (IRIE), 6, 46–51.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175–183.
McKenna, M. (2009). Compatibilism. In: Edward N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2009 Edition). http://plato.stanford.edu/archives/win2009/entries/compatibilism/.
Miller, K. W. (2011). Moral responsibility for computing artifacts: The rules. IT Professional, 13(3), 57–59.
Minsky, M. (2006). The emotion machine: Commonsense thinking, artificial intelligence, and the future of the human mind. NY: Simon & Schuster, Inc.
Mitcham, C. (1995). Computers, information and ethics: A review of issues and literature. Science and Engineering Ethics, 1(2), 113–132.
Montague, P. (1998). The precautionary principle, Rachel’s environment and health weekly, No. 586. http://www.biotech-info.net/rachels_586.html.
Moor, J. (1985). What is computer ethics? Metaphilosophy, 16(4), 266–275.
Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21.
Moravec, H. (1999). Robot: Mere machine to transcendent mind. Oxford, New York: Oxford University Press.
Nagenborg, M. (2007). Artificial moral agents: An intercultural perspective. International Review of Information Ethics, 7(09), 129–133.
Nissenbaum, H. (1994). Computing and accountability. Communications of the ACM, 37(1), 73–80.
Nobre, F. S., Tobias, A. M., & Walker, D. S. (2009). Organizational and technological implications of cognitive machines: Designing future information management systems. IGI Global. 1–338. doi:10.4018/978-1-60566-302-9.
Nof, S. Y. (Ed.). (1999). Handbook of industrial robotics (2nd ed.). Hoboken, New Jersey: Wiley.
Nuseibeh, B., & Easterbrook, S. (2000). Requirements engineering: A roadmap. Proceedings of International Conference on Software Engineering (ICSE-2000) (pp. 4–11). ACM Press: Limerick, Ireland.
Pimple, K. D. (2011). Surrounded by machines. Communications of the ACM, 54(3), 29–31.
Russell, S., & Norvig, P. (2003). Artificial intelligence—A modern approach. Upper Saddle River, NJ: Pearson Education.
Scheutz, M. (2002). Computationalism: New directions. Cambridge, MA: MIT Press.
Shrader-Frechette, K. (2003). Technology and ethics. In R. C. Scharff & V. Dusek (Eds.), Philosophy of technology—The technological condition (pp. 187–190). Padstow, United Kingdom: Blackwell Publishing.
Silver, D. A. (2005). Strawsonian defense of corporate moral responsibility. American Philosophical Quarterly, 42, 279–295.
Siponen, M. (2004). A pragmatic evaluation of the theory of information ethics. Ethics and Information Technology, 6(4), 279–290.
Som, C., Hilty, L. M., & Ruddy, T. F. (2004). The precautionary principle in the information society. Human and Ecological Risk Assessment, 10(5), 787–799.
Sommerville, I. (2007). Models for responsibility assignment. In G. Dewsbury & J. Dobson (Eds.), Responsibility and dependable systems. Springer. ISBN 1846286255.
Stahl, B. C. (2004). Information, ethics, and computers: The problem of autonomous moral agents. Minds and Machines, 14, 67–83.
Strawson, P. F. (1974). Freedom and resentment. In Freedom and resentment and other essays. London: Methuen.
Sullins, J. P. (2006). When is a robot a moral agent? International Review of Information Ethics, 6(12), 23–30.
Vallverdú, J., & Casacuberta, D. (2009). Handbook of research on synthetic emotions and sociable robotics: New applications in affective computing and artificial intelligence. IGI Global. doi:10.4018/978-1-60566-354-8.
Van de Poel, I. R., & Verbeek, P. P. (Eds.) (2006). Special issue on ethics and engineering design. Science, Technology and Human Values 31(3), 223–380.
Verbeek, P.-P. (2008). Morality in design: Design ethics and the morality of technological artifacts. In P. E. Vermaas, P. A. Kroes, A. Light, & S. Moore (Eds.), Philosophy and design (pp. 91–103). Berlin, Germany: Springer.
Veruggio, G. (2006). The EURON Roboethics Roadmap, Humanoids’06, December 6, 2006, Genoa, Italy.
Veruggio, G., & Operto, F. (2008). Roboethics. In Springer handbook of robotics (Chapter 64). Berlin, Heidelberg: Springer.
Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.
Warwick, K. (2009). Today it’s a cute friend. Tomorrow it could be the dominant life form. Times of London 2009-02-25. http://www.timesonline.co.uk/tol/comment/columnists/guest_contributors/article5798625.ece.
Acknowledgments
The authors want to thank Mark Coeckelbergh for the enlightening discussion of several ideas central to this paper—distributed responsibility, artifactual morality and the role of emotional intelligence. We also gratefully acknowledge Keith Miller’s insightful response to an earlier version of this article. Last, but not least, we greatly appreciate the three anonymous reviewers’ valuable comments.
Cite this article
Dodig Crnkovic, G., Çürüklü, B. Robots: ethical by design. Ethics Inf Technol 14, 61–71 (2012). https://doi.org/10.1007/s10676-011-9278-2