Mind the gap: responsible robotics and the problem of responsibility

  • Original Paper
  • Published in Ethics and Information Technology

Abstract

The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. (1) It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (2) It then considers three instances where recent innovations in robotics challenge this standard operating procedure by opening gaps in the usual way of assigning responsibility. The innovations considered in this section include: autonomous technology, machine learning, and social robots. (3) The essay concludes by evaluating the three different responses—instrumentalism 2.0, machine ethics, and hybrid responsibility—that have been made in the face of these difficulties in an effort to map out the opportunities and challenges of and for responsible robotics.

Notes

  1. This effort is informed by and consistent with the overall purpose and aim of philosophy, strictly speaking. Philosophers as different (and, at times, even antagonistic, especially to each other) as Heidegger (1962), Dennett (1996), Moore (2005), and Žižek (2006), have all, at one time or another, described philosophy as a critical endeavor that is more interested in developing questions than in providing definitive answers. “There are,” as Žižek (2006, p. 137) describes it, “not only true or false solutions, there are also false questions. The task of philosophy is not to provide answers or solutions, but to submit to critical analysis the questions themselves, to make us see how the very way we perceive a problem is an obstacle to its solution.” This is the task and objective of the essay—to identify the range of questions regarding responsibility that can and should be asked in the face of recent technological innovation. If, in the end, readers emerge from the experience with more questions—“more” not only in quantity but also (and more importantly) in terms of the quality of inquiry—then it will have been successful and achieved its end.

  2. Because of the recent proliferation of and popularity surrounding connectionist architecture, neural networks, and machine learning, there are numerous examples from which one could select, including natural language generation (NLG) algorithms, black box trading, computational creativity, self-driving vehicles, and autonomous weapons. In fact, one might have expected this essay to have focused on the latter—autonomous weapons—mainly because of the way the responsibility gap, or what has also been called “the accountability gap,” has been positioned, addressed, and documented in the literature on this subject (Arkin 2009; Asaro 2012; Beard 2014; Hammond 2015; Krishnan 2009; Lokhorst and van den Hoven 2012; Schulzke 2013; Sharkey 2012; Sparrow 2007; Sullins 2010). I have, however, made the deliberate decision to employ other, perhaps more mundane, examples like AlphaGo and Tay.ai. And I have done so for two reasons. First, questions concerning machine autonomy and responsibility, although important for and well-documented in the literature concerning autonomous weapons, are not (and should not be) limited to weapon systems. Recognizing this fact requires that we explicitly identify and consider other domains where these questions appear and are relevant—domains where the issues might be less dramatic but no less significant. Second, and more importantly, I wanted to deal with technologies that are actually in operation and not under development. Despite their popularity in investigations of machine agency and responsibility, autonomous weapons are still somewhat speculative and in development. Rather than address what might happen with technologies that could be developed and deployed, I wanted to address what has happened with technologies that are already here and in operation.

  3. Just to be clear, the problem with social robots is not that they are or might be capable of becoming moral subjects. The problem is that they are neither instruments nor moral subjects. They occupy an in-between position that effectively blurs the boundary that had typically separated the one from the other. The problem, then, is not that social robots might achieve moral status equal to or on par with human beings. That remains a topic of and for science fiction. The problem is that social robots complicate the way one decides who has moral status and what does not, which is a more difficult/interesting philosophical question. For more on this subject, see Coeckelbergh (2012), Gunkel (2012), and Floridi (2013).

  4. There is some debate concerning this matter. What Coeckelbergh (2010, p. 236) calls “psychopathy”— e.g. “follow rules but act without fear, compassion, care, and love”—Arkin (2009) celebrates as a considerable improvement in moral processing and decision making. Here is how Sharkey (2012, p. 121) characterizes Arkin’s efforts to develop an “artificial conscience” for robotic soldiers: “It turns out that the plan for this conscience is to create a mathematical decision space consisting of constraints, represented as prohibitions and obligations derived from the laws of war and rules of engagement (Arkin 2009). Essentially this consists of a bunch of complex conditionals (if-then statements)….Arkin believes that a robot could be more ethical than a human because its ethics are strictly programmed into it, and it has no emotional involvement with the action.” For more on this debate and the effect it has on moral consideration, see Gunkel (2012).

References

  • Anderson, M., & Anderson, S. L. (2007). The status of machine ethics: A report from the AAAI symposium. Minds & Machines, 17(1), 1–10.

  • Anderson, M., & Anderson, S. L. (2011). Machine ethics. Cambridge: Cambridge University Press.

  • Arkin, R. C. (2009). Governing lethal behavior in autonomous robots. Boca Raton: CRC Press.

  • Asaro, P. (2012). On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross, 94(886), 687–709.

  • Beard, J. M. (2014). Autonomous weapons and human responsibilities. Georgetown Journal of International Law, 45(1), 617–681.

  • Breazeal, C. L. (2004). Designing sociable robots. Cambridge, MA: MIT Press.

  • Bringsjord, S. (2007). Ethical robots: The future can heed us. AI & Society, 22(4), 539–550.

  • Brooks, R. A. (2002). Flesh and machines: How robots will change us. New York: Pantheon Books.

  • Bryson, J. J. (2010). Robots should be slaves. In Y. Wilks (Ed.), Close engagements with artificial companions: Key social, psychological, ethical and design issues (pp. 63–74). Amsterdam: John Benjamins.

  • Calverley, D. J. (2008). Imagining a non-biological machine as a legal person. AI & Society, 22(4), 523–537.

  • Coeckelbergh, M. (2010). Moral appearances: Emotions, robots, and human morality. Ethics and Information Technology, 12(3), 235–241.

  • Coeckelbergh, M. (2012). Growing moral relations: Critique of moral status ascription. New York: Palgrave Macmillan.

  • Committee on Legal Affairs. Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics. European Parliament, 2016. http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN.

  • Darling, K. (2012). Extending legal protection to social robots. IEEE Spectrum. http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/extending-legal-protection-to-social-robots.

  • Datteri, E. (2013). Predicting the long-term effects of human-robot interaction: A reflection on responsibility in medical robotics. Science and Engineering Ethics, 19(1), 139–160.

  • Dennett, D. C. (1996). Kinds of minds: Toward an understanding of consciousness. New York: Perseus Books.

  • Derrida, J. (2005). Paper machine (trans. by R. Bowlby). Stanford, CA: Stanford University Press.

  • Feenberg, A. (1991). Critical theory of technology. New York: Oxford University Press.

  • Floridi, L. (2013). The ethics of information. Oxford: Oxford University Press.

  • French, P. (1979). The corporation as a moral person. American Philosophical Quarterly, 16(3), 207–215.

  • Garreau, J. (2007). Bots on the Ground: In the Field of Battle (or Even Above it), Robots are a Soldier’s Best Friend. The Washington Post, Retrieved May 6, 2007, from http://www.washingtonpost.com/wp-dyn/content/article/2007/05/05/AR2007050501009.html.

  • Gladden, M. E. (2016). The diffuse intelligent other: An ontology of nonlocalizable robots as moral and legal actors. In M. Nørskov (Ed.), Social robots: Boundaries, potential, challenges (pp. 177–198). Burlington, VT: Ashgate.

  • Go Ratings. (2016). https://www.goratings.org/.

  • Goertzel, B. (2002). Thoughts on AI morality. Dynamical Psychology: An International, Interdisciplinary Journal of Complex Mental Processes, May 2002. http://www.goertzel.org/dynapsyc/2002/AIMorality.htm.

  • Google DeepMind. (2016). AlphaGo. https://deepmind.com/alpha-go.html.

  • Gunkel, D. J. (2007). Thinking otherwise: Ethics, technology and other subjects. Ethics and Information Technology, 9(3), 165–177.

  • Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots and ethics. Cambridge, MA: MIT Press.

  • Hall, J. S. (2001). Ethics for machines. KurzweilAI.net. http://www.kurzweilai.net/ethics-for-machines.

  • Hammond, D. N. (2015). Autonomous weapons and the problem of state accountability. Chicago Journal of International Law, 15(2), 652–687.

  • Hanson, F. A. (2009). Beyond the skin bag: On the moral responsibility of extended agencies. Ethics and Information Technology, 11(1), 91–99.

  • Heidegger, M. (1962). Being and time (trans. by John Macquarrie and Edward Robinson). New York: Harper and Row.

  • Heidegger, M. (1977). The Question concerning technology and other essays (trans. by William Lovitt). New York: Harper and Row.

  • Hemmersbaugh, P. A. NHTSA Letter to Chris Urmson, Director, Self-Driving Car Project, Google, Inc. https://isearch.nhtsa.gov/files/Google - compiled response to 12 Nov 15 interp request - 4 Feb 16 final.htm.

  • Jibo. (2014). https://www.jibo.com.

  • Johnson, D. G. (1985). Computer ethics. Upper Saddle River, NJ: Prentice Hall.

  • Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8(4), 195–204.

  • Johnson, D. G., & Miller, K. W. (2008). Un-making artificial moral agents. Ethics and Information Technology, 10(2–3), 123–133.

  • Kant, I. (1963). Duties to animals and spirits. In Lectures on ethics (trans. by L. Infield) (pp. 239–241). New York: Harper and Row.

  • Keynes, J. M. (2010). Economic possibilities for our grandchildren. In Essays in persuasion (pp. 321–334). New York: Palgrave Macmillan.

  • Krishnan, A. (2009). Killer robots: Legality and ethicality of autonomous weapons. Burlington: Ashgate.

  • Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford: Oxford University Press.

  • Lee, P. Learning from Tay’s introduction. Official Microsoft Blog, 25 March 2016. https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/.

  • Lokhorst, G. J., & van den Hoven, J. (2012). Responsibility for military robots. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robots (pp. 145–155). Cambridge, MA: MIT Press.

  • Lyotard, J. F. (1993). The postmodern condition: A report on knowledge (trans. by Geoff Bennington and Brian Massumi). Minneapolis, MN: University of Minnesota Press.

  • Marx, K. (1977). Capital (trans. by Ben Fowkes). New York: Vintage Books.

  • Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.

  • Metz, C. Google’s AI Wins a Pivotal Second Game in Match with Go Grandmaster. Wired, March 2016. http://www.wired.com/2016/03/googles-ai-wins-pivotal-game-two-match-go-grandmaster/.

  • Microsoft. (2016). Meet Tay—Microsoft AI. Chatbot with Zero Chill. https://www.tay.ai/.

  • Moore, G. E. (2005). Principia ethica. New York: Barnes & Noble Books.

  • Mowshowitz, A. (2008). Technology as excuse for questionable ethics. AI & Society, 22(3), 271–282.

  • Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2(1), 25–42.

  • Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge: Cambridge University Press.

  • Ricoeur, P. (2007). Reflections on the just (trans. by David Pellauer). Chicago: University of Chicago Press.

  • Risely, J. (2016). Microsoft’s Millennial Chatbot Tay.ai Pulled Offline After Internet Teaches Her Racism. GeekWire. http://www.geekwire.com/2016/even-robot-teens-impressionable-microsofts-tay-ai-pulled-internet-teaches-racism/.

  • Rosenthal-von der Pütten, A. M., Krämer, N. C., Hoffmann, L., Sobieraj, S., & Eimler, S. C. (2013). An experimental study on emotional reactions towards a robot. International Journal of Social Robotics, 5(1), 17–34.

  • Ross, P. E. (2016). A google car can qualify as a legal driver. IEEE Spectrum. http://spectrum.ieee.org/cars-that-think/transportation/self-driving/an-ai-can-legally-be-defined-as-a-cars-driver.

  • Schulzke, M. (2013). Autonomous weapons and distributed responsibility. Philosophy & Technology, 26(2), 203–219.

  • Sharkey, N. (2012). Killing made easy: From joysticks to politics. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robots (pp. 111–128). Cambridge, MA: MIT Press.

  • Singer, P. (1975). Animal liberation: A new ethics for our treatment of animals. New York: New York Review Book.

  • Singer, P. W. (2009). Wired for war: The robotics revolution and conflict in the twenty-first century. New York: Penguin Books.

  • Siponen, M. (2004). A pragmatic evaluation of the theory of information ethics. Ethics and Information Technology, 6(4), 279–290.

  • Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.

  • Stahl, B. C. (2006). Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Ethics and Information Technology, 8(4), 205–213.

  • Sullins, J. P. (2006). When is a robot a moral agent? International Review of Information Ethics, 6(12), 23–30.

  • Sullins, J. P. (2010). Robowarfare: Can robots be more ethical than humans on the battlefield? Ethics and Information Technology, 12(3), 263–275.

  • Suzuki, Y., Galli, L., Ikeda, A., Itakura, S., & Kitazaki, M. (2015). Measuring empathy for human and robot hand pain using electroencephalography. Scientific Reports, 5(1), 15924. doi:10.1038/srep15924.

  • Turing, A. (1999). Computing machinery and intelligence. In P. A. Meyer (Ed.), Computer media and communication: A reader (pp. 37–58). Oxford: Oxford University Press.

  • van de Poel, I., Nihlén Fahlquist, J., Doorn, N., Zwart, S., & Royakkers, L. (2012). The problem of many hands: Climate change as an example. Science and Engineering Ethics, 18(1), 49–67.

  • Verbeek, P. P. (2011). Moralizing technology: Understanding and designing the morality of things. Chicago: University of Chicago Press.

  • Wagenaar, W. A., & Groenewegen, J. (1987). Accidents at sea: Multiple causes and impossible consequences. International Journal of Man-Machine Studies, 27, 587–598.

  • Wallach, W. (2015). A dangerous master: How to keep technology from slipping beyond our control. New York: Basic Books.

  • Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.

  • Wiener, N. (1988). The human use of human beings: Cybernetics and society. Boston: Da Capo Press.

  • Winner, L. (1977). Autonomous technology: Technics-out-of-control as a theme in political thought. Cambridge, MA: MIT Press.

  • Winograd, T. (1990). Thinking machines: Can there be? Are we? In D. Partridge & Y. Wilks (Eds.), The foundations of artificial intelligence: A sourcebook (pp. 167–189). Cambridge: Cambridge University Press.

  • Žižek, S. (2006). Philosophy, the “Unknown Knowns,” and the public use of reason. Topoi, 25(1–2), 137–142.

Author information

Corresponding author

Correspondence to David J. Gunkel.

About this article

Cite this article

Gunkel, D.J. Mind the gap: responsible robotics and the problem of responsibility. Ethics Inf Technol 22, 307–320 (2020). https://doi.org/10.1007/s10676-017-9428-2
