Abstract
Empirical research on human–robot interaction (HRI) has demonstrated how humans tend to react to social robots with empathic responses and moral behavior. How should we ethically evaluate such responses to robots? Are people wrong to treat non-sentient artefacts as moral patients, since this rests on anthropomorphism and ‘over-identification’ (Bryson and Kime 2011)—or correct, since spontaneous moral intuition and behavior toward nonhumans is indicative of moral patienthood, such that social robots become our ‘Others’ (Gunkel 2018; Coeckelbergh 2018)? In this paper, I weave extant HRI studies that demonstrate empathic responses toward robots together with the recent debate on moral status for robots, on which the ethical evaluation of moral behavior toward them depends. Patienthood for robots has standardly been thought to obtain on some intrinsic ground, such as being sentient, being conscious, or having interests. But since these attempts neglect moral experience and are curbed by epistemic difficulties, I take inspiration from Coeckelbergh and Gunkel’s ‘relational approach’ to explore an alternative way of accounting for robot patienthood based on extrinsic premises. Based on the ethics of the Danish theologian K. E. Løgstrup (1905–1981), I argue that empathic responses can be interpreted as sovereign expressions of life and that these expressions benefit human subjects—even if they emerge from social interaction afforded by robots we have anthropomorphized. I ultimately develop an argument in defense of treating robots as moral patients.
Notes
What is at issue here is thus a separate question from moral agency for robots, even if some researchers treat the two together (Rodogno 2016), consider them subsets of “full moral status” (Gamez et al. 2020), or find that “moral rights” should be granted to robots once they are competent moral agents (Gordon 2020).
And it is likely that adopting what Dennett has called the intentional stance toward robots contributes to this process (Perez-Osorio and Wykowska 2020).
In the light of these findings, it is interesting to note how the violent end of the hitchBOT project attracted so many empathic responses from people who were never even in contact with the robot (VanderMaas 2015). The resulting #RIPHitchBot and #Vengebot outcries on social media when hitchBOT was eventually found in a ditch, dismembered and decapitated, show remarkable empathy—the outcries even match the YouTube laments over the ‘torture’ of Boston Dynamics’ canine-inspired robot ‘Spot’ (Coeckelbergh 2018).
Another possibility not entertained in the empirical literature is that what people interact with when perceiving another mind in the robot is something like a sum total of the minds of the designers and programmers who made the artefact. This is similar to how one can feel connected to and in dialogue with an artist by engaging with their work, whether it is a painting, a novel, a theatre play, or a piece of music. Perhaps robots even derive moral status this way. To limit an already broad scope, I will leave it for future research to develop this idea.
I take any physical robotic artifact with a social interface, autonomous movement/behavior, and capacities to recognize and interact with other entities to be a social robot. This is what I have in mind in the following when I simply write ‘robot’, unless explicitly stated otherwise. A minimal definition is sufficient for my purposes here, as I am not interested in the robot per se, but in human responses.
We should also be mindful of the cultural underpinnings of anthropomorphism; culture plays a role in determining which physical traits are associated with mind and agency. The cultural dimension has been attested at least since Xenophanes, who remarked ironically that Greek gods were pale and blue-eyed while African deities were black-skinned and snub-nosed, and that if horses and lions had gods and the ability to paint them, those gods would probably look strikingly like horses and lions.
Even if you could argue that a robot’s agency is just an extension of its makers’.
I do not employ a strict distinction between ethics and morality, but tend to use ethics as a meta-discourse on morality: the philosophical and analytical treatment of the norms and manners of human social behavior. Behavior can thus be moral, while deliberating about morality is an ethical enquiry.
Giving a phenomenological account of empathy is obviously a very different undertaking than measuring its neurological substrates, as some of these studies do. In this sense, taking up Løgstrup comes from a completely different interest, even if he did recognize that empathy was underpinned by “biological processes that cause ripples in our minds” (2015, 201, own translation). But I take it that keeping phenomenological definitions as descriptively close to the empirical observations as possible renders the analysis more plausible.
Pursuing goals, exercising freedom, maintaining meaningful social relations, achieving pleasure, avoiding pain, and so forth are often counted among interests (for humans at least). But which interests are more significant, and which ones AIs and robots can be said to have, are difficulties still debated.
A similar argument is put forward by Gordon (2020), who essentially argues that autonomous deliberation and decision-making behavior warrant moral status. Behavior, in this case autonomous decision-making, not properties, is sufficient for being a “full ethical agent”. And since the AI of robots is already making autonomous decisions, we can soon rightly consider them subjects of morality. Conferring moral patiency is then just around the corner, and Gordon provides a four-part cumulative argument in favor of that.
I acknowledge that there are more or perhaps better arguments (relative to one’s worries and aims), such as the charge of anthropocentric bias, but I cannot consider them all here. Other arguments are explored in, e.g., Gunkel (2018), Coeckelbergh (2018), and Danaher (2019). Another reason one could take issue with present property approaches is the implicit substance metaphysics they often build on. Conceiving of individual subjects as constituted by processes rather than substances (e.g. Eck and Levine 2017) allows for the emergence of properties. Properties (e.g. those we base moral status on) would not be fixed to certain biological entities made of the right substance, but would not discriminate between substrates. At any rate, exploring this is beyond the present scope.
The ‘Two Accounts’ are mentioned in The Ethical Demand (2010) [1956] and later developed in Etiske begreber og problemer (2014) [1971]: “But there are two accounts to keep and to distinguish from each other. The account of our given life and the account of our ego” (2014, own translation). Note that the Danish ‘konto’, translated here as ‘account’, does not mean ‘explanation’, but rather ‘a record’, as in a bank account.
Cf. Luther’s De servo arbitrio.
Or, if you have the same religious background as Løgstrup, you have the Christian God. But since God in this tradition demands that you always love your neighbor, the ethical import is the same as for someone who does not believe in a creator god.
For precisely this reason, Løgstrup was very skeptical of virtue cultivation, as I shall return to below.
Niekerk has brought out and analyzed the idea of the ‘realization of self’ that, according to Løgstrup’s Controverting Kierkegaard, the sovereign expressions of life bring about (Niekerk 2017). The charge Løgstrup levels against Kierkegaard is that becoming a self is not a task for our reflection (Løgstrup 2013). Consummating the sovereign expressions accomplishes this, as they not only lift me for a moment out of my self-encircling thoughts and feelings; I become a self as I surrender to them.
“Even our very identity rests on them [the sovereign expressions of life]” (Løgstrup 2015, 112, own translation).
“We are captives within ourselves. We can only be set free by fellow man” (journal entry by Løgstrup quoted in Rabjerg 2017).
I suspect one could argue from a Løgstrupian perspective that sovereign expressions require third-person benefits, that they only come as complete packages; that having ‘half an expression’ amounts to having nothing.
Nowhere is this point illustrated as well as in the discussion of sex robots. It is often argued that erotic partners designed to always accommodate and mirror the user’s every fantasy offer little more than self-gratification. If ‘no one’s home’ and we simply stare into our own reflection, will very agreeable social robots after all contribute to our incurvature rather than displacing it and opening us up toward the world? If we really just respond to an echo of our own reflection when interacting with robots, such activity amounts to nothing more than, in Løgstrup’s terminology, self-encircling feelings and motions rather than sovereign expressions.
References
Airenti G (2015) The cognitive bases of anthropomorphism: from relatedness to empathy. Int J Soc Robot 7(1):117–127. https://doi.org/10.1007/s12369-014-0263-x
Basl J (2019) The death of the ethic of life. Oxford University Press, Oxford
Basl J, Bowen J (2020) AI as a moral right-holder. In: Dubber MD, Pasquale F, Das S (eds) The Oxford handbook of ethics of AI. Oxford University Press, pp 288–306
Bryson JJ, Kime PP (2011) Just an artifact: why machines are perceived as moral agents. In: Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, Barcelona, Catalonia, Spain, pp 1641–1646. https://doi.org/10.5591/978-1-57735-516-8/IJCAI11-276
Cappuccio ML, Peeters A, McDonald W (2019) Sympathy for Dolores: moral consideration for robots based on virtue and recognition. Philos Technol 33(1):9–31. https://doi.org/10.1007/s13347-019-0341-y
Chappell T (2011) On the very idea of criteria for personhood: criteria for personhood. South J Philos 49(1):1–27. https://doi.org/10.1111/j.2041-6962.2010.00042.x
Clark C (1987) Sympathy biography and sympathy margin. Am J Sociol 93(2):290–321
Coeckelbergh M (2014) The moral standing of machines: towards a relational and non-Cartesian moral hermeneutics. Philos Technol. https://doi.org/10.1007/s13347-013-0133-8
Coeckelbergh M (2018) Why care about robots? Empathy, moral standing, and the language of suffering. Kairos J Philos Sci 20(1):141–158. https://doi.org/10.2478/kjps-2018-0007
Coeckelbergh M (2020) How to use virtue ethics for thinking about the moral standing of social robots: a relational interpretation in terms of practices, habits, and performance. Int J Soc Robot. https://doi.org/10.1007/s12369-020-00707-z
Coeckelbergh M, Gunkel D (2014) Facing animals: a relational, other-oriented approach to moral standing. J Agric Environ Ethics 27:715–733. https://doi.org/10.1007/s10806-013-9486-3
Crowell CR, Deska JC, Villano M, Zenk J, Roddy JT Jr (2019) Anthropomorphism of robots: study of appearance and agency. JMIR Hum Factors 6:2. https://doi.org/10.2196/12629
Damiano L, Dumouchel P (2018) Anthropomorphism in human–robot co-evolution. Front Psychol 9:468. https://doi.org/10.3389/fpsyg.2018.00468
Danaher J (2019) Welcoming robots into the moral circle: a defence of ethical behaviourism. Sci Eng Ethics. https://doi.org/10.1007/s11948-019-00119-x
Darling K (2016) Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In: Calo R, Froomkin A, Kerr I (eds) Robot Law. Edward Elgar Publishing, pp 213–232
Davis M (1983) Measuring individual differences in empathy: evidence for a multidimensional approach. J Pers Soc Psychol 44(1):113–126
Dennett D (1998) Brainstorms: philosophical essays on mind and psychology. MIT Press, Cambridge
Donath J (2020) Ethical issues in our relationship with artificial entities. In: Dubber MD, Pasquale F, Das S (eds) The Oxford handbook of ethics of AI. Oxford University Press, pp 51–73
Eberl JT (2017) The ontological and moral significance of persons. Sci Fides 5(2):217. https://doi.org/10.12775/SetF.2017.016
Eck D, Levine A (2017) Prioritizing otherness: the line between vacuous individuality and hollow collectivism. In: Hakli R, Seibt J (eds) Sociality and normativity for robots. Springer International Publishing, Cham, pp 67–87
Epley N, Waytz A, Cacioppo J (2007) On seeing human: a three-factor theory of anthropomorphism. Psychol Rev 114:864–886. https://doi.org/10.1037/0033-295X.114.4.864
Fink H (2017) What is ethically demanded?: K. E. Løgstrup’s philosophy of moral life. University of Notre Dame Press, Notre Dame
Gamez P, Shank DB, Arnold C, North M (2020) Artificial virtue: the machine question and perceptions of moral character in artificial moral agents. AI Soc 35(4):795–809. https://doi.org/10.1007/s00146-020-00977-1
Gazzola V, Rizzolatti G, Wicker B, Keysers C (2007) The anthropomorphic brain: the mirror neuron system responds to human and robotic actions. Neuroimage 35(4):1674–1684. https://doi.org/10.1016/j.neuroimage.2007.02.003
Gellers J (2020) Rights for robots: artificial intelligence, animal and environmental law, 1st edn. Routledge, Milton Park, Abingdon, Oxon, New York
Gordon J-S (2020) What do we owe to intelligent robots? AI Soc 35(1):209–223. https://doi.org/10.1007/s00146-018-0844-6
de Graaf MMA, Allouch SB (2016) Anticipating our future robot society: the evaluation of future robot applications from a user’s perspective. In: 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp 755–762
Gunkel D (2014) A vindication of the rights of machines. Philos Technol 27(1):113–132. https://doi.org/10.1007/s13347-013-0121-z
Gunkel D (2017) The other question: can and should robots have rights? Ethics Inf Technol 20(2):87–99. https://doi.org/10.1007/s10676-017-9442-4
Gunkel D (2018) Robot rights. MIT Press, London
Gunkel DJ, Bryson J (2014) Introduction to the special issue on machine morality: the machine as moral agent and patient. Philos Technol 27(1):5–8. https://doi.org/10.1007/s13347-014-0151-1
Kertész C, Turunen M (2017) What can we learn from the long-term users of a social robot? Lect Notes Comput Sci 10652:657–665. https://doi.org/10.1007/978-3-319-70022-9_65
Krach S, Hegel F, Wrede B, Sagerer G, Binkofski F, Kircher T (2008) Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS ONE 3(7):e2597. https://doi.org/10.1371/journal.pone.0002597
Krämer NC, Rosenthal-von der Pütten AM, Hoffmann L (2015) Social effects of virtual and robot companions. In: Sundar SS (ed) The handbook of the psychology of communication technology. John Wiley & Sons, Ltd, pp 137–159
Leite I, Martinho C, Paiva A (2013) Social robots for long-term interaction: a survey. Int J Soc Robot 5(2):291–308. https://doi.org/10.1007/s12369-013-0178-y
Löffler D, Hurtienne J, Nord I (2019) Blessing Robot BlessU2: a discursive design study to understand the implications of social robots in religious contexts. Int J Soc Robot. https://doi.org/10.1007/s12369-019-00558-3
Løgstrup KE (1966) Sartres og Kierkegaards skildring af den dæmoniske indesluttethed. Vindrosen 13:28–42
Løgstrup KE (2010) Den etiske fordring. Klim, Aarhus
Løgstrup KE (2013) Opgør med Kierkegaard, 4th edn. Klim, Aarhus
Løgstrup KE (2014) Etiske begreber og problemer. Klim, Aarhus
Løgstrup KE (2015) Skabelse og Tilintetgørelse. Metafysik IV: Religionsfilosofiske betragtninger, 4th edn. Forlaget Klim, Aarhus
Maibom HL (2014) Introduction: (almost) everything you ever wanted to know about empathy. In: Maibom HL (ed) Empathy and morality. Oxford University Press, pp 1–40
McCurry J (2018) Japan: robot dogs get solemn Buddhist send-off at funerals. The Guardian. http://www.theguardian.com/world/2018/may/03/japan-robot-dogs-get-solemn-buddhist-send-off-at-funerals. Accessed 2 Sep 2020
Misselhorn C (2009) Empathy with inanimate objects and the uncanny valley. Minds Mach 19(3):345–359. https://doi.org/10.1007/s11023-009-9158-2
Neely E (2014) Machines and the moral community. Philos Technol 27(1):97–111. https://doi.org/10.1007/s13347-013-0114-y
Niekerk K (2017) Løgstrup’s conception of the sovereign expressions of life. In: Fink H, Stern R (eds) What is ethically demanded?: K. E. Løgstrup’s philosophy of moral life. University of Notre Dame Press, pp 186–215
Nyholm S (2020) Humans and robots: ethics, agency, and anthropomorphism. Rowman and Littlefield International, London, New York
Perez-Osorio J, Wykowska A (2020) Adopting the intentional stance toward natural and artificial agents. Philos Psychol 33(3):369–395. https://doi.org/10.1080/09515089.2019.1688778
Rabjerg B (2017) Løgstrup’s ontological ethics. An analysis of human interdependent existence. Res Cogitans 12(1):93–110
Rabjerg B, Stern R (2018) Freedom from the Self: Luther and Løgstrup on Sin as “Incurvatus in Se.” Open Theol 4(1):268–280. https://doi.org/10.1515/opth-2018-0020
Redstone J (2016) Making sense of empathy with sociable robots: a new look at the “imaginative perception of emotion.” In: Nørskov M (ed) Social robots: boundaries, potential, challenges. Routledge, London, pp 19–39
Riek LD, Rabinowitch T-C, Chakrabarti B, Robinson P (2009) How anthropomorphism affects empathy toward robots. In: Proceedings of the 4th ACM/IEEE international conference on Human robot interaction. Association for Computing Machinery, La Jolla, California, USA, pp 245–246
Rodogno R (2016) Robots and the limits of morality. In: Nørskov M (ed) Social robots: boundaries, potential, challenges, 1st edn. Ashgate, Farnham, Surrey, UK, Burlington
Rosenthal-von der Pütten AM, Krämer NC (2015) Individuals’ evaluations of and attitudes towards potentially uncanny robots. Int J Soc Robot 7(5):799–824. https://doi.org/10.1007/s12369-015-0321-z
Rosenthal-von der Pütten AM, Krämer NC, Hoffmann L, Sobieraj S, Eimler SC (2013) An experimental study on emotional reactions towards a robot. Int J Soc Robot 5(1):17–34. https://doi.org/10.1007/s12369-012-0173-8
Rosenthal-von der Pütten AM, Schulte FP, Eimler SC, Sobieraj S, Hoffmann L, Maderwald S, Brand M, Krämer NC (2014) Investigations on empathy towards humans and robots using fMRI. Comput Hum Behav 33:201–212. https://doi.org/10.1016/j.chb.2014.01.004
Searle JR (1980) Minds, brains, and programs. Behav Brain Sci 3(3):417–424. https://doi.org/10.1017/S0140525X00005756
Seibt J, Rodogno R (2019) Understanding emotions and their significance through social robots, and vice versa. Techne Res Philos Technol 23:257–269. https://doi.org/10.5840/techne2019233104
Singer P (2011) The expanding circle: ethics, evolution, and moral progress, 1st pbk edn. Princeton University Press, Princeton
Smedegaard CV (2019) Reframing the role of novelty within social HRI: from noise to information. In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp 411–420
Sparrow R (2020) Virtue and vice in our relationships with robots: is there an asymmetry and how might it be explained? Int J Soc Robot. https://doi.org/10.1007/s12369-020-00631-2
Stern R (2019) The radical demand in Løgstrup’s ethics. Oxford University Press, New York
Stokes P (2016) The problem of spontaneous goodness: from Kierkegaard to Løgstrup (via Zhuangzi and Eckhart). Cont Philos Rev 49(2):139–159. https://doi.org/10.1007/s11007-016-9377-1
Suzuki Y, Galli L, Ikeda A, Itakura S, Kitazaki M (2015) Measuring empathy for human and robot hand pain using electroencephalography. Sci Rep 5(1):1–9. https://doi.org/10.1038/srep15924
Thornton S (2020) Ontology and ethics: Løgstrup between Heidegger and Levinas. Monist 103(1):117–134. https://doi.org/10.1093/monist/onz030
Tisseron S, Tordo F, Baddoura R (2015) Testing empathy with robots: a model in four dimensions and sixteen items. Int J Soc Robot 7(1):97–102. https://doi.org/10.1007/s12369-014-0268-5
Torrance S (2008) Ethics and consciousness in artificial agents. AI Soc 22:495–521. https://doi.org/10.1007/s00146-007-0091-8
Torrance S (2014) Artificial consciousness and artificial ethics: between realism and social relationism. Philos Technol 27(1):9–29. https://doi.org/10.1007/s13347-013-0136-5
Turing AM (1950) Computing machinery and intelligence. Mind LIX(236):433–460. https://doi.org/10.1093/mind/LIX.236.433
Turkle S (2011) Alone together: why we expect more from technology and less from each other. Basic Books, New York
Ugazio G, Majdandžić J, Lamm C (2014) Are empathy and morality linked? In: Maibom HL (ed) Empathy and morality. Oxford University Press, pp 155–171
Vallor S (2015) Moral deskilling and upskilling in a new machine age: reflections on the ambiguous future of character. Philos Technol 28(1):107–124. https://doi.org/10.1007/s13347-014-0156-9
Vallor S (2016) Technology and the virtues: a philosophical guide to a future worth wanting. Oxford University Press, Oxford
van Camp J (2019) My Jibo is dying and it’s breaking my heart. WIRED. https://www.wired.com/story/jibo-is-dying-eulogy/. Accessed 2 Sep 2020
VanderMaas J (2015) hitchBOT USA tour comes to an early end in Philadelphia. http://cdn1.hitchbot.me/wp-content/uploads/2015/08/hitchBOT-USA-Trip-End-Press-Release-FINAL.pdf. Accessed 1 Sep 2020
Verbeek P-P (2017) Designing the morality of things: the ethics of behaviour-guiding technology. In: van den Hoven J, Miller S, Pogge T (eds) Designing in Ethics, 1st edn. Cambridge University Press, pp 78–94
Wang Y, Quadflieg S (2015) In our own image? Emotional and neural processing differences when observing human–human vs human–robot interactions. Soc Cogn Affect Neurosci 10(11):1515–1524. https://doi.org/10.1093/scan/nsv043
Wolf J (2017) Phenomenology in Løgstrup’s creation theology. In: Gregersen NH, Uggla BK, Wyller T (eds) Reformation theology for a post-secular age: Løgstrup, Prenter, Wingren, and the future of Scandinavian creation theology. Vandenhoeck & Ruprecht, Göttingen
Young JE, Sung J, Voida A, Sharlin E, Igarashi T, Christensen HI, Grinter RE (2011) Evaluating human-robot interaction. Int J Soc Robot 3(1):53–67. https://doi.org/10.1007/s12369-010-0081-8
Acknowledgements
The author would like to thank Ulrik Nissen, Raffaele Rodogno and Jakob Donskov and the blind peer reviewers for helpful suggestions on earlier versions of the manuscript.
Funding
Not applicable.
Ethics declarations
Conflict of interest
The author declares no known conflicts of interest.
Cite this article
Balle, S.N. Empathic responses and moral status for social robots: an argument in favor of robot patienthood based on K. E. Løgstrup. AI & Soc 37, 535–548 (2022). https://doi.org/10.1007/s00146-021-01211-2