
Empathic responses and moral status for social robots: an argument in favor of robot patienthood based on K. E. Løgstrup

  • Original Article
  • Published in: AI & SOCIETY

Abstract

Empirical research on human–robot interaction (HRI) has demonstrated how humans tend to react to social robots with empathic responses and moral behavior. How should we ethically evaluate such responses to robots? Are people wrong to treat non-sentient artefacts as moral patients, since this rests on anthropomorphism and ‘over-identification’ (Bryson and Kime, Proc Twenty-Second Int Jt Conf Artif Intell Barc Catalonia Spain 16–22:1641–1646, 2011)—or correct, since spontaneous moral intuition and behavior toward nonhumans is indicative of moral patienthood, such that social robots become our ‘Others’ (Gunkel, Robot rights, MIT Press, London, 2018; Coeckelbergh, Kairos J Philos Sci 20:141–158, 2018)? In this research paper, I weave extant HRI studies that demonstrate empathic responses toward robots into the recent debate on moral status for robots, on which the ethical evaluation of moral behavior toward them depends. Patienthood for robots has standardly been thought to obtain on some intrinsic ground, such as being sentient, conscious, or having interests. But since these attempts neglect moral experience and are curbed by epistemic difficulties, I take inspiration from Coeckelbergh and Gunkel’s ‘relational approach’ to explore an alternative way of accounting for robot patienthood based on extrinsic premises. Based on the ethics of the Danish theologian K. E. Løgstrup (1905–1981), I argue that empathic responses can be interpreted as sovereign expressions of life and that these expressions benefit human subjects—even if they emerge from social interaction afforded by robots we have anthropomorphized. I ultimately develop an argument in defense of treating robots as moral patients.


Notes

  1. What is at issue here is thus a separate question from moral agency for robots, even if some researchers treat them together (Rodogno 2016), consider them as subsets for “full moral status” (Gamez et al. 2020), or find that “moral rights” should be granted to robots once they are competent moral agents (Gordon 2020).

  2. Several commentators have regarded Løgstrup as a natural comparator to Levinas (e.g. Thornton 2020), and Gunkel (2017) similarly suggests exposing his Levinasian position to that of Løgstrup.

  3. And it is likely that adopting what Dennett has called the intentional stance toward robots contributes to this process (Perez-Osorio and Wykowska 2020).

  4. In the light of these findings, it is interesting to note how the violent end of the hitchBOT project attracted such empathic responses from people who were never even in contact with the robot (VanderMaas 2015). The resulting #RIPHitchBot and #Vengebot outcries on social media when hitchBOT was eventually found in a ditch, dismembered and decapitated, show remarkable empathy—the outcries even match the YouTube laments over the ‘torture’ of the Boston Dynamics canine-inspired robot ‘Spot’ (Coeckelbergh 2018).

  5. Another possibility not entertained in the empirical literature is that what people interact with when perceiving another mind in the robot is something like a sum total of the minds of the designers and programmers who made the artefact—similar to how one can feel connected to and in dialogue with an artist by engaging with their work, whether it is a painting, a novel, a theatre play, or a piece of music. Perhaps robots even derive moral status this way. To limit an already broad scope, I will leave it for future research to develop this idea.

  6. I take any physical robotic artifact with a social interface, autonomous movement/behavior, and capacities to recognize and interact with other entities to be a social robot. This is what I have in mind when, in the following, I simply write ‘robot’, unless explicitly stated otherwise. A minimal definition is sufficient for my purposes here, as I’m not interested in the robot per se, but in human responses.

  7. We should also be mindful of the cultural underpinnings of anthropomorphism; culture plays a role in determining which physical traits are associated with mind and agency. The cultural dimension has been attested at least since Xenophanes, who ironized that Greek gods were pale and blue-eyed while African deities were black-skinned and snub-nosed, and also remarked that if horses and lions had gods and the ability to paint them, they would probably look strikingly like horses and lions.

  8. Even if you could argue that a robot’s agency is just an extension of its makers’.

  9. Sometimes theorists divide empathy into an affective and a cognitive variant (Davis 1983; Maibom 2014). But the latter is defined very closely to ‘theory of mind’, and I shall prefer this term when considering cognitive aspects of empathy (Tisseron et al. 2015; Redstone 2016).

  10. I do not employ a strict distinction between ethics and morality, but tend to use ethics as a meta-discourse on morality: the philosophical and analytical treatment of the norms and manners of human social behavior. Behavior can thus be moral, while deliberating about morality is an ethical enquiry.

  11. Giving a phenomenological account of empathy is obviously a very different undertaking than measuring its neurological substrates, as some of these studies do. In this sense, taking up Løgstrup comes from a completely different interest, even if he did recognize that empathy was underpinned by “biological processes that cause ripples in our minds” (2015, 201, own translation). But I take it that keeping phenomenological definitions as descriptively close to the empirical observations as possible renders the analysis more probable.

  12. Pursuing goals, exercising freedom, maintaining meaningful social relations, achieving pleasure, avoiding pain, and so forth are often counted among interests (for humans at least). But which interests are more significant, and which ones AIs and robots can be said to have, are difficulties still debated.

  13. A similar argument is put forward by Gordon (2020), who essentially argues that autonomous deliberation and decision-making behavior warrant moral status. Behavior, in this case autonomous decision-making, not properties, is sufficient for being “full ethical agents”. And since the AI of robots is already making autonomous decisions, we can soon rightly consider them subjects of morality. Conferring moral patiency is then just around the corner, and Gordon provides a four-part cumulative argument in favor of that.

  14. Though they are here focused on animal Others, they both apply the same idea for robotic Others (Coeckelbergh 2014, 2018; Gunkel 2017, 2018).

  15. I acknowledge there are more or perhaps better arguments (relative to one’s worries and aims), such as the charge of anthropocentric bias, but I cannot consider them all here. Other arguments are explored in e.g. (Gunkel 2018; Coeckelbergh 2018; Danaher 2019). Another reason one could take issue with present property approaches is the implicit substance metaphysics they often build on. Conceiving of individual subjects as constituted by processes rather than substances (e.g. Eck and Levine 2017) allows for the emergence of properties. Properties (e.g. those we base moral status on) would not be fixed to certain biological entities made of the right substance, but would be substrate-indiscriminate. At any rate, exploring this is beyond the present scope.

  16. For a fuller exposition of Løgstrup’s ethical thinking in English, see (Fink 2017; Rabjerg 2017; Wolf 2017; Niekerk 2017; Stern 2019).

  17. The ‘Two Accounts’ is mentioned in The Ethical Demand (2010 [1956]) and later developed in Etiske begreber og problemer (2014 [1971]). “But there are two accounts to keep and to distinguish from each other. The account of our given life and the account of our ego” (2014, own translation). Note that the Danish ‘konto’, translated as ‘account’, does not mean ‘explanation’, but rather ‘a record’, as in a bank account.

  18. Cf. Luther’s de servo arbitrio.

  19. Or, if you have the same religious background as Løgstrup, you have the Christian God. But since God in this tradition demands that you always love your neighbor, the ethical import is the same as not believing in a creator god.

  20. For precisely this reason, Løgstrup was very skeptical of virtue cultivation, as I shall return to below.

  21. Niekerk has brought out and analyzed the idea of the ‘realization of self’ that, according to Løgstrup’s Controverting Kierkegaard, the sovereign expressions of life bring about (Niekerk 2017). Becoming a self is not a task for our reflection, is the charge Løgstrup levels against Kierkegaard (Løgstrup 2013). Consummating the sovereign expressions accomplishes this, as they not only lift me for a moment out of my self-encircling thoughts and feelings; I’m becoming a self as I surrender to them.

  22. “Even our very identity rests on them [the sovereign expressions of life]” (Løgstrup 2015, 112, own translation).

  23. “We are captives within ourselves. We can only be set free by fellow man” (journal entry by Løgstrup quoted in Rabjerg 2017).

  24. The notion of ‘imaginative perception’, suggested by Misselhorn (2009) and developed by Redstone (2016), proposes to make sense of empathy with sociable robots. The central idea is that empathy toward robots is triggered as humans imaginatively perceive emotions in them.

  25. I suspect one could argue from a Løgstrupian perspective that sovereign expressions require third-person benefits, that they only come as complete packages; that having ‘half an expression’ amounts to having nothing.

  26. Nowhere is this point illustrated as well as in the discussion on sex robots. It is often argued that erotic partners designed to always accommodate and mirror the user’s every fantasy are little more than self-gratification. If ‘no one’s home’ and we simply stare into our own reflection, will very agreeable social robots after all contribute to our incurvature rather than displacing it and opening us up toward the world? If we really just respond to an echo of our own reflection when interacting with robots, such activity amounts to nothing more than, in Løgstrup’s terminology, self-encircling feelings and motions rather than sovereign expressions.


Acknowledgements

The author would like to thank Ulrik Nissen, Raffaele Rodogno and Jakob Donskov and the blind peer reviewers for helpful suggestions on earlier versions of the manuscript.

Funding

Not applicable.

Author information

Correspondence to Simon N. Balle.

Ethics declarations

Conflict of interest

The author declares no known conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Balle, S.N. Empathic responses and moral status for social robots: an argument in favor of robot patienthood based on K. E. Løgstrup. AI & Soc 37, 535–548 (2022). https://doi.org/10.1007/s00146-021-01211-2

