Understanding A.I. — Can and Should we Empathize with Robots?

Review of Philosophy and Psychology

Abstract

Expanding the debate about empathy with human beings, animals, or fictional characters to include human-robot relationships, this paper proposes two different perspectives from which to assess the scope and limits of empathy with robots: the first is epistemological, the second normative. The epistemological approach helps us to clarify whether we can empathize with artificial intelligence or, more precisely, with social robots. The main puzzle here concerns, among other things, exactly what it is that we empathize with if robots do not have emotions or beliefs, given that they lack consciousness in any elaborate sense. However, by comparing robots with fictional characters, the paper shows that we can still empathize with robots and that many of the existing accounts of empathy and mindreading are compatible with such a view. In doing so, the paper focuses on the significance of perspective-taking and argues that we also ascribe to robots something like a perspectival experience. The normative approach examines the moral impact of empathizing with robots. In this regard, the paper critically discusses three possible responses: strategic, anti-barbarizational, and pragmatist. The latter position is defended by stressing that we are increasingly compelled to interact with robots in a shared world and that taking robots into our moral consideration should be seen as an integral part of our self- and other-understanding.


Notes

  1. A group at the MIT Media Lab and the IEEE Standards Association argues for the concept of “extended” rather than “artificial” intelligence. With this new narrative of “extended” intelligence, they want to ensure that robots do not substitute for human beings but rather support and cooperate with them. Together they established the Council on Extended Intelligence (CXI); see https://globalcxi.org (last accessed 12 December 2019).

  2. An ERC-funded project located at the University of Glasgow and headed by Emily Cross examines in particular how human beings socialize with artificial intelligence and how interaction and relationships with robots matter for social cognition. One focus lies on the ability of robots to be companions; see http://www.so-bots.com (last accessed 20 December 2019).

  3. Concerning the phenomenon of the “uncanny valley” see below.

  4. At least when we follow an anti-physicalist position.

  5. The paper concentrates mainly on humanoid robots. One reason for this is that it helps to constrain the scope of the paper; another is the assumption that humanlike features indeed facilitate our social interaction with artificial intelligence and make it more plausible that we treat robots as social partners. However, we can also empathize with more abstract forms of A.I. by ascribing emotional states and motives to them (see Isik, Koldewyn, Beeler and Kanwisher 2017). I am very grateful to one reviewer for this remark.

  6. It is highly controversial whether empathy presupposes or implies affective mirroring, theoretical mindreading, simulative perspective-taking, emotional understanding, and/or experiential comprehension, and there is currently no end to this debate in sight (see e.g. Zahavi 2018). Many philosophers stress that mindreading is something distinct from empathy and that empathy is “something extra”. Here, however, I have tried to apply all the different approaches. My own position, though, is a phenomenological one.

  7. One problem with the whole debate, though, is that there is no conceptual consensus on what empathy is and implies. The ERC-funded project on social robots, for instance, defines empathy as involving both emotional matching and prosocial behavior. In philosophy, however, empathy is usually not seen as a moral emotion or attitude (see Cross et al. 2018; Zahavi 2018).

  8. For instance, by referring to the classical positions of Stein or Dilthey and combining direct perception with imaginative re-presentation (“Vergegenwärtigung”) (see also Gallagher 2019).

  9. Kanske (2018) distinguishes between affective empathy proper and cognitive theory of mind: whereas the first capacity enables us to feel what others feel, the second helps us to understand what others think or believe. Although I recognize the differences, I will not distinguish mentalizing from empathizing here but will examine different forms of understanding other minds under the umbrella term of empathy, since this is the central term in the current philosophical debate.

  10. The paper focuses on the epistemological question. It will not answer the metaphysical question of whether robots or A.I. have consciousness.

  11. Concerning the deployment of deep learning systems in medicine, for instance, it is necessary to trust the intelligent machine and to understand what it is going to do, e.g. in a medical robot-patient interaction.

  12. However, it remains empirically uncertain whether robots must indeed be humanlike in human-robot interaction (HRI) (Brinck and Balkenius 2018).

  13. One problem, of course, is how we understand the term “understanding”. Monika Dullstein (2012) has shown that Theory of Mind accounts use quite a different notion of understanding than phenomenological accounts do.

  14. It is difficult to give an exact translation of Stein’s concept of “Vergegenwärtigung”. The English translation (Stein 1989) uses “representation” or “representational act” (Stein 1989: 8) as a non-primordial represented “givenness” of others’ or indirect experiences (analogous to memory, expectation, and fantasy) (ibid.). In the debate it is often overlooked that Stein proposes a step model of empathy, according to which the first level is direct perception of the other’s experience, with the second level being a kind of reflection and perspective-taking (Stein 1989: 10).

  15. Gallagher recently defined empathy as follows: “Empathy might […] not only [count] as something that happens, but as a method; and that […] involves putting oneself into the other’s perspective or situation” (2018). In so doing, Gallagher expanded his narrative approach into a perspectival approach (combining the narrative with the subjective perspective).

  16. The narrativistic version of phenomenological approaches, though, implies an imaginative component that enables us to comprehend the intentional structure through narrative imagination, e.g. when an intersubjective interaction is not given (Gallagher and Gallagher 2019).

  17. This is similar to the question raised by the so-called “zombie thought experiment”, which discusses whether we can assume or ascribe consciousness in the case of zombies – creatures that are like us in all physical respects but have no conscious experiences in a rich sense (Chalmers 1996; Dennett 1991).

  18. Empathy as perspective-taking is indeed a capacity that enables viewers to comprehend characters’ narratives and perspectives. However, as a form of sensitive understanding of why the character is feeling, thinking, and acting as she does, it is also an outcome. That empathy is both a process and an outcome has been argued by Coplan (2011) and Goldie (2000).

  19. Misselhorn made a similar argument by noting that “in seeing the T-ing of an inanimate object we imagine perceiving a human T-ing” (2009: 353).

  20. Again, similar arguments could be put forward for other A.I. forms of non-human agents, e.g. abstract virtual shapes. The focus of this paper is on humanoid robots with which human beings cooperate and collaborate. For this to be successful, human beings might ascribe to A.I. not only basic mental states, but also a perspective and a narrative. This might be important for collective intentionality and collective attention.

  21. Kant writes: “If a man shoots his dog because the animal is no longer capable of service, he does not fail in his duty to the dog, for the dog cannot judge, but his act is inhuman and damages in himself that humanity which it is his duty to show towards mankind. If he is not to stifle his human feelings, he must practice kindness towards animals, for he who is cruel to animals becomes hard also in his dealings with men” (Kant 1997: 212).

  22. The phenomenon that empathizers’ responses can tip into eeriness, or even cruelty, the more humanlike robots become is called the “uncanny valley” (see Misselhorn 2009; Mori 2005).

  23. Or as Susan Schneider calls them: “future minds” (in press).

  24. Coeckelbergh proposes an approach similar to mine but takes inspiration from Wittgenstein’s concepts of a form of life and language-games. Yet his paper lacks a clear definition of what he takes empathy to imply (e.g. whether empathy indeed involves caring for the other’s well-being, as his paper seems to suggest).

References

  • Baron-Cohen, S. 1995. Mindblindness: An essay on autism and theory of mind. Cambridge, MA: MIT Press.

  • Batson, C.D. 2009. These things called empathy: Eight related but distinct phenomena. In The social neuroscience of empathy, ed. J. Decety and W. Ickes, 3–15. Cambridge, MA: MIT Press.

  • Benford, G., and E. Malartre. 2007. Beyond human: Living with robots and cyborgs. New York: Tom Doherty Associates.

  • Boddington, P., P. Millican, and M. Wooldridge. 2017. Minds and machines special issue: Ethics and artificial intelligence. Minds and Machines 27 (4): 569–574.

  • Boden, M.A. 2016. AI: Its nature and future. Oxford: Oxford University Press.

  • Breazeal, C.L. 2002. Designing sociable robots. Cambridge, MA: MIT Press.

  • Breithaupt, F. 2019. The dark sides of empathy. Ithaca: Cornell University Press.

  • Bretan, M., G. Hoffman, and G. Weinberg. 2015. Emotionally expressive dynamic physical behaviors in robots. International Journal of Human-Computer Studies 78: 1–16.

  • Brinck, I., and C. Balkenius. 2018. Mutual recognition in human-robot interaction: A deflationary account. Philosophy and Technology: 1–18. https://doi.org/10.1007/s13347-018-0339-x.

  • Chalmers, D.J. 1996. The conscious mind. Oxford: Oxford University Press.

  • Coeckelbergh, M. 2018. Why care about robots? Empathy, moral standing, and the language of suffering. Kairos. Journal of Philosophy & Science 20: 141–158.

  • Colombetti, G. 2013. The feeling body: Affective science meets the enactive mind. Cambridge, MA: MIT Press.

  • Coplan, A. 2011. Understanding empathy: Its features and effects. In Empathy: Philosophical and psychological perspectives, ed. A. Coplan and P. Goldie, 3–18. Oxford: Oxford University Press.

  • Coplan, A., and P. Goldie. 2011. Empathy: Philosophical and psychological perspectives. Oxford: Oxford University Press.

  • Cross, E.S., K.A. Riddoch, J. Pratts, S. Titone, B. Chaudhury, and R. Hortensius. 2018. A neurocognitive investigation of the impact of socialising with a robot on empathy for pain. Preprint. https://doi.org/10.1101/470534.

  • Darling, K. 2016. Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In Robot law, ed. M. Froomkin, R. Calo, and I. Kerr. Cheltenham: Edward Elgar.

  • Darwall, S. 1998. Empathy, sympathy, care. Philosophical Studies 89: 261–282.

  • De Sousa, R. 1987. The rationality of emotion. Cambridge, MA: MIT Press.

  • De Vignemont, F., and P. Jacob. 2012. What is it like to feel another’s pain? Philosophy of Science 79 (2): 295–316.

  • De Vignemont, F., and T. Singer. 2006. The empathic brain: How, when and why? Trends in Cognitive Sciences 10 (10): 435–441.

  • Dennett, D. 1991. Consciousness explained. Boston: Little, Brown, and Co.

  • Dullstein, M. 2012. The second person in the theory of mind debate. Review of Philosophy and Psychology 3 (2): 231–248.

  • Dullstein, M. 2013. Direct perception and simulation: Stein’s account of empathy. Review of Philosophy and Psychology 4: 333–350.

  • Dumouchel, P., and L. Damiano. 2017. Living with robots. Cambridge, MA: Harvard University Press.

  • Engelen, E.M. 2018. Can we share an us-feeling with a digital machine? Emotional sharing and the recognition of one as another. Interdisciplinary Science Reviews 43 (2): 125–135.

  • Engelen, E.M., and B. Röttger-Rössler. 2012. Current disciplinary and interdisciplinary debates on empathy. Emotion Review 4 (1): 3–8.

  • Fodor, J. 1987. Psychosemantics: The problem of meaning in the philosophy of mind. Cambridge, MA: MIT Press.

  • Gallagher, S. 2008. Direct perception in the interactive context. Consciousness and Cognition 17 (2): 535–543.

  • Gallagher, S. 2017. Empathy and theories of direct perception. In The Routledge handbook of philosophy of empathy, ed. H. Maibom, 158–168. New York: Routledge.

  • Gallagher, S., and J. Gallagher. 2019. Acting oneself as another: An actor’s empathy for her character. Topoi (online first). https://doi.org/10.1007/s11245-018-96247.

  • Gallagher, S., and D. Hutto. 2008. Understanding others through primary interaction and narrative practice. In The shared mind: Perspectives on intersubjectivity, ed. J. Zlatev, T. Racine, C. Sinha, and E. Itkonen, 17–38. Amsterdam/Philadelphia: John Benjamins Publishing Company.

  • Gallese, V. 2001. The ‘shared manifold’ hypothesis: From mirror neurons to empathy. Journal of Consciousness Studies 8: 33–50.

  • Goldie, P. 2000. The emotions. Oxford: Oxford University Press.

  • Goldie, P. 2012. The mess inside: Narrative, emotion, and the mind. Oxford: Oxford University Press.

  • Goldman, A. 2006. Simulating minds: The philosophy, psychology, and neuroscience of mindreading. Oxford: Oxford University Press.

  • Goldman, A. 2011. Two routes to empathy: Insights from cognitive neuroscience. In Empathy: Philosophical and psychological perspectives, ed. A. Coplan and P. Goldie, 31–44. Oxford: Oxford University Press.

  • Gopnik, A., and H.M. Wellman. 1994. The theory theory. In Mapping the mind: Domain specificity in cognition and culture, ed. L.A. Hirschfeld and S.A. Gelman, 257–293. Cambridge: Cambridge University Press.

  • Gruen, L. 2009. Attending to nature: Empathetic engagement with the more than human world. Ethics and the Environment 14 (2): 23–38.

  • Gruen, L. 2017. The moral status of animals. In The Stanford encyclopedia of philosophy (Fall 2017 edition), ed. E.N. Zalta. https://plato.stanford.edu/archives/fall2017/entries/moral-animal/.

  • Hickok, G. 2014. The myth of mirror neurons: The real neuroscience of communication and cognition. New York: W. W. Norton & Company.

  • Hoffmann, M., and R. Pfeifer. 2018. Robots as powerful allies for the study of embodied cognition from the bottom up. In The Oxford handbook of 4E cognition, ed. A. Newen, L. de Bruin, and S. Gallagher. Oxford: Oxford University Press.

  • Hutto, D.D. 2008. The narrative practice hypothesis: Clarifications and implications. Philosophical Explorations 11 (3): 175–192.

  • Iacoboni, M. 2011. Within each other: Neural mechanisms for empathy in the primate brain. In Empathy: Philosophical and psychological perspectives, ed. A. Coplan and P. Goldie, 45–57. Oxford: Oxford University Press.

  • Iacoboni, M., R.P. Woods, et al. 1999. Cortical mechanisms of human imitation. Science 286: 2526–2528.

  • Kanske, P. 2018. The social mind: Disentangling affective and cognitive routes to understanding others. Interdisciplinary Science Reviews 43 (2): 115–124.

  • Kant, I. 1997. Lectures on ethics, ed. and trans. P. Heath and J.B. Schneewind. Cambridge: Cambridge University Press.

  • Kasparov, G. 2017. Deep thinking: Where machine intelligence ends and human creativity begins. New York: Public Affairs.

  • Leite, I., A. Pereira, S. Mascarenhas, C. Martinho, R. Prada, and A. Paiva. 2013. The influence of empathy in human-robot relations. International Journal of Human-Computer Studies 71 (3): 250–260.

  • Lin, P., R. Jenkins, and K. Abney. 2017. Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford: Oxford University Press.

  • Loh, J. 2019. Roboterethik: Eine Einführung. Berlin: Suhrkamp.

  • MacLennan, B.J. 2014. Ethical treatment of robots and the hard problem of robot emotions. International Journal of Synthetic Emotions 5 (1): 9–16.

  • Maibom, H. 2017. The Routledge handbook of philosophy of empathy. London: Routledge.

  • Misselhorn, C. 2009. Empathy with inanimate objects and the uncanny valley. Minds and Machines 19 (3): 345–359.

  • Misselhorn, C. In press. Is empathy with robots morally relevant? In Emotional machines: Perspectives from affective computing and emotional human-machine interaction, ed. C. Misselhorn and M. Klein. Wiesbaden.

  • Mori, M. 2005. On the uncanny valley. In Proceedings of the Humanoids-2005 workshop: Views of the uncanny valley. Tsukuba, Japan.

  • Nagel, T. 1974. What is it like to be a bat? The Philosophical Review 83 (4): 435–450.

  • Newen, A. 2015. Understanding others: The person model theory. In Open MIND: 26(T), ed. T. Metzinger and J.M. Windt. Frankfurt am Main: MIND Group.

  • Newen, A., L. De Bruin, and S. Gallagher. 2018. The Oxford handbook of 4E cognition. Oxford: Oxford University Press.

  • Nussbaum, M. 2011. Upheavals of thought: The intelligence of emotions. Cambridge: Cambridge University Press.

  • Plantinga, C. 2009. Moving viewers: American film and the spectator’s experience. Berkeley: University of California Press.

  • Rorty, R. 2001. Redemption from egotism: James and Proust as spiritual exercises. Telos 3 (3): 243–263.

  • Scheutz, M. 2011. Architectural roles of affect and how to evaluate them in artificial agents. International Journal of Synthetic Emotions 2 (2): 48–65.

  • Schmetkamp, S. 2017. Gaining perspectives on our lives: Moods and aesthetic experience. Philosophia 45 (4): 1681–1695.

  • Schmetkamp, S. 2019. Theorien der Empathie: Eine Einführung. Hamburg: Junius.

  • Schneider, S. In press. Future minds: Enhancing and transcending the brain.

  • Slote, M. 2017. The many faces of empathy. Philosophia 45 (3): 843–855.

  • Smith, M. 1995. Engaging characters: Fiction, emotion, and the cinema. Oxford: Clarendon Press.

  • Sobchack, V. 2004. Carnal thoughts: Embodiment and moving image culture. Berkeley: University of California Press.

  • Stein, E. 1989. On the problem of empathy: The collected works of Edith Stein, vol. 3 (3rd revised edition), trans. W. Stein. Washington, D.C.: ICS Publications.

  • Stueber, K. 2006. Rediscovering empathy: Agency, folk psychology, and the human sciences. Cambridge, MA: MIT Press.

  • Stueber, K. 2018. Empathy. In The Stanford encyclopedia of philosophy (Spring 2018 edition), ed. E.N. Zalta. https://plato.stanford.edu/archives/spr2018/entries/empathy/.

  • Vaage, M.B. 2010. Fiction film and the varieties of empathic engagement. Midwest Studies in Philosophy 34: 158–179.

  • Vallor, S. 2011. Carebots and caregivers: Sustaining the ethical ideal of care in the 21st century. Philosophy and Technology 24 (3): 251–268.

  • Weber, K. 2013. What is it like to encounter an autonomous artificial agent? AI & SOCIETY 28: 483–489.

  • Yanal, R.J. 1999. Paradoxes of emotion and fiction. Pennsylvania: Penn State University Press.

  • Zahavi, D. 2011. Empathy and direct social perception: A phenomenological proposal. Review of Philosophy and Psychology 2 (3): 541–558.

  • Zahavi, D. 2014. Self and other: Exploring subjectivity, empathy, and shame. Oxford: Oxford University Press.

  • Zahavi, D., and J. Michael. 2018. Beyond mirroring: 4E perspectives on empathy. In The Oxford handbook of 4E cognition, ed. A. Newen, L. de Bruin, and S. Gallagher, 589–606. Oxford: Oxford University Press.


Author information


Corresponding author

Correspondence to Susanne Schmetkamp.



About this article


Cite this article

Schmetkamp, S. Understanding A.I. — Can and Should we Empathize with Robots? Review of Philosophy and Psychology 11, 881–897 (2020). https://doi.org/10.1007/s13164-020-00473-x



  • DOI: https://doi.org/10.1007/s13164-020-00473-x
