
Empathy: an ethical consideration of AI & others in the workplace

  • Main Paper
  • Published in AI & SOCIETY

Abstract

Empathy is a specific moral aspect of human behavior. The global workplace, and thereby a consideration of employee stakeholders, involves unique behavioral and ethical considerations, including human empathy. Further, the human aspects of workplaces fall within the domain of human resources and managerial oversight in business organizations. As such, human emotions and interactions are complicated by daily work-related expectations, employee/employer interactions and work practices, and the outcomes of employees’ work routines. Business ethics, human resources, and risk management practices are endemic aspects of workplaces. Increasingly, the understanding of models of AI-reliant business practices underscores the need to consider the ethical aspects of AI impacts on employees in the workplace. This paper explores a systematic ethical lens on the opportunities and risks of AI ideation, development, and deployment in business-employee relations practices, one that moves beyond a compliance mindset and introduces a further set of workplace considerations. Empathy is concerned with human intentions. As such, attributive ethical indications of the role of AI in the workplace and its impacts on employees are necessary. Moreover, this paper uses a cognitive lens of empathy and focuses on artificial morality related to the ethical concerns, implications, and practices of AI development, deployment, and workplace practices that may impact employees in a variety of business aspects.

Notes

  1. The American Psychiatric Association proposes diagnostic criteria for identifying Narcissistic Personality Disorder, one of which is “characteristic difficulties” in empathy: “impaired ability to recognize or identify with the feelings and needs of others; excessively attuned to reactions of others, but only if perceived as relevant to self; over- or underestimate of own effect on others.” Cf. Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5) (Washington, DC/London: American Psychiatric Association Publishing, 2013), “Alternative DSM-5 Model for Personality Disorders,” Narcissistic Personality Disorder, p. 767. Cited as DSM-5.

  2. Classic works on narcissism and impaired empathy are Kernberg 1975 and Kohut 1971, esp. chapter 12; influential works on impaired empathy and concomitant harms are Hirigoyen 1998 and Röhr 2005.

  3. Kant’s works, the standard Akademie-Ausgabe (Academy Edition) in 23 volumes, are electronically available through korpora.org at the Philosophy Department of Duisburg University; cf. https://korpora.zim.uni-duisburg-essen.de/Kant/main.html#h-1.1. The word Gefühl, “feeling,” for instance, appears nearly 2,000 times in the works, but words for empathy, such as Empathie, Einfühlung and einfühlen, are missing. Cf. https://korpora.zim.uni-duisburg-essen.de/Kant/suche.html.

References

  • Abedin B, Meske C, Junglas I et al (2022) Designing and managing human-AI interactions. Inf Syst Front 24:691–697. https://doi.org/10.1007/s10796-022-10313-1

  • Allen C, Wallach W, Smit I (2006) Why machine ethics? IEEE Intell Syst 21(4):12–17

  • Alzola M (2015) Virtuous persons and virtuous actions in business ethics and organizational research. Bus Ethics Q 25(3):287–318. https://doi.org/10.1017/beq.2015.24

  • American Psychiatric Association (2013) Diagnostic and statistical manual of mental disorders, 5th edn (DSM-5). American Psychiatric Association Publishing, Washington, DC/London, “Alternative DSM-5 Model for Personality Disorders,” Narcissistic Personality Disorder, pp 767–770. Cited as DSM-5

  • Arnold T, Kasenberg D, Scheutz M (2017) Value alignment or misalignment – what will keep systems accountable? In: AAAI Workshop on AI, Ethics, and Society

  • Bailey O (2022) Empathy and the value of humane understanding. Philos Phenomenol Res 104(1):50–65

  • Banerjee S (2020) A framework for designing compassionate and ethical artificial intelligence and artificial consciousness. Interdisciplin Descript Complex Syst 18(2-A):85–95

  • Bannon L (2023) Can AI do empathy even better than humans? Companies are trying it. Wall Street J. https://www.wsj.com/tech/ai/ai-empathy-business-applications-technology-fc41aea2

  • Batson CD, Ahmad NY (2009) Using empathy to improve intergroup attitudes and relations. Soc Issues Policy Rev 3(1):141–177

  • Bommasani R, Hudson DA, Adeli E et al (2021) On the opportunities and risks of foundation models. Center for Research on Foundation Models (CRFM), Stanford Institute for Human-Centered Artificial Intelligence (HAI), Stanford University. arXiv preprint. https://arxiv.org/abs/2108.07258

  • Bowie NE (1998) A Kantian theory of meaningful work. J Bus Ethics 1083–1092

  • Bowman EH, Haire M (1975) A strategic posture toward corporate social responsibility. Calif Manage Rev 18(2):49–58

  • Bringsjord S (2013) What robots can and can’t be (Vol. 12). Springer Science & Business Media

  • Brown LS (2007) Empathy, genuineness—and the dynamics of power: a feminist responds to Rogers. Psychotherapy 44(3):257–259

  • Bublitz JC (2022) Might artificial intelligence become part of the person, and what are the key ethical and legal implications? AI Soc. https://doi.org/10.1007/s00146-022-01584-y

  • Burton E, Goldsmith J, Koenig S, Kuipers B, Mattei N, Walsh T (2017) Ethical considerations in artificial intelligence courses. AI Mag 38(2):22–34

  • Carroll AB (1999) Corporate social responsibility: Evolution of a definitional construct. Bus Soc 38(3):268–295

  • Cattell JM (1886) The time it takes to see and name objects. Mind 11(41):63–65

  • Center for Research on Foundation Models (CRFM), Stanford Institute for Human-Centered Artificial Intelligence (HAI) (August 19, 2021)

  • De George RT (2003) The ethics of information technology and business. Blackwell

  • De Waal FBM (2005) Primates, monks and the mind: the case of empathy. J Conscious Stud 12(7):38–54

  • De Waal FBM (2007b) Do animals feel empathy? Sci Am Mind 18(6):28–35

  • De Waal FBM (2008) Putting the altruism back into altruism: the evolution of empathy. Annu Rev Psychol 59:279–300

  • De Waal FBM (2007a) The ‘Russian doll’ model of empathy and imitation. In: On being moved: from mirror neurons to empathy, pp 35–48

  • Dierksmeier C (2013) Kant on virtue. J Bus Ethics 113(4):597–609

  • Dodson KE (2003) Kant’s socialism: a philosophical reconstruction. Soc Theory Pract 29(4):525–538

  • Eisenberg N, Miller PA (1987) The relation of empathy to prosocial and related behaviors. Psychol Bull 101(1):91

  • Evert RE, Payne GT, Moore CB, McLeod MS (2018) Top management team characteristics and organizational virtue orientation: an empirical examination of IPO firms. Bus Ethics Q 28(4):427. https://doi.org/10.1017/beq.2018.3

  • Fenwick A, Molnar G (2022) The importance of humanizing AI: using a behavioral lens to bridge the gaps between humans and machines. Discov Artif Intell. https://doi.org/10.1007/s44163-022-00030-8

  • Friedman B, Nissenbaum H (1996) Bias in computer systems. ACM Trans Inf Syst (TOIS) 14(3):330–347

  • Gerdes KE, Lietz CA, Segal EA (2011) Measuring empathy in the 21st century: development of an empathy index rooted in social cognitive neuroscience and social justice. Soc Work Res 35(2):83–93

  • Hagendorff T (2020) The ethics of AI ethics: an evaluation of guidelines. Mind Mach 30(1):99–120

  • Henz P (2021) Ethical and legal responsibility for artificial intelligence. Discover Artif Intell 1(1):1–5

  • Heyman LA (2011) Name, identity, and trademark law. Indiana Law Journal, pp 381–445. http://ilj.law.indiana.edu/articles/86/86_2_Heymann.pdf

  • Hill TE Jr (1980) Humanity as an end in itself. Ethics 91(1):84–99

  • Hoffman ML (1984) Interaction of affect and cognition in empathy. In: Izard CE, Kagan J, Zajonc RB (eds) Emotions, cognition, and behavior. Cambridge University Press, New York, pp 103–131

  • Hooker J, Kim TW (2019a) Humanizing business in the age of artificial intelligence. Bus Ethics Q. ISSN 1052-150X

  • Hooker J, Kim TW (2018) Toward non-intuition-based machine and artificial intelligence ethics: a deontological approach based on modal logic. In: Furman J, Marchant G, Price H, Rossi F (eds) Proceedings of the 2018 AAAI/ACM conference on AI, ethics, and society. AAAI/ACM, New York, pp 130–136

  • Hooker J, Kim TW (2019b) Truly autonomous machines are ethical. AI Mag 40(4):66–73

  • Hosmer LT (1998) Lessons from the wreck of the Exxon Valdez: the need for imagination, empathy, and courage. Bus Ethics Q 8(S1):109–122

  • Hotchkiss S (2002) Why is it always about you? The seven deadly sins of narcissism. Free Press, New York

  • Hourdequin M (2012) Empathy, shared intentionality, and motivation by moral reasons. Ethical Theory Moral Pract 15(3):403–419

  • Jackson PL, Brunet E, Meltzoff AN, Decety J (2006) Empathy examined through the neural mechanisms involved in imagining how I feel versus how you feel pain. Neuropsychologia 44(5):752–761

  • Johnson DG (2015) Technology with no human responsibility? J Bus Ethics 127(4):707–715

  • Jones C, Parker M, Ten Bos R (2005) For Business Ethics. Routledge

  • Jonze S (2013) Her [Film]. Annapurna Pictures

  • Kant I (1785) The categorical imperative

  • Kennedy JA, Kim TW, Strudler A (2016) Hierarchies and dignity: a Confucian communitarian approach. Bus Ethics Q 26(4):479–502

  • Kim TW (2014) Confucian ethics and labor rights. Bus Ethics Q 24(4):565–594

  • Kim TW, Routledge BR (2021) Why a right to an explanation of algorithmic decision-making should exist: a trust-based approach. Bus Ethics Q. https://doi.org/10.1017/beq.2021.3

  • Kim TW, Xu J, Routledge B (2019) Are data subjects consumers, workers or investors? Tepper School of Business. Carnegie Mellon University

  • Kim TW, Hooker J, Donaldson T (2021) Taking principles seriously: a hybrid approach to value alignment in artificial intelligence. J Artif Intell Res 70:871–890

  • Kwon M, Jung MF, Knepper RA (2016) Human expectations of social robots. In: 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, March 7–10, 2016. IEEE, pp 463–464

  • Lamm C, Batson CD, Decety J (2007) The neural substrate of human empathy: effects of perspective-taking and cognitive appraisal. J Cogn Neurosci 19(1):42–58

  • Lin P, Abney K, Bekey G (2011) Robot ethics: mapping the issues for a mechanized world. Artif Intell 175(5–6):942–949

  • Lin P (2016) Relationships with robots: good or bad for humans? Forbes, February 1, 2016. https://www.forbes.com/sites/patricklin/2016/02/01/relationships-with-robots-good-or-bad-for-humans/#33521db27adc

  • Martin K, Shilton K, Smith J (2019) Business and the ethical implications of technology: introduction. J Bus Ethics 160:307–317

  • Montemayor C, Halpern J, Fairweather A (2022) In principle obstacles for empathic AI: why we can’t replace human empathy in healthcare. AI Soc 37:1353–1359

  • Murphy PE (1999) Character and virtue ethics in international marketing: an agenda for managers, researchers and educators. J Bus Ethics 18(1):107–124

  • O’Neill O (1998) Kant on duties regarding nonrational nature. Proc Aristot Soc Suppl Vol 72(1):211–228

  • O’Neill O (2009) A simplified account of Kant’s ethics. In: Cahn SM (ed) Exploring ethics: an introductory anthology. Oxford University Press, pp 411–415

  • Pavlovich K, Krahnke K (2012) Empathy, connectedness and organisation. J Bus Ethics 105(1):131–137

  • Perry A (2023) AI will never convey the essence of human empathy. Nat Hum Behav 7(11):1808–1809

  • Pirson M, Goodpaster K, Dierksmeier C (2016) Guest editors’ introduction: human dignity and business. Bus Ethics Q 26(4):465–478

  • Rawls J (1980) Kantian constructivism in moral theory. J Philos 77(9):515–572

  • Schneider S (2019) Artificial you: AI and the future of your mind. Princeton University Press

  • Schönfeld M (1992) Who or what has moral standing? Am Philos Q 29:353–362

  • Schönfeld M (2013) Imagination, progress, evolution. In: Thompson M (ed) Imagination in Kant’s critical philosophy. Walter de Gruyter, New York, pp 183–203

  • Schulte-Rüther M, Markowitsch HJ, Shah NJ, Fink GR, Piefke M (2008) Gender differences in brain networks supporting empathy. Neuroimage 42(1):393–403

  • Sen A (1987) On ethics and economics. Basil Blackwell, Oxford

  • Shaw F (2019) Machinic empathy and mental health: the relational ethics of machine empathy and artificial intelligence in Her. In: ReFocus: the films of Spike Jonze. Edinburgh University Press, pp 158–175

  • Song Y (2015) How to be a proponent of empathy. Ethical Theory Moral Pract 18(3):437–451

  • Tyler TR, Blader SL (2001) Identity and cooperative behavior in groups. Group Process Intergroup Relat 4(3):207–226

  • Wallach W, Allen C (2008) Moral machines: teaching robots right from wrong. Oxford University Press

  • Walsh T, Levy N, Bell G, Elliott A, Maclaurin J, Mareels I, Wood F (2019) The effective and ethical development of artificial intelligence: an opportunity to improve our wellbeing. Australian Council of Learned Academies

  • Whitman M, Townsend A, Hendrickson A (1999) Cross-national differences in computer-use ethics: a nine-country study. J Int Bus Stud 30:673–687. https://doi.org/10.1057/palgrave.jibs.8490833

  • Zaki J, Weber J, Bolger N, Ochsner K (2009) The neural bases of empathic accuracy. Proc Natl Acad Sci 106(27):11382–11387

Funding

The author has no financial or non-financial interests that are directly or indirectly related to the work submitted for publication.

Author information

Corresponding author

Correspondence to Denise Kleinrichert.

Ethics declarations

Conflict of interest

The author declares that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Kleinrichert, D. Empathy: an ethical consideration of AI & others in the workplace. AI & Soc (2024). https://doi.org/10.1007/s00146-023-01831-w

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s00146-023-01831-w

Keywords
