Abstract
Mark Coeckelbergh (Int J Soc Robot 1:217–221, 2009) argues that robot ethics should investigate what interaction with robots can do to humans rather than focusing on the robot’s moral status. We should ask what robots do to our sociality and whether human–robot interaction can contribute to the human good and human flourishing. This paper extends Coeckelbergh’s call and investigates what it means to live with disembodied AI-powered agents. We address the following question: can human–AI interaction contribute to our moral development? We present an empirically informed philosophical analysis of how the AI personal assistant Siri changes its users’ way of life, based on responses obtained from 20 semi-directive individual interviews with Siri users. We identify changes in the users’ social interaction associated with the adoption of Siri: (1) the indirect effect of reducing opportunities for human interaction, (2) the second-order effect of diminished expectations toward each other in a community, and (3) the acquired preference for hassle-free interaction with Siri over human interaction. We examine these changes in relation to concerns voiced in current debates over the rise of AI, namely the suspicion that humans could become overly reliant on AI (Danaher 2019) and the worry that social AI could impede moral development (Fröding and Peterson, Ethics Inf Technol 23:207–214, 2021; Li, Ethics Inf Technol 23:543–550, 2021). We analyze the ethical costs of these changes in light of virtue ethics and address potential objections along the way. We end by offering directions for thinking about how to live with an AI personal assistant while preserving favorable conditions for moral development.
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Notes
See also Danaher (2018) for a review of concerns about how AI personal assistants can threaten a human’s cognitive abilities.
To differentiate this specific Aristotelian sense of “friendly AI” from the notion of “friendly AI” used in discussions of the goal-alignment problem in AI studies, we put the former in quotation marks throughout this paper. The latter usually refers to AI whose ethical goal is aligned with human values (e.g., Yudkowsky 2001; Bostrom 2014; Tegmark 2018). As such, the term “friendly” in that sense is not necessarily “friendly” in the sense of Aristotelian virtue ethics.
See also Danaher (2018) for a review of the ethical worries voiced, many of which center around the threat AI assistants pose to human cognitive and decision-making abilities.
We call it an “approximate-dilemma” but not “a dilemma” to evade the objection that AI can neither be “friendly” nor “non-friendly” in the sense of “slave-like”. It can resemble an ordinary human friend possessing virtues as well as fragilities.
We are aware of the different conceptions of “virtue”, “moral development”, ethical goals, etc. in the different ethical traditions. For example, Aristotelian virtues of character are firm and unchanging states or character traits, while Confucian “virtue” is closer to generic virtuosity, more like excelling in arts, sports, crafts and skills, and in interacting with others according to lĭ (ritual) (Ames and Rosemont 2011). For ease of discussion, we adopt the same ethical terms (e.g., “virtue”, “moral development”, “empathy”, etc.) throughout, each of which can be understood as a “thin moral concept” that provides “a skeleton of an idea” rather than a “thick concept” that “flesh[es] out that idea in richer detail” (Vallor 2016, p. 43). We will provide an explanatory note where the discrepancy between the different traditions is worth noting.
We take Li to mean that it is typically the case for ordinary users. It excludes users with special needs, say, autistic children. For example, see Elder (2017) for an exposition of the robots’ appeal to autistic children.
One may also retort that in literature, fictional characters do not have faces and bodies, and yet they still serve the purpose of aiding in the moral development of their readers. To work through this criticism requires an account of how literary fiction engages emotion, leading to the development of virtues in readers, which is beyond the scope of this paper. Our brief reply is that for the literary fictional characters to have an impact on the readers’ moral development, readers will likely need to have had experience of primitive emotions, which can then serve as the “raw materials” for more sophisticated fictional emotions (Robinson 2005, 38). Or, they will have had to acquire basic moral capacities already, upon which the moral development enabled by literary fiction is drawn. Physical interaction with humans (or other animals) is a prominent source of primitive emotions and moral capacities. In this sense, physical interaction with humans is still the optimal condition for moral development.
We follow Jane English’s (2014) distinction that “reciprocity” typically applies to the relationship between acquaintances, business partners and neighbors etc., while its variation “mutuality” typically applies to more intimate, personal relationships characterized by love, such as friendship, romance and family.
Jay Garfield contends that in Buddhist ethics, care (Karuṇā) “is not a mere feeling or disposition, but a determination to act to relieve the suffering of sentient beings” (Garfield 2022, 112). In Vallor’s virtue ethical framework, however, Karuṇā is translated as “compassion” instead of as “care” (2016, 81). Garfield (2022, 111–113) calls out this translation as “unfortunate”. He objects that “compassion” carries the connotation of passivity on the part of the moral agent, whereas Karuṇā essentially contains the meaning of “to act”. The implications of the different conceptions, while worth discussing, fall beyond the scope of the current paper. In any event, care—as a disposition or as a way of acting—requires caretaking situations to realize.
In many of the cases where the interviewees think that adopting Siri enhances their virtues, the positive changes are due to Siri’s unstable performance. See the explanatory notes below in Table 2.
This is credited to Ching Yuen Cheung from the University of Tokyo.
See also https://www.j.u-tokyo.ac.jp/adviser/column/n-53/ for a comparison of the social codes between the two cultures.
Jecker et al (2022a, b) note that some negative effects that robots and AI systems can have on human society have to do with the fact that technology companies “steer robot design and deployment” for the companies’ and shareholders’ profits rather than for “bettering human communities” (18). As such, our suggestion may be less viable if it results in making an AI personal assistant less “marketable” in the eyes of the technology companies.
References
Ames R, Rosemont H Jr (2011) Were the early Confucians virtuous? In: Fraser C (ed) Ethics in early China: an anthology. Hong Kong University Press, Hong Kong, pp 17–39
Borenstein J, Pearson Y (2010) Robot caregivers: harbingers of expanded freedom for all? Ethics Inf Technol 12(3):277–288. https://doi.org/10.1007/s10676-010-9236-4
Borgmann A (1984) Technology and the character of contemporary life: a philosophical inquiry. University of Chicago Press, Chicago
Bostrom N (2014) Superintelligence: paths, dangers, strategies. OUP, Oxford
Brey P (1997) New media and the quality of life. Soc Philos Technol Quart Electron J 3:4–18. https://doi.org/10.5840/techne1997319
Brown C, Efstratiou C et al (2013) Tracking serendipitous interactions: how individual cultures shape the office. In: Proceedings of the ACM conference on computer supported cooperative work (CSCW). https://doi.org/10.1145/2531602.2531641
Burr C, Floridi L (2020) The ethics of digital well-being: a multidisciplinary perspective. In: Burr (ed) Ethics of digital well-being. Springer
Burr C, Cristianini N et al (2018) An analysis of the interaction between intelligent software and human users. Minds Mach 28:735–774. https://doi.org/10.1007/s11023-018-9479-0
Chan B (2020) The rise of artificial intelligence and the crisis of moral passivity. AI Soc 35:991–993. https://doi.org/10.1007/s00146-020-00953-9
Coeckelbergh M (2009) Personal robots, appearance, and human good: a methodological reflection on roboethics. Int J Soc Robot 1:217–221. https://doi.org/10.1007/s12369-009-0026-2
Coeckelbergh M (2010) Robot rights? Towards a social-relational justification of moral consideration. Ethics Inf Technol 12:209–221. https://doi.org/10.1007/s10676-010-9235-5
Coeckelbergh M (2012) Care robots, virtual virtue, and the best possible life. In: Brey, Briggle and Spence (ed) The good life in the technological age. Routledge, London, pp 281–292
Coeckelbergh M (2021) How to use virtue ethics for thinking about the moral standing of social robots: a relational interpretation in terms of practices, habits and performance. Int J Soc Robot 13:31–40. https://doi.org/10.1007/s12369-020-00707-z
Coeckelbergh M, Gunkel DJ (2014) Facing animals: a relational, other-oriented approach to moral standing. J Agric Environ Ethics 27:715–733
Confucius (497 BC) The Analects (trans: Legge J). https://ctext.org/analects
Danaher J (2018) Toward an ethics of AI assistants: an initial framework. Philos Technol 31:629–653. https://doi.org/10.1007/s13347-018-0317-3
Danaher J (2019a) The rise of robots and the crisis of moral patiency. AI Soc 34:129–136. https://doi.org/10.1007/s00146-017-0773-9
Danaher J (2019b) The philosophical case for robotic friendship. J Posthuman Stud 3(1):5–24. https://doi.org/10.5325/jpoststud.3.1.0005
Dotson T (2012) Technology, choice and the good life: questioning technological liberalism. Technol Soc 34(4):326–336. https://doi.org/10.1016/j.techsoc.2012.10.004
Dumouchel P (2022) Ethics and robotics, embodiment and vulnerability. Int J Soc Robot. https://doi.org/10.1007/s12369-022-00869-y
Elder A (2017) Robot friends for autistic children: monopoly money or counterfeit currency? In: Lin, Abney, Jenkin (eds) Robot Ethics 2.0: from autonomous cars to artificial intelligence. OUP, New York, pp 113–126
English J (2014) What do grown children owe their parents? In: LaFollette (ed) Ethics in practice: an anthology. Wiley Blackwell, Chichester, pp 219–221
Fröding B, Peterson M (2021) Friendly AI. Ethics Inf Technol 23:207–214. https://doi.org/10.1007/s10676-020-09556-w
Gao Y, Pan Z et al (2018) Alexa, my love: analyzing reviews of Amazon Echo. In: 2018 IEEE SmartWorld conference, pp 372–380. IEEE. https://doi.org/10.1109/SmartWorld.2018.00094
Garfield J (2022) Buddhist ethics: a philosophical exploration. OUP, New York
Giubilini A, Savulescu J (2018) The artificial moral advisor. The “Ideal Observer” meets artificial intelligence. Philos Technol 31:169–188. https://doi.org/10.1007/s13347-017-0285-z
Gordon J-S (2022) The African relational account of social robots: A Step Back? Philos Technol. https://doi.org/10.1007/s13347-022-00532-4
Guzman A (2016) Making AI safe for humans. In: Gehl (ed) Socialbots and their friends: digital media and the automation of sociality. Routledge, London, pp 69–85
Haidt J (2001) The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychol Rev 108(4):814–834. https://doi.org/10.1037/0033-295x.108.4.814
Hongladarom S (2020) The ethics of AI and robotics: a buddhist viewpoint. Lexington Books, Lanham
Jecker NS (2020a) Ending midlife bias: new values for old age. OUP, New York
Jecker NS (2020b) Nothing to be ashamed of: sex robots for older adults with disabilities. J Med Ethics 47(1):26–32. https://doi.org/10.1136/medethics-2020-106645
Jecker NS (2021a) You’ve got a friend in me: sociable robots for older adults in an age of global pandemics. Ethics Inf Technol 23(S1):35–43. https://doi.org/10.1007/s10676-020-09546-y
Jecker NS (2021b) My friend, the robot: an argument for e-friendship. IEEE Int Conf Robot Human Interact Commun. https://doi.org/10.1109/RO-MAN50785.2021.9515429
Jecker NS, Nakazawa E (2022) Bridging east-west differences in ethics guidance for AI and robotics. AI 3(3):764–777. https://doi.org/10.3390/ai3030045
Jecker NS, Atiure CA et al (2022a) The moral standing of social robots: untapped insights from Africa. Philos Technol 35:34. https://doi.org/10.1007/s13347-022-00531-5
Jecker NS, Atiure CA et al (2022b) Two steps forward: an African relational account of moral standing. Philos Technol 35:38. https://doi.org/10.1007/s13347-022-00533-3
Jobin A, Ienca M et al (2019) The global landscape of AI ethics guidelines. Nat Mach Intell 1:389–399. https://doi.org/10.1038/s42256-019-0088-2
Lee SK, Kaver P et al (2021) Social interaction and relationships with an intelligent agent. Int J Human-Comput Stud. https://doi.org/10.1016/j.ijhcs.2021.102608
Li O (2021) Problems with ‘Friendly AI.’ Ethics Inf Technol 23(3):543–550
Metz T, Gaie JBR (2010) The African ethic of Ubuntu/Botho: implications for research on morality. J Moral Educ 39(3):273–290. https://doi.org/10.1080/03057240.2010.497609
Purington A, Taft JG et al (2017) Alexa is my new BFF. In: Proceedings of the 2017 CHI conference extended abstracts on human factors in computing systems, pp 2853–2859. https://doi.org/10.1145/3027063.3053246
Rozin P, Royzman EB (2001) Negativity bias, negativity dominance, and contagion. Pers Soc Psychol Rev 5(4):296–320. https://doi.org/10.1207/S15327957PSPR0504
Sandler R (2014) Ethics and emerging technologies. Palgrave Macmillan UK, London
Sharkey N, Sharkey A (2010) The crying shame of robot nannies: an ethical appraisal. Interact Stud 11(2):161–190. https://doi.org/10.1075/is.11.2.01sha
Sharkey A, Sharkey N (2012) Granny and the robots: ethical issues in robot care for the elderly. Ethics Inf Technol 14:27–40. https://doi.org/10.1007/s10676-010-9234-6
Sharkey A, Sharkey N (2020) We need to talk about deception in social robotics! Ethics Inf Technol 23:309–316. https://doi.org/10.1007/s10676-020-09573-9
Sparrow R, Sparrow L (2006) In the hands of machines? The future of aged care. Minds Mach 16:141–161. https://doi.org/10.1007/s11023-006-9030-6
Sparrow R (2016) Kicking a robot dog. 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). https://doi.org/10.5555/2906831.2906871
Tegmark M (2018) Life 3.0. Being human in the age of artificial intelligence. Penguin Books, London
Turkle S (2011) Alone together: why we expect more from technology and less from each other. Basic Books, New York
Vallor S (2011) Carebots and caregivers: sustaining the ethical ideal of care in the twenty-first century. Philos Technol 24(3):251–268
Vallor S (2016) Technology and the virtues: a philosophical guide to a future worth wanting. OUP, New York
van den Hoven J (2006/2014) Nanotechnology and privacy: the instructive case of RFID. In: Sandler R (ed) Ethics of emerging technologies. Palgrave Macmillan, pp 285–298
Waghid Y (2014) African philosophy of education reconsidered: on becoming human. Routledge, Oxon
Yudkowsky E (2001) Creating Friendly AI 1.0: the analysis and design of benevolent architectures. Machine Intelligence Research Institute
Zuboff S (2019) The age of surveillance capitalism: the fight for a human future at the new frontier of power. Public Affairs, New York
Acknowledgements
The authors would like to thank Paul Dumouchel for serving as their project advisor, and for his valuable comments on an early version of the manuscript.
Funding
The work described in this paper was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. UGC/IDS(R) 23/20), under the Seed Grant Funding Scheme Pilot Project Grant in Environment and Human Health, SCE, HKBU (ref. SCE/PPG/2021/02). This study was approved by the Research Ethics Committee of Hong Kong Baptist University (ref. REC/20-21/0577). The authors have no competing interests to declare that are relevant to the content of this article.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Yeung, L.K.C., Tam, C.S.Y., Lau, S.S.S. et al. Living with AI personal assistant: an ethical appraisal. AI & Soc (2023). https://doi.org/10.1007/s00146-023-01776-0