
Living with AI personal assistant: an ethical appraisal

  • Open Forum
  • AI & SOCIETY

Abstract

Mark Coeckelbergh (Int J Soc Robot 1:217–221, 2009) argues that robot ethics should investigate what interaction with robots can do to humans rather than focusing on the robot’s moral status. We should ask what robots do to our sociality and whether human–robot interaction can contribute to the human good and human flourishing. This paper extends Coeckelbergh’s call and investigates what it means to live with disembodied AI-powered agents. We address the following question: can human–AI interaction contribute to our moral development? We present an empirically informed philosophical analysis of how the AI personal assistant Siri changes its users’ way of life, based on the responses obtained from 20 semi-directive individual interviews with Siri users. We identify changes in the users’ social interaction associated with the adoption of Siri. These changes include: (1) the indirect effect of reducing opportunities for human interaction, (2) the second-order effect of diminished expectations toward each other in a community, and (3) the acquired preference for hassle-free interaction with Siri over human interaction. We examine these changes in relation to concerns voiced in current debates over the rise of AI, namely the suspicion that humans could become overly reliant on AI (Danaher 2019) and the worry that social AI could impede moral development (Fröding and Peterson, Ethics Inf Technol 23:207–214, 2021; Li, Ethics Inf Technol 23:543–550, 2021). We analyze the ethical costs of these changes in light of virtue ethics and address potential objections along the way. We end by offering directions for thinking about how to live with an AI personal assistant while preserving favorable conditions for moral development.


Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Notes

  1. https://www.businessinsider.com/apple-says-siri-has-500-million-users-2018-1.

  2. For the debate revolving around this relational approach to moral standing, see John-Stewart Gordon (2022) and Jecker et al. (2022b).

  3. See also Danaher (2018) for a review of concerns about how AI personal assistants can threaten a human’s cognitive abilities.

  4. To differentiate this specific Aristotelian sense of “friendly AI” from the notion of “friendly AI” in discussions of the goal-alignment problem in AI studies, we put the former in quotation marks throughout this paper. The latter usually refers to AI whose ethical goals are aligned with human values (e.g., Yudkowsky 2001; Bostrom 2014; Tegmark 2018). As such, the term is not necessarily “friendly” in the sense of Aristotelian virtue ethics.

  5. See also Danaher (2018) for a review of the ethical worries voiced, many of which center around the threat of AI assistants on human cognitive and decision-making abilities.

  6. We call it an “approximate dilemma” rather than a “dilemma” to avoid the objection that AI need not be either “friendly” or “non-friendly” in the slave-like sense: it can also resemble an ordinary human friend, possessing virtues as well as frailties.

  7. We are aware of the different conceptions of “virtue”, “moral development”, ethical goals, etc. in the different ethical traditions. For example, Aristotelian virtues of character are firm and unchanging states or character traits, while Confucian “virtue” is closer to a generic virtuosity, more like excelling in arts, sports, crafts and skills, and in interacting with others according to lĭ (ritual) (Ames and Rosemont 2011). For ease of discussion, we adopt the same ethical terms (e.g., “virtue”, “moral development”, “empathy”, etc.) throughout, each of which can be understood as a “thin moral concept” that provides “a skeleton of an idea” rather than a “thick concept” that fleshes out that idea in richer detail (Vallor 2016, p. 43). We provide an explanatory note where the discrepancy between the different traditions is worth noting.

  8. The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

  9. Studies have shown that Alexa users likewise describe this form of AI as a friend or family member (e.g., Purington et al. 2017; Gao et al. 2018).

  10. We take Li to mean that this is typically the case for ordinary users; it excludes users with special needs, say, autistic children. See Elder (2017) for an exposition of robots’ appeal to autistic children.

  11. One may also retort that fictional characters in literature do not have faces and bodies, and yet they still serve the purpose of aiding the moral development of their readers. Working through this criticism requires an account of how literary fiction engages emotion and thereby leads to the development of virtues in readers, which is beyond the scope of this paper. Our brief reply is that for literary fictional characters to have an impact on readers’ moral development, readers will likely need to have had experience of primitive emotions, which can then serve as the “raw materials” for more sophisticated fictional emotions (Robinson 2005, 38). Alternatively, they will have had to acquire basic moral capacities already, on which the moral development enabled by literary fiction draws. Physical interaction with humans (or other animals) is a prominent source of primitive emotions and moral capacities. In this sense, physical interaction with humans remains the optimal condition for moral development.

  12. We follow Jane English’s (2014) distinction that “reciprocity” typically applies to relationships between acquaintances, business partners, neighbors, etc., while its variant “mutuality” typically applies to more intimate, personal relationships characterized by love, such as friendship, romance and family.

  13. Jay Garfield contends that in Buddhist ethics, care (Karuṇā) “is not a mere feeling or disposition, but a determination to act to relieve the suffering of sentient beings” (Garfield 2022, 112). In Vallor’s virtue ethical framework, however, Karuṇā is translated as “compassion” instead of “care” (2016, 81). Garfield (2022, 111–113) calls this translation “unfortunate”, objecting that “compassion” carries a connotation of passivity on the part of the moral agent, whereas Karuṇā essentially contains the meaning of “to act”. The implications of these different conceptions, while worth discussing, fall beyond the scope of the current paper. In any event, care, whether as a disposition or as a way of acting, requires caretaking situations to be realized.

  14. In many of the cases where the interviewees think that adopting Siri enhances their virtues, the positive changes are due to Siri’s unstable performance. See the explanatory notes below in Table 2.

  15. This point is credited to Ching Yuen Cheung of the University of Tokyo.

  16. See also https://www.j.u-tokyo.ac.jp/adviser/column/n-53/ for a comparison of the social codes between the two cultures.

  17. Jecker et al. (2022a, b) note that some negative effects that robots and AI systems have on human society stem from the fact that technology companies “steer robot design and deployment” for the companies’ and shareholders’ profits rather than for “bettering human communities” (18). As such, our suggestion may be less viable if it results in making an AI personal assistant less “marketable” in the eyes of the technology companies.


Acknowledgements

The authors would like to thank Paul Dumouchel for serving as their project advisor, and for his valuable comments on an early version of the manuscript.

Funding

The work described in this paper was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. UGC/IDS(R) 23/20; project title: The Seed Grant Funding Scheme Pilot Project Grant in Environment and Human Health, SCE, HKBU, ref. SCE/PPG/2021/02). This study was approved by the Research Ethics Committee of Hong Kong Baptist University (ref. REC/20-21/0577). The authors have no competing interests to declare that are relevant to the content of this article.

Author information

Corresponding author

Correspondence to Lorraine K. C. Yeung.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (PDF 393 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yeung, L.K.C., Tam, C.S.Y., Lau, S.S.S. et al. Living with AI personal assistant: an ethical appraisal. AI & Soc (2023). https://doi.org/10.1007/s00146-023-01776-0

