Abstract
Intuitively, proper referential extensions of psychological and moral terms exclude artifacts. Yet ordinary speakers commonly treat AI robots as moral patients and use psychological terms to explain their behavior. This paper examines whether this referential shift from the human domain to the AI domain entails semantic changes: do ordinary speakers literally consider AI robots to be psychological or moral beings? Three non-literalist accounts for semantic changes concerning psychological and moral terms used in the AI domain will be discussed: the technical view (ordinary speakers express technical senses), the habit view (ordinary speakers subconsciously express ingrained social habits), and the emotion view (ordinary speakers express their own affective empathetic emotional states). I discuss whether these non-literalist accounts accommodate the results of relevant empirical experiments. The non-literalist accounts are shown to be implausible with respect to the ordinary use of agency-terms (e.g., “believe,” “know,” “decide,” etc.), and therefore I conclude that the concepts ordinary speakers express by agency-terms in reference to AI robots are similar to the concepts they express when applying the same terms to humans. When ordinary speakers extend emotion-terms and/or moral-patiency-terms to AI robots, however, I argue that semantic changes have taken place because ordinary speakers are in fact referring to their own affective empathetic emotional states rather than AI robots. This argument suggests that the judgments made by ordinary speakers regarding the proper referential extensions of emotion-terms and moral-patiency-terms are fallacious.
Notes
Robots are physical objects that are able to act in the world, while AI need not have the power to act in the real world (Danaher, 2019, p. 130). Throughout this paper, I use the term “AI robots” to emphasize the disjunction of robots and AI. In my usage, AI robots include humanoid robots, nonhumanoid robots, chatbots, AI programs like Siri, and so on.
While this paper focuses on the referential meaning of psychological and moral terms, I acknowledge that referential meanings do not exhaust the possible meanings of the terms.
For the purposes of this paper, I treat concepts as mental representations.
I borrow the label “technical view” from Figdor (2018, p. 129).
This paper engages particularly with empirical research on the ordinary uses of psychological and moral terms in the AI domain. I do not address the large body of work discussing non-linguistic aspects of our attitudes toward AI robots. For an introduction to that discussion, see Wykowska’s (2021) collection of empirical research in experimental psychology using AI robots to study sociocognitive mechanisms.
To clarify, by claiming that the unexpected referential shift concerning agency-terms preserves semantics, I am not asserting or implying that robots make decisions in the same way that human beings do.
Danaher (2021) also adopts this relational approach.
In the current paper, I argue that ordinary speakers use agency-terms with the literal meaning in the AI domain, but when they extend emotion-terms or moral-patiency-terms to AI robots, they fail to mean that the robots are emotional entities or moral patients. In Sect. 4, I discuss the implications of this argument for the relational approach.
While cognitive empathy refers to skills of recognizing others’ emotions and taking others’ perspectives, affective empathy involves sharing others’ feelings (Caravita et al., 2009, p. 141).
Marchesi et al.’s (2019, p. 7) participants were asked to move a slider toward the statement they found to be the most plausible description of the series of pictures showing iCub’s behaviors (e.g., “iCub classifies cubes by color” vs. “iCub would like to keep this cube”).
For the statements that Ward et al. (2013) used, see their Supplementary Information, Part I: Mind Indices.
Dennett’s (2013, pp. 91–95) homunculus functionalism is a metaphysics of mind version of the intentional stance. In this paper, however, I do not claim that intentionality is fundamentally derived. Because I distinguish between justification and interpretation, the “technical view,” as I use it here, merely holds that ordinary speakers use folk psychology in the human domain.
Huebner (2010, p. 150) claims that no single uniform strategy is adopted by people in ascribing mental states, and suggests two sorts, namely the intentional strategy (i.e., a strategy that is sensitive to consideration of agency, thus beliefs, decision, etc.) and the personhood strategy (i.e., a strategy that is sensitive to consideration of desires and emotions). To simplify matters, under the technical view, I do not differentiate between the two types of psychological terms. But I do apply the distinction to the habit and emotion views.
For details of Marchesi et al.’s experiment, see Sect. 2.1.
Marchesi et al. speculate that the intentional use of psychological terms could have depended on specific scenarios, such as pictures comprising the sequence of iCub’s behaviors and psychological words used in the description (e.g., “cheat” or “understood”), as well as on individual differences among participants, such as cultural backgrounds (p. 9). These possible factors involved in the adoption of the intentional stance do not need to be discussed in detail to argue against the technical view. However, these factors will be extensively discussed in relation to the objections to the philosophical significance of the data, namely the habit view and the emotion view.
Several other studies in addition to Bossi et al.’s (2020) have found sociocognitive mechanistic similarities (particularly concerning the intentional stance) between the human and AI domains. For example, Wykowska et al. (2014) found that robot actions elicit similar perceptual effects as representations of human actions. Other studies (Abubshait & Wykowska, 2020; Ciardo et al., 2020; Hinz et al., 2019; Wiese et al., 2012) have shown that humans and robots can influence human behavior in similar ways.
Marks of the personal/subpersonal distinction include rationality and beliefs (Drayson, 2014).
By the input/output account, Alexander (2012) means “given scenario x under conditions y, a certain percentage of subjects give answer z” (p. 33).
For the statements used by Marchesi et al. (2019), see the InStance Questionnaire in their Supplementary Material 2 English (p. 11).
Kneer and Stuart’s (2021) experimental results also provide evidence of the mindful use of the agency-term “know,” as their participants’ use of the term correlated with the participants’ awareness of the robots’ cognitive abilities (p. 410).
It’s worth stressing that I am not arguing for some particular theory of philosophy of mind (e.g., functionalism)—the current paper is not about the metaphysics of mind.
In Ward et al.’s (2013) experiment, different participants were assigned to the robot group, the persistent vegetative state patient group, and the corpse group, respectively (i.e., the types of vignettes were between-subject variables). However, all the participants read instances of violations of the social rule that one should not intentionally harm others. For example, “Participants in the harm condition read that the patient’s nurse intentionally unplugged Ann’s food supply every evening with the intention of starving her and obtaining money from a distant relative named in the patient’s will” (p. 1439).
Huebner (2010, pp. 150–151) also distinguishes between the ways we use agency-terms and emotion-terms: “we distinguish two sorts of strategies that we adopt in evaluating the ascription of mental states to various entities. The first is a strategy that is sensitive to considerations of agency; the second is sensitive to considerations of personhood,” where the latter focuses “on the states that allow an entity to be concerned with how things go for her.” In contrast to the emotion view, however, Huebner takes the commonsense understanding of the mind to be intimately tied to philosophically informed intuitions about the mind (p. 154).
Lonigro et al. (2017, p. 5) assessed cognitive empathy by testing whether their participants distinguished characters’ real feelings from the emotions the characters showed to other people, and they assessed affective empathy with items such as “when somebody tells me a nice story, I feel as if the story is happening to me.”
Pleo is “capable of showing emotional reactions such as joy and fear, and…believable pain reactions” (Rosenthal-von der Pütten et al., 2013, p. 21).
The video types were within-subject variables.
I suspect that the data of Stuart and Kneer (2021) and Huebner (2010) also imply a strong correlation between the attribution of emotions and affective empathy. The participants in these studies attributed emotion-related states (such as “desire,” “pain,” and “happiness”) significantly less frequently than they attributed agency-related states (such as “believe” and “know”) to AI robots. Both studies used vignettes (Huebner, 2010, p. 138; Stuart & Kneer, 2021, p. 10) that explain the robots’ cognitive capacities but contain little information that would lead readers to feel positive or negative affective empathy for the robots. As a result, I argue, the participants did not affectively empathize with the robots’ purported feelings (as described in the vignettes), nor did they attribute emotion-related states to the robots.
Wang and Krumhuber (2018) treated types of robots as between-subject variables in one study and as within-subject variables in another study. They provided the following descriptions to participants (p. 4): “This robot can quickly learn various movements from demonstrators and also make additional changes either to optimize the behavior or adjust to situations.” In order to assign economic versus social value, these profile descriptions were combined with information about the robot’s corresponding function: (a) economic condition: e.g., “Therefore, this robot can work as a salesperson in stores and supermarkets, guiding customers to different products and answering their inquiries” and (b) social condition: e.g., “Therefore, this robot can work as a social caregiver, keeping those socially isolated/lonely people accompanied, reminding them of their daily activities and having conversations with them.”
Wang and Krumhuber (2018, p. 7) write that “while an economic function implies cognitive skills, it is the social function that makes robots capable of experiencing emotions in the eyes of the observer.” Nonetheless, they do not explain why the social function leads to the ascription of emotions.
In this paper’s discussion, I ignore the outliers (i.e., those who ascribed emotions to the economic-robot without experiencing affective empathetic feelings). With respect to agency-terms, nearly half of Marchesi et al.’s (2019) participants chose the intentional stance over the design stance. In Wang and Krumhuber’s (2018) and Shin’s (2021) studies, however, participants disagreed with a statement describing emotional states of an economic-robot at significantly higher rates than they disagreed with a statement describing emotional states of a social-robot.
Determining whether the semantic expansion of the proper domain of agency-terms deserves serious philosophical consideration would require further discussion, for the following reasons. First, Marchesi et al. (2019) remark that their participants’ explanations of iCub’s behavior were somewhat biased toward the mechanistic stance. Second, in Mikalonytė and Kneer’s (2022) experiment, participants were much less willing to ascribe artistic beliefs and intentions (or desires) to AI robots compared to non-artistic psychological states. These experimental results suggest that the ordinary expansion of the proper domain of agency-terms depends on the specific type of agency-term.
This interim conclusion (Sect. 2.4) mentions two interesting topics for future research: (1) Why do we not feel affective empathy for an economic-robot, and therefore not consider it as a sentient being? (2) Why does affective empathy play an essential role in the AI domain (and the geometric shape domain), but not in the human domain, when it comes to treating entities as sentient beings? These are important questions for researchers to consider in order to better understand how we attribute sentience to entities and how our emotional responses play a role in that process.
In Sect. 3, I assume that the same concept of moral patiency is expressed in both human and non-human-animal domains. I avoid discussing the moral status of AI robots in relation to the moral status of non-human animals, as discussing animal ethics would complicate the topic, and I want to strictly focus on the semantics of moral terms used in the AI domain in relation to persons.
Wang and Krumhuber (2018) provided the following descriptions to their participants (p. 5): “According to a recent business report, this [economic] robot is predicted to be of high (low) economic value. By economic value, we mean the expected financial benefits and corporate profits they are going to bring to the corporate world. … According to a recent social report, this [social] robot is predicted to be of high (low) social value. By social value, we mean the expected social support and companionship they are going to bring to the human society.”
To measure physiological arousal, Rosenthal-von der Pütten et al. (2013, p. 23) used a multi-modality physiological monitoring device that encodes biological signals (i.e., skin conductance responses) in real time.
Lonigro et al.’s (2017, p. 5) participants were tested for the ability to understand moral emotions (happiness, sadness, anger, and guilt) of fictional characters in moral stories with pictures.
I ignore the outliers (i.e., those who ascribed moral patiency to the economic-robot without experiencing affective empathetic feelings) for the same reason explained in footnote 36, namely, that the outliers do not reflect the typical ascription of moral patiency to AI robots.
The studies cited demonstrate that ordinary speakers not only view AI robots as agents, but also hold them morally responsible for their actions (e.g., Hong et al., 2020; Kneer, 2021; Kneer & Stuart, 2021; Stuart & Kneer, 2021) and even consider them blameworthy or deserving of punishment (e.g., Kahn et al., 2012; Lima et al., 2021). However, the relationship between agency and blame/punishment is complex and beyond the scope of this paper. Future research could explore whether the meaning of moral-agency-terms changes in the context of AI, compared to their usage in the human domain.
References
Abubshait, A., Perez-Osorio, J., De Tommaso, D., & Wykowska, A. (2021). Collaboratively framed interactions increase the adoption of intentional stance towards robots. In 2021 30th IEEE international conference on robot & human interactive communication (RO-MAN) (pp. 886–891). https://doi.org/10.1109/RO-MAN50785.2021.9515515
Abubshait, A., & Wykowska, A. (2020). Repetitive robot behavior impacts perception of intentionality and gaze-related attentional orienting. Frontiers in Robotics and AI, 7, 565825. https://doi.org/10.3389/frobt.2020.565825
Alexander, J. (2012). Experimental philosophy: An introduction. Polity Press.
Bennett, M., Dennett, D., Hacker, P. M. S., & Searle, J. (2007). Neuroscience and philosophy: Brain, mind, and language. Columbia University Press.
Bennett, M., & Hacker, P. M. S. (2022). Philosophical foundations of neuroscience (2nd ed.). Wiley.
Birch, J. (2020). The place of animals in Kantian ethics. Biology & Philosophy, 35, 8. https://doi.org/10.1007/s10539-019-9712-0
Bossi, F., Willemse, C., Cavazza, J., Marchesi, S., Murino, V., & Wykowska, A. (2020). The human brain reveals resting state activity patterns that are predictive of biases in attitudes towards robots. Science Robotics, 5(eabb6652), 1–8. https://doi.org/10.1126/scirobotics.abb6652
Caravita, S., Di Blasio, P., & Salmivalli, C. (2009). Unique and interactive effects of empathy and social status on involvement in bullying. Social Development, 18(1), 140–163. https://doi.org/10.1111/j.1467-9507.2008.00465.x
Chaminade, T., Rosset, D., Da Fonseca, D., Nazarian, B., Lutcher, E., Cheng, G., & Deruelle, C. (2012). How do we think machines think? An fMRI study of alleged competition with an artificial intelligence. Frontiers in Human Neuroscience, 6, 103. https://doi.org/10.3389/fnhum.2012.00103
Ciardo, F., Beyer, F., De Tommaso, D., & Wykowska, A. (2020). Attribution of intentional agency towards robots reduces one’s own sense of agency. Cognition, 194(104109), 1–12. https://doi.org/10.1016/j.cognition.2019.104109
Coeckelbergh, M. (2011a). Humans, animals, and robots: A phenomenological approach to human-robot relations. International Journal of Social Robotics, 3, 197–204. https://doi.org/10.1007/s12369-010-0075-6
Coeckelbergh, M. (2011b). You, robot: On the linguistic construction of artificial others. AI & Society, 26, 61–69. https://doi.org/10.1007/s00146-010-0289-z
Coeckelbergh, M. (2014). The moral standing of machines: Towards a relational and non-Cartesian moral hermeneutics. Philosophy & Technology, 27, 61–77. https://doi.org/10.1007/s13347-013-0133-8
Danaher, J. (2019). The rise of the robots and the crisis of moral patiency. AI & Society, 34(1), 129–136. https://doi.org/10.1007/s00146-017-0773-9
Danaher, J. (2021). What matters for moral status: Behavioral or cognitive equivalence? Cambridge Quarterly of Healthcare Ethics, 30(3), 472–478. https://doi.org/10.1017/S0963180120001024
Decety, J., & Cowell, J. M. (2014). Friends or foes: Is empathy necessary for moral behavior? Perspectives on Psychological Science, 9(5), 525–537. https://doi.org/10.1177/1745691614545130
Dennett, D. C. (1997). True believers: the intentional strategy and why it works. In J. Haugeland (Ed.), Mind design II (pp. 57–79). MIT Press.
Dennett, D. C. (2013). Intuition pumps and other tools for thinking. W. W. Norton & Company.
Dennett, D. C. (2017). Why robots won’t rule the world. BBC Viewsnight. Retrieved 31 March, 2023, from https://www.youtube.com/watch?v=2ZxzNAEFtOE&t=1s
Dennett, D. C. (2019). What can we do? In J. Brockman (Ed.), Possible minds: Twenty-five ways of looking at AI (pp. 41–53). Penguin Press.
Drayson, Z. (2014). The personal/subpersonal distinction. Philosophy Compass, 9(5), 338–346. https://doi.org/10.1111/phc3.12124
Edwards, A. D., & Shafer, D. M. (2022). When lamps have feelings: Empathy and anthropomorphism toward inanimate objects in animated films. Projections, 16(2), 27–52. https://doi.org/10.3167/proj.2022.160202
Figdor, C. (2018). Pieces of mind: The proper domain of psychological predicates. Oxford University Press.
Fodor, J. (1990). A theory of content and other essays. MIT Press.
Goldman, A. I. (2018). Philosophical applications of cognitive science. Routledge.
Hansen, N. (2014). Contemporary ordinary philosophy. Philosophy Compass, 9(8), 556–569. https://doi.org/10.1111/phc3.12152
Hansen, N. (2015). Experimental philosophy of language. In The Oxford handbook of topics in philosophy. Oxford Academic. https://doi.org/10.1093/oxfordhb/9780199935314.013.53
Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. The American Journal of Psychology, 57(2), 243–259. https://doi.org/10.2307/1416950
Hinz, N., Ciardo, F., & Wykowska, A. (2019). Individual differences in attitudes toward robots predict behavior in human-robot interaction. In International Conference on Social Robotics (ICSR 2019; Lecture Notes in Computer Science, vol. 11876, pp. 64–73). Springer. https://doi.org/10.1007/978-3-030-35888-4_7
Hong, J., Wang, Y., & Lanz, P. (2020). Why is artificial intelligence blamed more? Analysis of faulting artificial intelligence for self-driving car accidents in experimental settings. International Journal of Human-Computer Interaction, 36(18), 1768–1774. https://doi.org/10.1080/10447318.2020.1785693
Huebner, B. (2010). Commonsense concepts of phenomenal consciousness: Does anyone care about functional zombies? Phenomenology and the Cognitive Sciences, 9, 133–155. https://doi.org/10.1007/s11097-009-9126-6
Kahn, P. H., Kanda, T., Ishiguro, H., Gill, B. G., Ruckert, J. H., Shen, S., Gary, H. E., Reichert, A. L., Freier, N. G., & Severson, R. L. (2012). Do people hold a humanoid robot morally accountable for the harm it causes? In Proceedings of the 7th ACM/IEEE international conference on human-robot interaction (HRI) (pp. 33–40). https://doi.org/10.1145/2157689.2157696
Kneer, M. (2021). Can a robot lie? Exploring the folk concept of lying as applied to artificial agents. Cognitive Science, 45, e13032. https://doi.org/10.1111/cogs.13032
Kneer, M., & Stuart, M. T. (2021). Playing the blame game with robots. In Companion of the 2021 ACM/IEEE international conference on human-robot interaction (HRI ‘21 Companion) (pp. 407–411). Association for Computing Machinery. https://doi.org/10.1145/3434074.3447202
Langer, E. J. (1992). Matters of mind: Mindfulness/mindlessness in perspective. Consciousness and Cognition, 1, 289–305. https://doi.org/10.1016/1053-8100(92)90066-J
Lima, G., Cha, M., Jeon, C., & Park, K. S. (2021). The conflict between people’s urge to punish AI and legal systems. Frontiers in Robotics and AI, 8, 756242. https://doi.org/10.3389/frobt.2021.756242
Lonigro, A., Baiocco, R., Baumgartner, E., & Laghi, F. (2017). Theory of mind, affective empathy, and persuasive strategies in school-aged children. Infant and Child Development, 26, e2022. https://doi.org/10.1002/icd.2022
Marchesi, S., Ghiglino, D., Ciardo, F., Perez-Osorio, J., Baykara, E., & Wykowska, A. (2019). Do we adopt the intentional stance toward humanoid robots? Frontiers in Psychology, 10, 450. https://doi.org/10.3389/fpsyg.2019.00450
Mikalonytė, E. S., & Kneer, M. (2022). Can artificial intelligence make art? Folk intuitions as to whether AI-driven robots can be viewed as artists and produce art. ACM Transactions on Human-Robot Interaction, 11(4), 43. https://doi.org/10.1145/3530875
Millikan, R. G. (1984). Language, thought, and other biological categories: New foundations for realism. MIT Press.
Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153
Perez-Osorio, J., Marchesi, S., Ghiglino, D., Ince, M., & Wykowska, A. (2019). More than you expect: Priors influence the adoption of intentional stance toward humanoid robots. In International Conference on Social Robotics (ICSR 2019; Lecture Notes in Computer Science; Vol. 11876, pp. 119–129). https://doi.org/10.1007/978-3-030-35888-4_12
Prescott, T. J., & Robillard, J. M. (2021). Are friends electric? The benefits and risks of human-robot relationships. iScience, 24, 101993. https://doi.org/10.1016/j.isci.2020.101993
Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. Center for the Study of Language and Information/Cambridge University Press.
Rosenthal-von der Pütten, A. M., Krämer, N. C., Hoffmann, L., Sobieraj, S., & Eimler, S. C. (2013). An experimental study on emotional reactions towards a robot. International Journal of Social Robotics, 5, 17–34. https://doi.org/10.1007/s12369-012-0173-8
Shank, D. B., Graves, C., Gott, A., Gamez, P., & Rodriguez, S. (2019). Feeling our way to machine minds: People’s emotions when perceiving mind in artificial intelligence. Computers in Human Behavior, 98, 256–266. https://doi.org/10.1016/j.chb.2019.04.001
Shin, H. (2021). Who has a mind? Mind perception and moral decision toward robots. Journal of Social Science, 32, 195–213. https://doi.org/10.16881/jss.2021.01.32.1.195
Singer, P. (2009). Speciesism and moral status. Metaphilosophy, 40(3/4), 567–581. https://doi.org/10.1111/j.1467-9973.2009.01608.x
Slater, M., Antley, A., Davison, A., Guger, C., Barker, C., Pistrang, N., & Sanchez-Vives, M. V. (2006). A virtual reprise of the Stanley Milgram obedience experiments. PLoS ONE, 1(1), e39. https://doi.org/10.1371/journal.pone.0000039
Stuart, M. T., & Kneer, M. (2021). Guilty artificial minds: Folk attributions of mens rea and culpability to artificially intelligent agents. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 363. https://doi.org/10.1145/3479507
Sung, J., Guo, L., Grinter, R. E., & Christensen, H. I. (2007). “My Roomba is Rambo”: Intimate home appliances. In J. Krumm et al. (Eds.), UbiComp 2007: Ubiquitous Computing (Lecture Notes in Computer Science, Vol. 4717, pp. 145–162). Springer. https://doi.org/10.1007/978-3-540-74853-3_9
Thellman, S., Silvervarg, A., & Ziemke, T. (2017). Folk-psychological interpretation of human vs. humanoid robot behavior: Exploring the intentional stance toward robots. Frontiers in Psychology, 8, 1962. https://doi.org/10.3389/fpsyg.2017.01962
Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. Basic Books.
Wang, X., & Krumhuber, E. G. (2018). Mind perception of robots varies with their economic versus social function. Frontiers in Psychology, 9, 1230. https://doi.org/10.3389/fpsyg.2018.01230
Ward, A. F., Olsen, A. S., & Wegner, D. M. (2013). The harm-made mind: Observing victimization augments attribution of minds to vegetative patients, robots, and the dead. Psychological Science, 24(8), 1437–1445. https://doi.org/10.1177/0956797612472343
Wiese, E., Wykowska, A., Zwickel, J., & Müller, H. J. (2012). I see what you mean: How attentional selection is shaped by ascribing intentions to others. PLoS ONE, 7(9), e45391. https://doi.org/10.1371/journal.pone.0045391
Wykowska, A. (2021). Robots as mirrors of the human mind. Current Directions in Psychological Science, 30(1), 34–40. https://doi.org/10.1177/0963721420978609
Wykowska, A., Chellali, R., Al-Amin, M., & Müller, H. J. (2014). Implications of robot actions for human perception: How do we represent actions of the observed robots? International Journal of Social Robotics, 6, 357–366. https://doi.org/10.1007/s12369-014-0239-x
Acknowledgements
I would like to thank Heeok Heo, Hong-Im Shin, and On-Soon Lee for their questions and encouragement during my initial attempts at working out some of these issues; the participants in the May 2022 102nd monthly workshop at Sogang University’s Institute of Philosophical Studies, particularly Sangkyu Shin, for helpful comments on an earlier version of this paper; and the participants in the October 2022 “Science, Technology, and Humanities” conference at Kyung Hee University’s Institute of Humanities, especially Poong Shil Lee, for inviting me to the conference. I am also very grateful to two anonymous reviewers for their detailed critical comments on previous drafts, which led to significant improvements and clarifications throughout.
Funding
This work was supported by a research promotion program of SCNU.
Ethics declarations
Conflict of interest
The author declares that he has no conflicts of interest or competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Noh, H. Interpreting ordinary uses of psychological and moral terms in the AI domain. Synthese 201, 209 (2023). https://doi.org/10.1007/s11229-023-04194-3