Synthese 201 (6):1-33 (2023)
Abstract
Intuitively, proper referential extensions of psychological and moral terms exclude artifacts. Yet ordinary speakers commonly treat AI robots as moral patients and use psychological terms to explain their behavior. This paper examines whether this referential shift from the human domain to the AI domain entails semantic changes: do ordinary speakers literally consider AI robots to be psychological or moral beings? Three non-literalist accounts of semantic changes concerning psychological and moral terms used in the AI domain are discussed: the technical view (ordinary speakers express technical senses), the habit view (ordinary speakers subconsciously express ingrained social habits), and the emotion view (ordinary speakers express their own affective empathetic emotional states). I discuss whether these non-literalist accounts accommodate the results of relevant empirical experiments. The non-literalist accounts are shown to be implausible with respect to the ordinary use of agency-terms (e.g., “believe,” “know,” “decide”), and I therefore conclude that the concepts ordinary speakers express by agency-terms in reference to AI robots are similar to the concepts they express when applying the same terms to humans. When ordinary speakers extend emotion-terms and/or moral-patiency-terms to AI robots, however, I argue that semantic changes have taken place, because ordinary speakers are in fact referring to their own affective empathetic emotional states rather than to AI robots. This argument suggests that the judgments made by ordinary speakers regarding the proper referential extensions of emotion-terms and moral-patiency-terms are fallacious.