Blog of the American Philosophical Association (2024)
Abstract
I argue that, given how AI models work and how ordinary human rationality works, it is very likely that people are anthropomorphizing AI, with potentially serious consequences. There are good reasons to doubt that LLMs have anything like human understanding, and even if they have representations or meaningful contents in some sense, these are unlikely to correspond to our ordinary understanding of natural language. Nevertheless, there are natural, and in some ways quite rational, pressures to anthropomorphize or personify LLMs and other AI systems in biased ways. These include not only the classical or obvious forms of personifying AI, such as taking systems to be sentient or to have consciousness and understanding, but also taking AI to simulate or function like understanding, to track the meaning of our language and our reasoning or logic, or to have a model of the world our language is about. These subtler, less obvious forms of anthropomorphism or personification can contribute to automation bias, AI hype, and all their attendant risks, and can encourage stronger anthropomorphism as multi-modal AI assistants are developed further.