Ascribing Moral Value and the Embodied Turing Test
Abstract
What would it take for an artificial agent to be treated as having moral value? As a first step toward answering this question, we ask what it would take for an artificial agent to be capable of the kind of autonomous, adaptive social behavior characteristic of the animals that humans interact with. We propose that this capacity is best measured by what we call the Embodied Turing Test, in which intelligence is operationally defined in terms of autonomous, adaptive interaction with the environment and with other animals. Three versions of the Embodied Turing Test were conducted with a SONY AIBO robot: human participants were asked to differentiate between AIBO operating in a human-controlled mode and AIBO operating in a software-controlled mode. Our results indicate that participants could not reliably tell how AIBO was being controlled; their judgments were effectively guesses. Our data suggest that people do not yet have enough experience with robots to accurately evaluate their behavior, which in turn indicates that today's humans do not have enough experience with artificial agents to treat them as morally valuable.
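To make the claim that participants "were effectively guessing" concrete, one standard way to test it is an exact two-sided binomial test of whether discrimination accuracy differs from the 50% expected by chance. The sketch below is purely illustrative: the counts (52 correct judgments out of 100 trials) are hypothetical placeholders, not data from the study, and the helper function binomial_two_sided_p is our own construction rather than the paper's analysis.

    # Illustrative sketch (hypothetical counts, not the study's analysis):
    # exact two-sided binomial test of accuracy against chance (p = 0.5).
    from math import comb

    def binomial_two_sided_p(successes: int, trials: int, p: float = 0.5) -> float:
        """Sum the probabilities of all outcomes no more likely than the observed one."""
        probs = [comb(trials, k) * p**k * (1 - p)**(trials - k) for k in range(trials + 1)]
        observed = probs[successes]
        return min(1.0, sum(q for q in probs if q <= observed + 1e-12))

    # Hypothetical example: 52 correct mode judgments out of 100 trials.
    p_value = binomial_two_sided_p(52, 100)
    print(f"p = {p_value:.3f}")  # a large p-value means accuracy is indistinguishable from chance

Under these assumed numbers the test returns a large p-value, i.e., no evidence that participants could tell the human-controlled mode from the software-controlled mode.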