Ascribing Moral Value and the Embodied Turing Test
What would it take for an artificial agent to be treated as having moral value? As a first step toward answering this question, we ask what it would take for an artificial agent to be capable of the sort of autonomous, adaptive social behavior that is characteristic of the animals humans interact with. We propose that this capacity is best measured by what we call the Embodied Turing Test, in which intelligence is operationally defined in terms of autonomous, adaptive interaction with the environment and with other animals. Three versions of the Embodied Turing Test were performed with a Sony AIBO robot: human participants were asked to differentiate between AIBO in a human-controlled mode and AIBO in a software-controlled mode. Our results indicate that participants were merely guessing at how AIBO was controlled, suggesting that people do not yet have enough experience with robots to evaluate their behavior accurately. This, in turn, suggests that today's humans lack the experience with artificial agents needed to treat them as morally valuable.
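The comparison the abstract describes — whether participants can tell the human-controlled AIBO from the software-controlled one better than chance — can be sketched as an exact binomial test. The trial and success counts below are hypothetical placeholders, not the study's actual data:

```python
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: probability under chance level p
    of an outcome at least as unlikely as k successes out of n trials."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = pmf[k]
    # Sum the probabilities of all outcomes no more likely than the observed one.
    return sum(q for q in pmf if q <= observed + 1e-12)

# Hypothetical data: 60 trials, 33 correct identifications of the controller.
p_value = binomial_two_sided_p(33, 60)
# A large p-value (well above 0.05) is consistent with participants
# guessing at chance, as the abstract reports.
```

A non-significant result here supports the paper's claim that participants could not reliably distinguish the two control modes.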
Similar books and articles
Jamie Cullen (2009). Imitation Versus Communication: Testing for Human-Like Intelligence. Minds and Machines 19 (2):237-254.
Tyler Cowen & Michelle Dawson, What Does the Turing Test Really Mean? And How Many Human Beings (Including Turing) Could Pass?
Benny Shanon (1989). A Simple Comment Regarding the Turing Test. Journal for the Theory of Social Behaviour 19 (June):249-56.
Dale Jacquette (1993). Who's Afraid of the Turing Test? Behavior and Philosophy 20 (21):63-74.
Saul Traiger (2000). Making the Right Identification in the Turing Test. Minds and Machines 10 (4):561-572.
Susan G. Sterrett (2000). Turing's Two Tests for Intelligence. Minds and Machines 10 (4):541-559.
Christopher Wareham (2011). On the Moral Equality of Artificial Agents. International Journal of Technoethics 2 (1):35-42.
James H. Moor (2001). The Status and Future of the Turing Test. Minds and Machines 11 (1):77-93.
Robert M. French (2000). Peeking Behind the Screen: The Unsuspected Power of the Standard Turing Test. Journal of Experimental and Theoretical Artificial Intelligence 12 (3):331-340.
B. Jack Copeland (2000). The Turing Test. Minds and Machines 10 (4):519-539.
B. Edmonds (2000). The Constructibility of Artificial Intelligence (as Defined by the Turing Test). Journal of Logic, Language and Information 9 (4):419-424.
Ayse Pinar Saygin, Ilyas Cicekli & Varol Akman (2000). Turing Test: 50 Years Later. Minds and Machines 10 (4):463-518.
Stuart M. Shieber (2007). The Turing Test as Interactive Proof. Noûs 41 (4):686–713.