What does the Turing test really mean? And how many human beings (including Turing) could pass?
Abstract
The so-called Turing test, as it is usually interpreted, sets a benchmark for deciding when we may call a machine intelligent: if a group of wise observers, conversing with a machine through an exchange of typed messages, could not tell whether they were talking to a human being or to a machine, then the machine counts as intelligent. To pass, the machine must not only be intelligent; it must also respond in a manner that cannot be distinguished from a human being's. On this standard interpretation the Turing test is a criterion for demarcating intelligent from non-intelligent entities, and proponents of artificial intelligence have long taken it as a goalpost for measuring progress.
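The procedure the abstract describes can be pictured as a blind trial: a judge exchanges typed messages with hidden respondents and then tries to say which one is the machine. The sketch below is purely illustrative and not from the paper; the function names, the toy respondents, and the "undetected means passing" scoring rule are all hypothetical stand-ins, assuming the standard interpretation in which the machine passes when the judge cannot do better than chance.

```python
"""Illustrative sketch (not from the paper): the standard-interpretation
Turing test as a blind trial. All names here are hypothetical stand-ins."""
import random

def run_trial(judge_ask, judge_guess, human_reply, machine_reply, n_rounds=3):
    """Run one blind trial: the judge questions channels 'A' and 'B'
    (human and machine in random order) and then guesses which channel
    is the machine. Returns True if the machine goes undetected."""
    # Randomly hide which channel carries the machine.
    channels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        channels = {"A": machine_reply, "B": human_reply}

    transcript = []
    for i in range(n_rounds):
        question = judge_ask(i, transcript)
        answers = {label: reply(question) for label, reply in channels.items()}
        transcript.append((question, answers))

    guess = judge_guess(transcript)                      # "A" or "B"
    truth = "A" if channels["A"] is machine_reply else "B"
    return guess != truth                                # undetected -> True

# Toy usage: identical canned respondents and a judge who guesses at random,
# so across many trials the machine is identified only at chance level.
if __name__ == "__main__":
    human = lambda q: "Hmm, let me think about that."
    machine = lambda q: "Hmm, let me think about that."
    ask = lambda i, transcript: f"Question {i}: what are you thinking about?"
    guess = lambda transcript: random.choice(["A", "B"])
    undetected = sum(run_trial(ask, guess, human, machine) for _ in range(1000))
    print(f"Machine went undetected in {undetected}/1000 trials")
```

In this toy setup the judge's verdict is uninformative by construction; the point is only to make vivid what "could not tell whether they were talking to a human being or to a machine" amounts to as a criterion.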