About this topic
Summary  The Turing test is a test for intelligence in machines. In 1950, Alan Turing published "Computing Machinery and Intelligence", in which he described a game in which a human judge converses with a human and a language-using computer, each hidden away in a separate room. The computer's aim is to fool the judge into thinking it is the human. Turing's point is that if a computer could pass such a test reliably and repeatedly, we should regard it as intelligent at the human level. Chatterbots are one contemporary legacy of Turing's test (a minimal sketch of one follows below).
Key works  Turing, Alan (1950), "Computing Machinery and Intelligence", Mind 59 (236): 433–460; Weizenbaum, Joseph (1966), "ELIZA—A Computer Program for the Study of Natural Language Communication Between Man and Machine", Communications of the ACM 9 (1): 36–45.
Introductions  McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd.; Weizenbaum, Joseph (1976), Computer Power and Human Reason: From Judgment to Calculation, W. H. Freeman and Company, ISBN 0-7167-0463-3.
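As one concrete illustration of the chatterbot legacy mentioned in the summary, here is a hypothetical, pared-down ELIZA-style responder, sketched in Python. It maps surface patterns in the interlocutor's utterance onto canned replies and has no model of meaning at all; the rules and wording are invented for this example and are not Weizenbaum's original 1966 script (listed under Key works).

    import random
    import re

    # Hypothetical, minimal ELIZA-style rules: each entry pairs a regular expression
    # with canned reply templates; {0} is filled with whatever text the pattern captured.
    RULES = [
        (r"\bi need (.*)",  ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (r"\bi am (.*)",    ["How long have you been {0}?", "Why do you say you are {0}?"]),
        (r"\bbecause (.*)", ["Is that the real reason?", "What else might explain it?"]),
        (r"\b(computer|machine)s?\b", ["Do computers worry you?"]),
    ]
    DEFAULTS = ["Please go on.", "I see.", "Can you say more about that?"]

    def reply(utterance: str) -> str:
        """Return a canned response from the first rule whose pattern matches."""
        text = utterance.strip().rstrip(".!?")
        for pattern, templates in RULES:
            match = re.search(pattern, text, re.IGNORECASE)
            if match:
                return random.choice(templates).format(*match.groups())
        return random.choice(DEFAULTS)

    if __name__ == "__main__":
        # A tiny exchange; a judge in Turing's game would quickly find the seams.
        for line in ["I am unsure whether machines can think.",
                     "Because thinking seems to need a mind."]:
            print("judge:", line)
            print("bot:  ", reply(line))

Programs of this kind can sometimes fool a casual interlocutor for a short while, which is part of why many of the entries below debate whether conversational imitation is an adequate criterion of intelligence.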
1 — 50 / 137
  1. Darren Abramson (2011). Philosophy of Mind Is (in Part) Philosophy of Computer Science. Minds and Machines 21 (2):203-219.
    In this paper I argue that whether or not a computer can be built that passes the Turing test is a central question in the philosophy of mind. Then I show that the possibility of building such a computer depends on open questions in the philosophy of computer science: the physical Church-Turing thesis and the extended Church-Turing thesis. I use the link between the issues identified in philosophy of mind and philosophy of computer science to respond to a prominent argument (...)
  2. Darren Abramson (2008). Turing's Responses to Two Objections. Minds and Machines 18 (2):147-167.
    In this paper I argue that Turing’s responses to the mathematical objection are straightforward, despite recent claims to the contrary. I then go on to show that by understanding the importance of learning machines for Turing as related not to the mathematical objection, but to Lady Lovelace’s objection, we can better understand Turing’s response to Lady Lovelace’s objection. Finally, I argue that by understanding Turing’s responses to these objections more clearly, we discover a hitherto unrecognized, substantive thesis in his philosophical (...)
  3. Varol Akman & Patrick Blackburn (2000). Editorial: Alan Turing and Artificial Intelligence. [REVIEW] Journal of Logic, Language and Information 9 (4):391-395.
  4. Samuel Alexander (2011). A Paradox Related to the Turing Test. The Reasoner 5 (6):90-90.
  5. G. Alper (1990). A Psychoanalyst Takes the Turing Test. Psychoanalytic Review 77:59-68.
  6. Reza Amini, Catherine Sabourin & Joseph de Koninck (2011). Word Associations Contribute to Machine Learning in Automatic Scoring of Degree of Emotional Tones in Dream Reports. Consciousness and Cognition 20 (4):1570-1576.
  7. John Barresi (1987). Prospects for the Cyberiad: Certain Limits on Human Self-Knowledge in the Cybernetic Age. Journal for the Theory of Social Behaviour 17 (March):19-46.
  8. Andrew Beedle (1998). Sixteen Years of Artificial Intelligence: Mind Design and Mind Design II. Philosophical Psychology 11 (2):243–250.
    John Haugeland's Mind design and Mind design II are organized around the idea that the fundamental idea of cognitive science is that, “intelligent beings are semantic engines — in other words, automatic formal systems with interpretations under which they consistently make sense”. The goal of artificial intelligence research, or the problem of “mind design” as Haugeland calls it, is to develop computers that are in fact semantic engines. This paper canvasses the changes in artificial intelligence research reflected in the different (...)
  9. Christian Beenfeldt (2006). The Turing Test: An Examination of its Nature and its Mentalistic Ontology. Danish Yearbook of Philosophy 40:109-144.
  10. Hanoch Ben-Yami (2005). Behaviorism and Psychologism: Why Block's Argument Against Behaviorism is Unsound. Philosophical Psychology 18 (2):179-186.
    Ned Block ((1981). Psychologism and behaviorism. Philosophical Review, 90, 5-43.) argued that a behaviorist conception of intelligence is mistaken, and that the nature of an agent's internal processes is relevant for determining whether the agent has intelligence. He did that by describing a machine which lacks intelligence, yet can answer questions put to it as an intelligent person would. The nature of his machine's internal processes, he concluded, is relevant for determining that it lacks intelligence. I argue against Block (...)
  11. Ned Block (1981). Psychologism and Behaviorism. Philosophical Review 90 (1):5-43.
    Let psychologism be the doctrine that whether behavior is intelligent behavior depends on the character of the internal information processing that produces it. More specifically, I mean psychologism to involve the doctrine that two systems could have actual and potential behavior _typical_ of familiar intelligent beings, that the two systems could be exactly alike in their actual and potential behavior, and in their behavioral dispositions and capacities and counterfactual behavioral properties (i.e., what behaviors, behavioral dispositions, and behavioral capacities they would (...)
  12. Paul Richard Blum, Michael Polanyi: Can the Mind Be Represented by a Machine? Existence and Anthropology.
    On the 27th of October, 1949, the Department of Philosophy at the University of Manchester organized a symposium "Mind and Machine", as Michael Polanyi noted in his Personal Knowledge (1974, p. 261). This event is known, especially among scholars of Alan Turing, but it is scarcely documented. Wolfe Mays (2000) reported about the debate, which he personally had attended, and paraphrased a mimeographed document that is preserved at the Manchester University archive. He forwarded a copy to Andrew Hodges and B. (...)
  13. Selmer Bringsjord (2010). Meeting Floridi's Challenge to Artificial Intelligence From the Knowledge-Game Test for Self-Consciousness. Metaphilosophy 41 (3):292-312.
    Abstract: In the course of seeking an answer to the question "How do you know you are not a zombie?" Floridi (2005) issues an ingenious, philosophically rich challenge to artificial intelligence (AI) in the form of an extremely demanding version of the so-called knowledge game (or "wise-man puzzle," or "muddy-children puzzle")—one that purportedly ensures that those who pass it are self-conscious. In this article, on behalf of (at least the logic-based variety of) AI, I take up the challenge—which is to (...)
  14. Selmer Bringsjord (2000). Animals, Zombanimals, and the Total Turing Test: The Essence of Artificial Intelligence. Journal of Logic Language and Information 9 (4):397-418.
    Alan Turing devised his famous test (TT) through a slight modification of the parlor game in which a judge tries to ascertain the gender of two people who are only linguistically accessible. Stevan Harnad has introduced the Total TT, in which the judge can look at the contestants in an attempt to determine which is a robot and which a person. But what if we confront the judge with an animal, and a robot striving to pass for one, and then challenge him to peg which is which? (...)
  15. Selmer Bringsjord, P. Bello & David A. Ferrucci (2001). Creativity, the Turing Test, and the (Better) Lovelace Test. Minds and Machines 11 (1):3-27.
  16. Selmer Bringsjord, Clarke Caporale & Ron Noel (2000). Animals, Zombanimals, and the Total Turing Test. Journal of Logic, Language and Information 9 (4):397-418.
    Alan Turing devised his famous test (TT) through a slight modification of the parlor game in which a judge tries to ascertain the gender of two people who are only linguistically accessible. Stevan Harnad has introduced the Total TT, in which the judge can look at the contestants in an attempt to determine which is a robot and which a person. But what if we confront the judge with an animal, and a robot striving to pass for one, and then challenge him to peg which (...)
  17. Anthony Chemero, Ascribing Moral Value and the Embodied Turing Test.
    What would it take for an artificial agent to be treated as having moral value? As a first step toward answering this question, we ask what it would take for an artificial agent to be capable of the sort of autonomous, adaptive social behavior that is characteristic of the animals that humans interact with. We propose that this sort of capacity is best measured by what we call the Embodied Turing Test. The Embodied Turing test is a test in which (...)
  18. Thomas W. Clark (1992). The Turing Test as a Novel Form of Hermeneutics. International Studies in Philosophy 24 (1):17-31.
  19. B. Jack Copeland (2000). The Turing Test. Minds and Machines 10 (4):519-539.
    Turing's test has been much misunderstood. Recently unpublished material by Turing casts fresh light on his thinking and dispels a number of philosophical myths concerning the Turing test. Properly understood, the Turing test withstands objections that are popularly believed to be fatal.
  20. Tyler Cowen & Michelle Dawson, What Does the Turing Test Really Mean? And How Many Human Beings (Including Turing) Could Pass?
    The so-called Turing test, as it is usually interpreted, sets a benchmark standard for determining when we might call a machine intelligent. We can call a machine intelligent if the following is satisfied: if a group of wise observers were conversing with a machine through an exchange of typed messages, those observers could not tell whether they were talking to a human being or to a machine. To pass the test, the machine has to be intelligent but it also should (...)
  21. C. Crawford (1994). Notes on the Turing Test. Communications of the Association for Computing Machinery 37 (June):13-15.
  22. L. Crockett (1994). The Turing Test and the Frame Problem: AI's Mistaken Understanding of Intelligence. Ablex.
    I have discussed the frame problem and the Turing test at length, but I have not attempted to spell out what I think the implications of the frame problem ...
  23. Jamie Cullen (2009). Imitation Versus Communication: Testing for Human-Like Intelligence. Minds and Machines 19 (2):237-254.
    Turing’s Imitation Game is often viewed as a test for theorised machines that could ‘think’ and/or demonstrate ‘intelligence’. However, contrary to Turing’s apparent intent, it can be shown that Turing’s Test is essentially a test for humans only. Such a test does not provide for theorised artificial intellects with human-like, but not human-exact, intellectual capabilities. As an attempt to bypass this limitation, I explore the notion of shifting the goal posts of the Turing Test, and related tests such as the (...)
  24. Donald Davidson (1990). Turing's Test. In K. Said (ed.), Modelling the Mind. Oxford University Press.
  25. Aurea Anguera de Sojo, Juan Ares, Juan A. Lara, David Lizcano, María A. Martínez & Juan Pazos (2013). Turing and the Serendipitous Discovery of the Modern Computer. Foundations of Science 18 (3):545-557.
    In the centenary year of Turing’s birth, a lot of good things are sure to be written about him. But it is hard to find something new to write about Turing. This is the biggest merit of this article: it shows how von Neumann’s architecture of the modern computer is a serendipitous consequence of the universal Turing machine, built to solve a logical problem.
  26. Daniel C. Dennett (1984). Can Machines Think? In M. G. Shafto (ed.), How We Know. Harper & Row.
  27. Adam Drozdek (2001). Descartes' Turing Test. Epistemologia 24 (1):5-29.
  28. Adam Drozdek (1998). Human Intelligence and Turing Test. AI and Society 12 (4):315-321.
  29. B. Edmonds (2000). The Constructibility of Artificial Intelligence (as Defined by the Turing Test). Journal of Logic, Language and Information 9 (4):419-424.
    The Turing Test (TT), as originally specified, centres on the ability to perform a social role. The TT can be seen as a test of an ability to enter into normal human social dynamics. In this light it seems unlikely that such an entity can be wholly designed in an off-line mode; rather a considerable period of training in situ would be required. The argument that since we can pass the TT, and our cognitive processes might be implemented as a Turing Machine (TM), that consequently a (...)
  30. Bruce Edmonds (2000). The Constructability of Artificial Intelligence (as Defined by the Turing Test). Journal of Logic Language and Information 9 (4):419-424.
    The Turing Test (TT), as originally specified, centres on the ability to perform a social role. The TT can be seen as a test of an ability to enter into normal human social dynamics. In this light it seems unlikely that such an entity can be wholly designed in an off-line mode; rather a considerable period of training in situ would be required. The argument that since we can pass the TT, and our cognitive processes might be implemented as a Turing Machine (TM), that consequently (...)
  31. Gerald J. Erion (2001). The Cartesian Test for Automatism. Minds and Machines 11 (1):29-39.
    In Part V of his Discourse on the Method, Descartes introduces a test for distinguishing people from machines that is similar to the one proposed much later by Alan Turing. The Cartesian test combines two distinct elements that Keith Gunderson has labeled the language test and the action test. Though traditional interpretation holds that the action test attempts to determine whether an agent is acting upon principles, I argue that the action test is best (...)
  32. Luciano Floridi (2005). Consciousness, Agents and the Knowledge Game. Minds and Machines 15 (3):415-444.
    This paper has three goals. The first is to introduce the “knowledge game”, a new, simple and yet powerful tool for analysing some intriguing philosophical questions. The second is to apply the knowledge game as an informative test to discriminate between conscious (human) and conscious-less agents (zombies and robots), depending on which version of the game they can win. And the third is to use a version of the knowledge game to provide an answer to Dretske’s question “how do you (...)
  33. Luciano Floridi & Mariarosaria Taddeo (2009). Turing's Imitation Game: Still an Impossible Challenge for All Machines and Some Judges––an Evaluation of the 2008 Loebner Contest. [REVIEW] Minds and Machines 19 (1):145-150.
    An evaluation of the 2008 Loebner contest.
  34. Luciano Floridi, Mariarosaria Taddeo & Matteo Turilli (2008). Turing’s Imitation Game: Still an Impossible Challenge for All Machines and Some Judges. Minds and Machines 19 (1):145-150.
    An Evaluation of the 2008 Loebner Contest.
  35. Robert French (2000). The Turing Test: The First Fifty Years. Trends in Cognitive Sciences 4 (3):115-121.
    The Turing Test, originally proposed as a simple operational definition of intelligence, has now been with us for exactly half a century. It is safe to say that no other single article in computer science, and few other articles in science in general, have generated so much discussion. The present article chronicles the comments and controversy surrounding Turing's classic article from its publication to the present. The changing perception of the Turing Test over the last fifty years has (...)
  36. Robert French (1996). The Inverted Turing Test: How a Mindless Program Could Pass It. Psycoloquy 7 (39).
    This commentary attempts to show that the inverted Turing Test (Watt 1996) could be simulated by a standard Turing test and, most importantly, claims that a very simple program with no intelligence whatsoever could be written that would pass the inverted Turing test. For this reason, the inverted Turing test in its present form must be rejected.
  37. Robert M. French (2000). Peeking Behind the Screen: The Unsuspected Power of the Standard Turing Test. Journal of Experimental and Theoretical Artificial Intelligence 12 (3):331-340.
    No computer that had not experienced the world as we humans had could pass a rigorously administered standard Turing Test. We show that the use of “subcognitive” questions allows the standard Turing Test to indirectly probe the human subcognitive associative concept network built up over a lifetime of experience with the world. Not only can this probing reveal differences in cognitive abilities, but crucially, even differences in _physical aspects_ of the candidates can be detected. Consequently, it is unnecessary (...)
  38. Robert M. French (1995). Refocusing the Debate on the Turing Test: A Response. Behavior and Philosophy 23 (1):59-60.
  39. Robert M. French (1990). Subcognition and the Limits of the Turing Test. Mind 99 (393):53-66.
  40. B. de Gelder (ed.) (1982). Knowledge and Representation. Routledge & Kegan Paul.
  41. Judith Genova (1994). Turing's Sexual Guessing Game. Social Epistemology 8 (4):313 – 326.
  42. Keith Gunderson (1964). The Imitation Game. Mind 73 (April):234-45.
  43. Stevan Harnad (2006). The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence. In Robert Epstein & G. Peters (eds.), [Book Chapter] (in Press). Kluwer.
    This quote/commented critique of Turing's classical paper suggests that Turing meant -- or should have meant -- the robotic version of the Turing Test (and not just the email version). Moreover, any dynamic system (that we design and understand) can be a candidate, not just a computational one. Turing also dismisses the other-minds problem and the mind/body problem too quickly. They are at the heart of both the problem he is addressing and the solution he is proposing.
  44. Stevan Harnad (2006). The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence. In Robert Epstein & Grace Peters (eds.), [Book Chapter] (in Press). Kluwer.
    This quote/commented critique of Turing's classical paper suggests that Turing meant -- or should have meant -- the robotic version of the Turing Test (and not just the email version). Moreover, any dynamic system (that we design and understand) can be a candidate, not just a computational one. Turing also dismisses the other-minds problem and the mind/body problem too quickly. They are at the heart of both the problem he is addressing and the solution he is proposing.
  45. Stevan Harnad (2005). Distributed Processes, Distributed Cognizers and Collaborative Cognition. [Journal (Paginated)] (in Press) 13 (3):01-514.
    Cognition is thinking; it feels like something to think, and only those who can feel can think. There are also things that thinkers can do. We know neither how thinkers can think nor how they are able to do what they can do. We are waiting for cognitive science to discover how. Cognitive science does this by testing hypotheses about what processes can generate what doing ("know-how"). This is called the Turing Test. It cannot test whether a process can generate feeling, (...)
  46. Stevan Harnad (1999). Turing on Reverse-Engineering the Mind. Journal of Logic, Language, and Information.
  47. Stevan Harnad (1995). Thoughts as Activation Vectors in Recurrent Nets, or Concentric Epicenters, Or.. Http.
    Churchland underestimates the power and purpose of the Turing Test, dismissing it as the trivial game to which the Loebner Prize (offered for the computer program that can fool judges into thinking it's human) has reduced it, whereas it is really an exacting empirical criterion: It requires that the candidate model for the mind have our full behavioral capacities -- so fully that it is indistinguishable from any of us, to any of us (not just for one Contest night, but (...)
  48. Stevan Harnad (1995). Does Mind Piggyback on Robotic and Symbolic Capacity? In H. Morowitz & J. Singer (eds.), The Mind, the Brain, and Complex Adaptive Systems. Addison Wesley.
    Cognitive science is a form of "reverse engineering" (as Dennett has dubbed it). We are trying to explain the mind by building (or explaining the functional principles of) systems that have minds. A "Turing" hierarchy of empirical constraints can be applied to this task, from t1, toy models that capture only an arbitrary fragment of our performance capacity, to T2, the standard "pen-pal" Turing Test (total symbolic capacity), to T3, the Total Turing Test (total symbolic plus robotic capacity), to T4 (...)
  49. Stevan Harnad (1994). Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life. Artificial Life 1 (3):93-301.
    Both Artificial Life and Artificial Mind are branches of what Dennett has called "reverse engineering": Ordinary engineering attempts to build systems to meet certain functional specifications, reverse bioengineering attempts to understand how systems that have already been built by the Blind Watchmaker work. Computational modelling (virtual life) can capture the formal principles of life, perhaps predict and explain it completely, but it can no more be alive than a virtual forest fire can be hot. In itself, a computational model is (...)
  50. Stevan Harnad (1992). The Turing Test is Not a Trick: Turing Indistinguishability is a Scientific Criterion. 3 (4):9-10.
    It is important to understand that the Turing Test (TT) is not, nor was it intended to be, a trick; how well one can fool someone is not a measure of scientific progress. The TT is an empirical criterion: It sets AI's empirical goal to be to generate human-scale performance capacity. This goal will be met when the candidate's performance is totally indistinguishable from a human's. Until then, the TT simply represents what it is that AI must endeavor eventually (...)