This paper commences from the critical observation that the Turing Test (TT) might not be best read as providing a definition or a genuine test of intelligence by proxy of a simulation of conversational behaviour. Firstly, the idea of a machine producing likenesses of this kind served a different purpose for Turing, namely providing a demonstrative simulation to elucidate the force and scope of his computational method, whose primary theoretical import lies within the realm of mathematics rather than cognitive modelling. Secondly, it is argued that a certain bias in Turing's computational reasoning towards formalism and methodological individualism contributed to systematically unwarranted interpretations of the role of the TT as a simulation of cognitive processes. On the basis of the conceptual distinction in biology between structural homology and functional analogy, a view towards alternate versions of the TT is presented that could function as investigative simulations into the emergence of communicative patterns oriented towards shared goals. Unlike the original TT, the purpose of these alternate versions would be co-ordinative rather than deceptive. On this level, genuine functional analogies between human and machine behaviour could arise in quasi-evolutionary fashion.
Turing's test has been much misunderstood. Previously unpublished material by Turing casts fresh light on his thinking and dispels a number of philosophical myths concerning the Turing test. Properly understood, the Turing test withstands objections that are popularly believed to be fatal.
The claim has often been made that passing the Turing Test would not be sufficient to prove that a computer program was intelligent because a trivial program could do it, namely, the "Humongous-Table (HT) Program", which simply looks up in a table what to say next. This claim is examined in detail. Three ground rules are argued for: (1) that the HT program must be exhaustive, and not based on some vaguely imagined set of tricks; (2) that the HT program must not be created by some set of sentient beings enacting responses to all possible inputs; (3) that in the current state of cognitive science it must be an open possibility that a computational model of the human mind will be developed that accounts for at least its nonphenomenological properties. Given ground rule 3, the HT program could simply be an "optimized" version of some computational model of a mind, created via the automatic application of program-transformation rules [thus satisfying ground rule 2]. Therefore, whatever mental states one would be willing to impute to an ordinary computational model of the human psyche one should be willing to grant to the optimized version as well. Hence no one could dismiss out of hand the possibility that the HT program was intelligent. This conclusion is important because the Humongous-Table Program Argument is the only argument ever marshalled against the sufficiency of the Turing Test, if we exclude arguments that cognitive science is simply not possible.
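The table-lookup idea the abstract describes can be made concrete. The following is a purely illustrative sketch: the dialogue keys and canned replies are invented here, and a genuine HT program would need an entry for every possible conversation prefix, which is exactly what makes it "humongous".

```python
# Toy sketch of the Humongous-Table idea: the program's entire
# "intelligence" is a table mapping the full conversation history
# so far to the next reply. The entries below are invented; a real
# HT program would cover every possible input sequence.

TABLE = {
    (): "Hello.",
    ("Hello.", "How are you?"): "Fine, thanks. And you?",
    ("Hello.", "How are you?", "Fine, thanks. And you?", "Great."): "Glad to hear it.",
}

def ht_reply(history):
    """Look up the next utterance; the tuple of all prior turns is the key."""
    return TABLE.get(tuple(history), "I'm not sure what to say.")

history = []
for user_turn in ["How are you?", "Great."]:
    if not history:
        history.append(ht_reply(history))  # program opens the conversation
    history.append(user_turn)
    history.append(ht_reply(history))
```

Note that nothing here computes: the program's behaviour is exhausted by its table, which is the premise the ground rules above are meant to discipline.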
I advocate a theory of syntactic semantics as a way of understanding how computers can think (and how the Chinese-Room-Argument objection to the Turing Test can be overcome): (1) Semantics, considered as the study of relations between symbols and meanings, can be turned into syntax, a study of relations among symbols (including meanings), and hence syntax (i.e., symbol manipulation) can suffice for the semantical enterprise (contra Searle). (2) Semantics, considered as the process of understanding one domain (by modeling it) in terms of another, can be viewed recursively: the base case of semantic understanding, understanding a domain in terms of itself, is syntactic understanding. (3) An internal (or narrow), first-person point of view makes an external (or wide), third-person point of view otiose for purposes of understanding cognition.
The main factor of intelligence is defined as the ability to comprehend, formalising this ability with the help of new constructs based on descriptional complexity. The result is a comprehension test, or C-test, which is exclusively defined in computational terms. Due to its absolute and non-anthropomorphic character, it is equally applicable to both humans and non-humans. Moreover, it correlates with classical psychometric tests, thus establishing the first firm connection between information-theoretical notions and traditional IQ tests. The Turing Test is compared with the C-test and the combination of the two is questioned. In consequence, the idea of using the Turing Test as a practical test of intelligence should be surpassed, and substituted by computational and factorial tests of different cognitive abilities, a much more useful approach for artificial intelligence progress and for many other intriguing questions that present themselves beyond the Turing Test.
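Since the abstract leans on descriptional complexity, a toy illustration may help. The sketch below uses zlib's compressed length as a very crude, computable stand-in for descriptional complexity and scores candidate continuations of a sequence by how regular they keep the completed string. This is only an intuition pump, not the C-test's actual measure, which is defined via time-bounded variants of Kolmogorov complexity.

```python
import zlib

def complexity(s: str) -> int:
    # Compressed length as a rough proxy for how much "description"
    # the string needs; highly regular strings score low.
    return len(zlib.compress(s.encode()))

def best_continuation(prefix: str, candidates):
    # A C-test-style item rewards the continuation that keeps the
    # completed sequence maximally regular (most compressible).
    return min(candidates, key=lambda c: complexity(prefix + c))

print(best_continuation("ab" * 30, ["ab", "zq"]))  # prints: ab
```

The point of the illustration is the abstract's "absolute and non-anthropomorphic character": nothing in the scoring function refers to human judges or human conversation.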
Stuart M. Shieber's name is well known to computational linguists for his research and to computer scientists more generally for his debate on the Loebner Turing Test competition, which appeared a decade earlier in Communications of the ACM. With this collection, I expect it to become equally well known to philosophers.
The Turing Test is one of the most disputed topics in artificial intelligence, philosophy of mind, and cognitive science. This paper is a review of the past 50 years of the Turing Test. Philosophical debates, practical developments and repercussions in related disciplines are all covered. We discuss Turing's ideas in detail and present the important comments that have been made on them. Within this context, behaviorism, consciousness, the 'other minds' problem, and similar topics in philosophy of mind are discussed. We also cover the sociological and psychological aspects of the Turing Test. Finally, we look at the current situation and analyze programs that have been developed with the aim of passing the Turing Test. We conclude that the Turing Test has been, and will continue to be, an influential and controversial topic.
This paper discusses some difficulties in understanding the Turing test. It emphasizes the importance of distinguishing between conceptual and empirical perspectives and highlights the former as introducing more serious problems for the TT. Some objections against the Turingian framework stemming from the later Wittgenstein's philosophy are set out. The following serious problems are examined: 1) the test adopts a unique and exclusive criterion for thinking, which amounts to identifying the two; 2) it misidentifies the relationship of speaking to thinking as that of a criterion; 3) it neglects the "natural" course of development in semantics. However, these considerations suggest only that it is problematic to label a successful chatbot a "thinking entity" without further qualifications, not that doing so is necessarily and once and for all incorrect. Philosophy has only little to say about the technical possibility of creating such an effective program.
The standard interpretation of the imitation game is defended over the rival gender interpretation, though it is noted that Turing himself proposed several variations of his imitation game. The Turing test is then justified as an inductive test, not as an operational definition as commonly suggested. Turing's famous prediction about his test being passed at the 70% level is disconfirmed by the results of the Loebner 2000 contest and the absence of any serious Turing test competitors from AI on the horizon. But reports of the death of the Turing test and AI are premature. AI continues to flourish and the test continues to play an important philosophical role in AI. Intelligence-attribution, methodological, and visionary arguments are given in defense of a continuing role for the Turing test. With regard to Turing's predictions, one is disconfirmed, one is confirmed, but another is still outstanding.
The Turing Test (TT), as originally specified, centres on the ability to perform a social role. The TT can be seen as a test of an ability to enter into normal human social dynamics. In this light it seems unlikely that such an entity could be wholly designed in an off-line mode; rather, a considerable period of training in situ would be required. The argument that since we can pass the TT, and our cognitive processes might be implemented as a Turing Machine (TM), consequently a TM that could pass the TT could be built, is attacked on the grounds that not all TMs are constructible in a planned way. This observation points towards the importance of developmental processes that use random elements (e.g., evolution), but in these cases it becomes problematic to call the result artificial. This has implications for the means by which intelligent agents could be developed.
This paper argues that the Turing test is based on a fixed and de-contextualized view of communicative competence. According to this view, a machine that passes the test will be able to communicate effectively in a variety of other situations. But the de-contextualized view ignores the relationship between language and social context, or, to put it another way, the extent to which speakers respond dynamically to variations in discourse function, formality level, social distance/solidarity among participants, and participants' relative degrees of power and status. In the case of the Loebner Contest, a present-day version of the Turing test, the social context of interaction can be interpreted in conflicting ways. For example, Loebner discourse is defined 1) as a friendly, casual conversation between two strangers of equal power, and 2) as a one-way transaction in which judges control the conversational floor in an attempt to expose contestants that are not human. This conflict in discourse function is irrelevant so long as the goal of the contest is to ensure that only thinking, human entities pass the test. But if the function of Loebner discourse is to encourage the production of software that can pass for human on the level of conversational ability, then the contest designers need to resolve this ambiguity in discourse function, and thus also come to terms with the kind of competence they are trying to measure.
My aim in this paper is to apply a formal approach to the Turing test. This approach is based on a tool developed within Inferential Erotetic Logic, so-called erotetic search scenarios. First, I reconstruct the setting of the Turing test proposed by A. M. Turing. On this basis, I build a model of the test using the erotetic search scenarios framework. I use the model to investigate one of the most interesting issues of the TT setting: the interrogator's perspective and role in the test.
Alan Turing devised his famous test (TT) through a slight modification of the parlor game in which a judge tries to ascertain the gender of two people who are only linguistically accessible. Stevan Harnad has introduced the Total TT (TTT), in which the judge can look at the contestants in an attempt to determine which is a robot and which a person. But what if we confront the judge with an animal, and a robot striving to pass for one, and then challenge him to peg which is which? Now we can index the TTT to a particular animal and its synthetic correlate. We might therefore have a TTT-rat, TTT-cat, TTT-dog, and so on. These tests, as we explain herein, are a better barometer of artificial intelligence (AI) than Turing's original TT, because AI seems to have ammunition sufficient only to reach the level of artificial animal, not artificial person.
The test Turing proposed for machine intelligence is usually understood to be a test of whether a computer can fool a human into thinking that the computer is a human. This standard interpretation is rejected in favor of a test based on the Imitation Game introduced by Turing at the beginning of "Computing Machinery and Intelligence".
The paper examines the nature of the behavioral evidence underlying attributions of intelligence in the case of human beings, and how this might be extended to other kinds of cognitive system, in the spirit of the original Turing Test. I consider Harnad's Total Turing Test (TTT), which involves successful performance of both linguistic and robotic behavior, and which is often thought to incorporate the very same range of empirical data that is available in the human case. However, I argue that the TTT is still too weak, because it only tests the capabilities of particular tokens within a preexisting context of intelligent behavior. What is needed is a test of the cognitive type, as manifested through a number of exemplary tokens, in order to confirm that the cognitive type is able to produce the context of intelligent behavior presupposed by tests such as the TT and TTT.
Some of the papers in this special issue distribute cognition between what is going on inside individual cognizers' heads and their outside worlds; others distribute cognition among different individual cognizers. Turing's criterion for cognition was individual, autonomous input/output capacity. It is not clear that distributed cognition could pass the Turing Test.
If, as a number of writers have predicted, the computers of the future will possess intelligence and capacities that exceed our own, then it seems as though they will be worthy of a moral respect at least equal to, and perhaps greater than, human beings. In this paper I propose a test to determine when we have reached that point. Inspired by Alan Turing's (1950) original "Turing test", which held that we would be justified in conceding that machines could think if they could fill the role of a person in a conversation, I propose a test for when computers have achieved moral standing by asking when a computer might take the place of a human being in a moral dilemma, such as a "triage" situation in which a choice must be made as to which of two human lives to save. We will know that machines have achieved moral standing comparable to a human when the replacement of one of these people with an artificial intelligence leaves the character of the dilemma intact. That is, when we might sometimes judge that it is reasonable to preserve the continuing existence of a machine over the life of a human being. This is the "Turing Triage Test". I argue that if personhood is understood as a matter of possessing a set of important cognitive capacities then it seems likely that future AIs will be able to pass this test. However, this conclusion serves as a reductio of this account of the nature of persons. I set out an alternative account of the nature of persons, which places the concept of a person at the centre of an interdependent network of moral and affective responses, such as remorse, grief and sympathy. I argue that according to this second, superior, account of the nature of persons, machines will be unable to pass the Turing Triage Test until they possess bodies and faces with expressive capacities akin to those of the human form.
Proceedings of the papers presented at the Symposium "Revisiting Turing and his Test: Comprehensiveness, Qualia, and the Real World" at the 2012 AISB and IACAP Symposium, held in the Turing year 2012, 2–6 July at the University of Birmingham, UK. Ten papers. - http://www.pt-ai.org/turing-test --- Daniel Devatman Hromada: From Taxonomy of Turing Test-Consistent Scenarios Towards Attribution of Legal Status to Meta-modular Artificial Autonomous Agents — Michael Zillich: My Robot is Smarter than Your Robot: On the Need for a Total Turing Test for Robots — Adam Linson, Chris Dobbyn and Robin Laney: Interactive Intelligence: Behaviour-based AI, Musical HCI and the Turing Test — Javier Insa, Jose Hernandez-Orallo, Sergio España, David Dowe and M. Victoria Hernandez-Lloreda: The anYnt Project Intelligence Test (Demo) — Jose Hernandez-Orallo, Javier Insa, David Dowe and Bill Hibbard: Turing Machines and Recursive Turing Tests — Francesco Bianchini and Domenica Bruni: What Language for the Turing Test in the Age of Qualia? — Paul Schweizer: Could there be a Turing Test for Qualia? — Antonio Chella and Riccardo Manzotti: Jazz and Machine Consciousness: Towards a New Turing Test — William York and Jerry Swan: Taking Turing Seriously (But Not Literally) — Hajo Greif: Laws of Form and the Force of Function: Variations on the Turing Test.
Debunking two commonly held myths and fleshing out its dogma, this article deals with the Turing Test, one of the most famous and controversial methods to assess the existence of mental life in the Philosophy of Mind. Firstly, I show why Turing never gave a definition of intelligence. Secondly, I dispute claims that the Turing Test provides a necessary or sufficient condition of intelligence. Thirdly, in view of its aim and the sort of evidence it offers, I consider whether or not Turing's test can be regarded as a scientific experiment in light of Fodor's theory.
Finally, I argue that Turing is committed to a form of behaviourism and, further, confuses simulation (an epistemic process which, being governed by verisimilitude, is successful when someone is caused to believe that the computer is intelligent) with the duplication of intelligence qua property, which takes place at an ontological level. This confusion involves a dogma and explains why, despite being devised as the final solution to the dilemma of whether or not programmed machines think, the Turing Test has had precisely the opposite effect for longer than five decades, stimulating the philosophical discussion on the nature of mind.
The paper begins by examining the original Turing Test (2T) and Searle's antithetical Chinese Room Argument, which is intended to refute the 2T in particular, as well as any formal or abstract procedural theory of the mind in general. In the ensuing dispute between Searle and his critics, I argue that Searle's 'internalist' strategy is unable to deflect Dennett's combined robotic-systems reply and the allied Total Turing Test (3T). Many would hold that the 3T marks the culmination of the dialectic and, in principle, constitutes a fully adequate empirical standard for judging that an artifact is intelligent on a par with human beings. However, the paper carries the debate forward by arguing that the sociolinguistic factors highlighted in externalist views in the philosophy of language indicate the need for a fundamental shift in perspective in a Truly Total Turing Test (4T). It's not enough to focus on Dennett's individual robot viewed as a system; instead, we need to focus on an ongoing system of such artifacts. Hence a 4T should evaluate the general category of cognitive organization under investigation, rather than the performance of single specimens. From this comprehensive standpoint, the question is not whether an individual instance could simulate intelligent behavior within the context of a pre-existing sociolinguistic culture developed by the human cognitive type. Instead, the key issue is whether the artificial cognitive type itself is capable of producing a comparable sociolinguistic medium.
The Turing Test is a verbal-behavioral operational criterion of artificial intelligence. If a machine can participate in question-and-answer conversation adequately enough to deceive an intelligent interlocutor, then it has intelligent information-processing abilities. Robert M. French has argued that recent discoveries in cognitive science about subcognitive processes involving associational primings prove that the Turing Test cannot provide a satisfactory criterion of machine intelligence, that Turing's prediction concerning the feasibility of building machines to play the imitation game successfully is false, and that the test should be rejected as ethnocentric and incapable of measuring kinds and degrees of nonhuman intelligence. But French's criticism is flawed, because it requires Turing's sufficient conditional criterion of intelligence to serve as a necessary condition. Turing's test is defended against these objections, and French's claim that the test ought to be rejected because machines cannot pass it is deemed unscientific, resting on the empirically unwarranted assumption that intelligent machines are possible.
The Turing Test, originally proposed as a simple operational definition of intelligence, has now been with us for exactly half a century. It is safe to say that no other single article in computer science, and few other articles in science in general, have generated so much discussion. The present article chronicles the comments and controversy surrounding Turing's classic article from its publication to the present. The changing perception of the Turing Test over the last fifty years has paralleled the changing attitudes in the scientific community towards artificial intelligence: from the unbridled optimism of the 1960s to the current realization of the immense difficulties that still lie ahead. I conclude with the prediction that the Turing Test will remain important, not only as a landmark in the history of the development of intelligent machines, but also with real relevance to future generations of people living in a world in which the cognitive capacities of machines will be vastly greater than they are now.
In a recent study of a patient in a persistent vegetative state [Owen, A. M., Coleman, M. R., Boly, M., Davis, M. H., Laureys, S., & Pickard, J. D. Detecting awareness in the vegetative state. Science, 313, 1402], the authors claimed that they had demonstrated the presence of consciousness in this patient. This bold conclusion was based on the isomorphy between brain activity in this patient and a set of conscious control subjects, obtained in various imagery tasks. However, establishing consciousness in unresponsive patients is fraught with methodological and conceptual difficulties. The aim of this paper is to demonstrate that the current debate surrounding consciousness in VS patients has parallels in the artificial intelligence debate as to whether machines can think. Basically, Owen and colleagues used a method analogous to the Turing test to reveal the presence of consciousness, whereas their adversaries adopted a line of reasoning akin to Searle's Chinese room argument. Highlighting the correspondence between these two debates can help to clarify the issues surrounding consciousness in non-communicative agents.
In 1950, Alan Turing proposed his eponymous test based on indistinguishability of verbal behavior as a replacement for the question "Can machines think?" Since then, two mutually contradictory but well-founded attitudes towards the Turing Test have arisen in the philosophical literature. On the one hand is the attitude that has become philosophical conventional wisdom, viz., that the Turing Test is hopelessly flawed as a sufficient condition for intelligence, while on the other hand is the overwhelming sense that were a machine to pass a real live full-fledged Turing Test, it would be a sign of nothing but our orneriness to deny it the attribution of intelligence. The arguments against the sufficiency of the Turing Test for determining intelligence rely on showing that some extra conditions are logically necessary for intelligence beyond the behavioral properties exhibited by an agent under a Turing Test. Therefore, it cannot follow logically from passing a Turing Test that the agent is intelligent. I argue that these extra conditions can be revealed by the Turing Test, so long as we allow a very slight weakening of the criterion from one of logical proof to one of statistical proof under weak realizability assumptions. The argument depends on the notion of interactive proof developed in theoretical computer science, along with some simple physical facts that constrain the information capacity of agents. Crucially, the weakening is so slight as to make no conceivable difference from a practical standpoint. Thus, the Gordian knot between the two opposing views of the sufficiency of the Turing Test can be cut.
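The statistical weakening described above can be illustrated with the arithmetic that drives interactive proofs: if an unintelligent imposter can fool the judge in any single independent round with probability at most p, its chance of surviving n rounds decays exponentially. The numbers below are assumptions chosen for illustration, not figures from the paper.

```python
# Soundness-amplification arithmetic behind the "statistical proof"
# reading of the Turing Test: independent rounds multiply the
# imposter's per-round success probability.

def imposter_pass_bound(p: float, n: int) -> float:
    """Upper bound on the chance an imposter passes n independent rounds,
    given a per-round fooling probability of at most p."""
    return p ** n

# Even a persuasive imposter (p = 0.5 per round) is ruled out to
# astronomical confidence after 100 rounds:
print(imposter_pass_bound(0.5, 100))  # roughly 8e-31
```

This is the sense in which the weakening is "so slight as to make no conceivable difference from a practical standpoint": the residual doubt can be driven below any practically meaningful threshold.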
The so-called Turing test, as it is usually interpreted, sets a benchmark standard for determining when we might call a machine intelligent. We can call a machine intelligent if the following is satisfied: if a group of wise observers were conversing with a machine through an exchange of typed messages, those observers could not tell whether they were talking to a human being or to a machine. To pass the test, the machine not only has to be intelligent; it also should be responsive in a manner which cannot be distinguished from a human being. This standard interpretation presents the Turing test as a criterion for demarcating intelligent from non-intelligent entities. For a long time proponents of artificial intelligence have taken the Turing test as a goalpost for measuring progress.
After proposing the Turing Test, Alan Turing himself considered a number of objections to the idea that a machine might eventually pass it. One of the objections discussed by Turing was that no machine will ever pass the Turing Test because no machine will ever "have as much diversity of behaviour as a man". He responded as follows: the "criticism that a machine cannot have much diversity of behaviour is just a way of saying that it cannot have much storage capacity". I shall argue that the objection cannot be dismissed so easily. The diversity exhibited by human behaviour is characterized by a kind of context-sensitive adaptive plasticity. Most of the time, human beings flexibly and fluently respond to what is relevant in a given situation. Moreover, ordinary human life involves an open-ended flow of shifting contexts to which our behaviour typically adapts in real time. For a machine to "have as much diversity of behaviour as a man" would be for that machine to keep its responses and behaviour relevant within such a flow. Merely giving a machine the capacity to store a huge amount of information and an enormous number of behaviour-generating rules will not achieve this goal. By drawing on arguments presented originally by Descartes, and by making contact with the frame problem in artificial intelligence, I shall argue that the distinctive context-sensitive adaptive plasticity of human behaviour explains why the Turing Test is such a stringent test for the presence of thought, and why it is much harder to pass than Turing himself may have realized.
No computer that had not experienced the world as we humans have could pass a rigorously administered standard Turing Test. We show that the use of "subcognitive" questions allows the standard Turing Test to indirectly probe the human subcognitive associative concept network built up over a lifetime of experience with the world. Not only can this probing reveal differences in cognitive abilities, but crucially, even differences in physical aspects of the candidates can be detected. Consequently, it is unnecessary to propose even harder versions of the Test in which all physical and behavioral aspects of the two candidates had to be indistinguishable before allowing the machine to pass the Test. Any machine that passed the "simpler" symbols-in/symbols-out test as originally proposed by Turing would be intelligent. The problem is that, even in its original form, the Turing Test is already too hard and too anthropocentric for any machine that was not a physical, social, and behavioral carbon copy of ourselves to actually pass it. Consequently, the Turing Test, even in its standard version, is not a reasonable test for general machine intelligence. There is no need for an even stronger version of the Test.
What would it take for an artificial agent to be treated as having moral value? As a first step toward answering this question, we ask what it would take for an artificial agent to be capable of the sort of autonomous, adaptive social behavior that is characteristic of the animals that humans interact with. We propose that this sort of capacity is best measured by what we call the Embodied Turing Test, a test in which intelligence is operationally defined in terms of autonomous, adaptive interaction with the environment and with other animals. Three versions of the Embodied Turing Test were performed with a SONY AIBO robot. Human participants were asked to differentiate between AIBO in a human-controlled mode and AIBO in a software-controlled mode. Our results indicate that the human participants were guessing at how AIBO was controlled. Our data reveal that people do not have enough experience with robots to accurately evaluate their behavior. This indicates that today's humans do not have enough experience with artificial agents to treat them as morally valuable.
Why did the plan of using zombie manufacture as a means of studying consciousness ever seem plausible? Why does it impress so many people today? The immediate reason surely lies in fascination with the Turing Test: the suggestion that computer programs would be proved to be conscious if they managed to carry on conversations in a way that made them seem conscious to a naive observer.
It is important to understand that the Turing Test is not, nor was it intended to be, a trick; how well one can fool someone is not a measure of scientific progress. The TT is an empirical criterion: it sets AI's empirical goal to be to generate human-scale performance capacity. This goal will be met when the candidate's performance is totally indistinguishable from a human's. Until then, the TT simply represents what it is that AI must endeavor eventually to accomplish scientifically.
This target article argues that the Turing test implicitly rests on a "naive psychology", a naturally evolved psychological faculty which is used to predict and understand the behaviour of others in complex societies. This natural faculty is an important and implicit bias in the observer's tendency to ascribe mentality to the system in the test. The paper analyses the effects of this naive psychology on the Turing test, both from the side of the system and the side of the observer, and then proposes and justifies an inverted version of the test which allows the processes of ascription to be analysed more directly than in the standard version.
In this paper, we look at the possibility of a machine having a sense of humour. In particular, we focus on actual machine utterances in Turing test discourses. In doing so, we do not consider the Turing test in depth and what this might mean for humanity; rather, we merely look at cases in conversations when the output from a machine can be considered to be humorous. We link such outpourings with Turing's "arguments from various disabilities", used against the concept of a machine being able to think, taken from his seminal work of 1950. Finally, we consider the role that humour might play in adding to the deception, integral to the Turing test, that a machine in practice appears to be a human.
This commentary attempts to show that the inverted Turing test could be simulated by a standard Turing test and, most importantly, claims that a very simple program with no intelligence whatsoever could be written that would pass the inverted Turing test. For this reason, the inverted Turing test in its present form must be rejected.
Robert French has argued that a disembodied computer is incapable of passing a Turing Test that includes subcognitive questions. Subcognitive questions are designed to probe the network of cultural and perceptual associations that humans naturally develop as we live, embodied and embedded in the world. In this paper, I show how it is possible for a disembodied computer to answer subcognitive questions appropriately, contrary to French's claim. My approach to answering subcognitive questions is to use statistical information extracted from a very large collection of text. In particular, I show how it is possible to answer a sample of subcognitive questions taken from French, by issuing queries to a search engine that indexes about 350 million Web pages. This simple algorithm may shed light on the nature of human cognition, but the scope of this paper is limited to demonstrating that French is mistaken: a disembodied computer can answer subcognitive questions.
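The statistical strategy can be sketched with a pointwise-mutual-information-style score: a phrase "sounds right" to the extent that its corpus frequency far exceeds what the independent frequencies of its words would predict. The hit counts below are invented for illustration only; the paper's actual method issued live queries to a Web search engine and used the real counts it returned.

```python
# Sketch of scoring phrase naturalness from corpus hit counts, in the
# spirit of answering subcognitive questions with Web statistics.
# All counts here are hypothetical.
import math

HITS = {  # hypothetical hit counts for words and phrases
    "banana": 900_000,
    "split": 400_000,
    "wall": 700_000,
    "banana split": 60_000,
    "banana wall": 40,
}
TOTAL = 350_000_000  # pages indexed, matching the figure quoted above

def association(phrase: str) -> float:
    """PMI-style score: log-ratio of observed co-occurrence to the
    rate that word-independence would predict."""
    w1, w2 = phrase.split()
    p_joint = HITS[phrase] / TOTAL
    p_indep = (HITS[w1] / TOTAL) * (HITS[w2] / TOTAL)
    return math.log2(p_joint / p_indep)

# "banana split" should look far more conventional than "banana wall":
print(association("banana split") > association("banana wall"))  # prints: True
```

A positive score marks a collocation seen far more often than chance; a negative one marks a combination the corpus almost never produces, which is the disembodied stand-in for the "feel" that subcognitive questions probe.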