
The Externalist Foundations of a Truly Total Turing Test

Minds and Machines

Abstract

The paper begins by examining the original Turing Test (2T) and Searle’s antithetical Chinese Room Argument, which is intended to refute the 2T in particular, as well as any formal or abstract procedural theory of the mind in general. In the ensuing dispute between Searle and his own critics, I argue that Searle’s ‘internalist’ strategy is unable to deflect Dennett’s combined robotic-systems reply and the allied Total Turing Test (3T). Many would hold that the 3T marks the culmination of the dialectic and, in principle, constitutes a fully adequate empirical standard for judging that an artifact is intelligent on a par with human beings. However, the paper carries the debate forward by arguing that the sociolinguistic factors highlighted in externalist views in the philosophy of language indicate the need for a fundamental shift in perspective in a Truly Total Turing Test (4T). It’s not enough to focus on Dennett’s individual robot viewed as a system; instead, we need to focus on an ongoing system of such artifacts. Hence a 4T should evaluate the general category of cognitive organization under investigation, rather than the performance of single specimens. From this comprehensive standpoint, the question is not whether an individual instance could simulate intelligent behavior within the context of a pre-existing sociolinguistic culture developed by the human cognitive type. Instead the key issue is whether the artificial cognitive type itself is capable of producing a comparable sociolinguistic medium.


Notes

  1. See Harnad (2002) for a concise discussion and analysis of the CRA.

  2. Shieber (2007) provides a valiant and intriguing rehabilitation and defense of the 2T, but the 2T nonetheless remains a 'bed-ridden' standard that neglects crucial behavioral data, such as mastery of salient language exit and entry rules. Ultimately, Shieber's rehabilitation in terms of interactive proof requires accepting the premise that conversational input/response patterns alone are sufficient, which I would deny for the reasons given: the program is still operating within a closed syntactic bubble.

  3. Alternatively, Rapaport (2006) argues that human neuron firings are also just a form of uninterpreted syntax, so that what the homunculus Searle in the control room is doing is no different from what our brains do. And if our brains understand natural language, then there's no reason to deny this of Searle in the room, at least not just because all he's doing is manipulating uninterpreted syntax.

  4. For example, in (1984) and (1990) Searle makes some of the background machinery more explicit.

  5. This fact, in the context of the 2T, is also noted in, e.g., Copeland (2001).

  6. Interestingly, Turing considers the possibility that the best way to produce a machine able to pass the 2T might be to "follow the normal teaching of a child". However, when describing the 'child programme' he observes that "It will not be possible to apply exactly the same teaching process to the machine as to a normal child. It will not, for instance, be provided with legs …" indicating that Turing, at this point, is speculating about a learning program, not a genuine robot (although he does subsequently conjecture about engineering enhancements, which seem to anticipate the robotic 3T).

  7. This is perhaps comparable to a situation where unsuspecting earthlings crash-land their space ship on Twin Earth. On day one they will still mean H2O when they utter the term ‘water’, since that's the native interpretation of their language. But after they've lived on Twin Earth for some time, had sufficiently many interactions with environmental XYZ, and been integrated into their new sociolinguistic clan, they will enter a grey area, and it is plausible to hold that they will eventually become grounded in Twin Earth semantics and mean XYZ when they say ‘water’.

  8. A similar but less extreme point holds in regard to human members of different native NL groups. Since we are all members of the same cognitive type, the 4T is not an issue here. So a French person who diligently studied English as a second language in Paris and then came to London would presumably be able to understand enough English to pass an English 3T. And this is because the French person is a member of the French sociolinguistic community, and hence is already semantically grounded in a human NL, and can thereby understand English by first translating it into French. So one might be able to learn a language 'purely syntactically', but only through the pre-existence of a semantic foundation in some prior interpreted language. And clearly this is asymmetrical with the case of a newly assembled 3T robot. I would like to thank an anonymous reviewer for bringing this (and the Martian learning English case) to my attention as potential objections to my view.

  9. As with the 3T, the proposed test framework is quite futuristic (as even the original 2T has turned out to be), since the paper is not concerned with the practicalities of carrying out actual assessments, but rather with the operational standards that, in principle, are required to attain parity with the full range of data available in the human case. The evidence has taken tens of thousands of years to manifest itself, and we would have to somehow collapse this timeframe in order to apply the same standards to an artificial cognitive type. One possibility, sketched below, would be to use computer modelling to run ‘evolutionary’ scenarios at much faster than real time, where simulations of communities of the artificial agents could perhaps yield answers about long-term 4T capabilities.
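
To give a concrete flavour of that last suggestion, here is a minimal sketch of such a 'collapsed timeframe' community simulation. It runs a simple naming game in the style of Steels' language-game models; the agent count, interaction rule, and convergence measure are all illustrative assumptions of mine, not constructs from the paper.

```python
import random

N_AGENTS = 50      # size of the simulated artificial community (assumed)
N_OBJECTS = 5      # referents the agents must coordinate names for (assumed)
N_ROUNDS = 20000   # pairwise interactions, i.e. 'collapsed' evolutionary time

def new_word():
    """Coin a fresh arbitrary signal (uninterpreted syntax)."""
    return ''.join(random.choice('abcdefgh') for _ in range(4))

# Each agent maps each object to a set of candidate names.
agents = [{obj: set() for obj in range(N_OBJECTS)} for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    speaker, hearer = random.sample(agents, 2)
    obj = random.randrange(N_OBJECTS)
    if not speaker[obj]:               # speaker has no name yet: invent one
        speaker[obj].add(new_word())
    word = random.choice(sorted(speaker[obj]))
    if word in hearer[obj]:            # success: both collapse to the winner
        speaker[obj] = {word}
        hearer[obj] = {word}
    else:                              # failure: hearer adopts the new name
        hearer[obj].add(word)

# Convergence check: how widely is each object's dominant name shared?
for obj in range(N_OBJECTS):
    counts = {}
    for agent in agents:
        for word in agent[obj]:
            counts[word] = counts.get(word, 0) + 1
    name, holders = max(counts.items(), key=lambda kv: kv[1])
    print(f"object {obj}: {name!r} shared by {holders / N_AGENTS:.0%} of agents")
```

With these (arbitrary) parameters the population typically converges on a single shared name per object within a few thousand interactions. Whether such dynamics could scale from toy lexicon formation to anything like a full sociolinguistic medium is precisely the open question a 4T would have to address.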

References

  • Block, N. (1981). Psychologism and behaviorism. Philosophical Review, 90, 5–43.

  • Burge, T. (1979). Individualism and the mental. In P. French, T. Uehling, & H. Wettstein (Eds.), Midwest studies in philosophy, Vol. 4: Studies in epistemology. Minneapolis: University of Minnesota Press.

  • Copeland, B. J. (2001). The Turing test. Minds and Machines, 10, 519–539.

  • Dennett, D. (1980). The milk of human intentionality. Behavioral and Brain Sciences, 3, 428–430.

  • Fodor, J., & Pylyshyn, Z. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1–2), 3–71.

  • French, R. (2000). The Turing test: The first 50 years. Trends in Cognitive Sciences, 4, 115–122.

  • Harnad, S. (1991). Other bodies, other minds: A machine incarnation of an old philosophical problem. Minds and Machines, 1, 43–54.

  • Harnad, S. (2002). Minds, machines and Searle 2: What’s wrong and right about Searle’s Chinese room argument? In J. Preston & M. Bishop (Eds.), Views into the Chinese room: New essays on Searle and artificial intelligence (pp. 294–307). Oxford: Oxford University Press.

  • Kripke, S. (1972). Naming and necessity. Cambridge, MA: Harvard University Press.

  • McCarthy, J. (1955). A proposal for the Dartmouth Summer Research Project on Artificial Intelligence. http://www.formal.stanford.edu/jmc/history/dartmouth/dartmouth.html.

  • Newell, A., & Simon, H. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the Association for Computing Machinery, 19, 113–126.

  • Putnam, H. (1975). The meaning of ‘meaning’. In Mind, language and reality. Cambridge: Cambridge University Press.

  • Putnam, H. (1981). Brains in a vat. In Reason, truth and history (pp. 1–21). Cambridge: Cambridge University Press.

  • Rapaport, W. J. (2006). How Helen Keller used syntactic semantics to escape from a Chinese room. Minds and Machines, 16, 381–436.

  • Rey, G. (2002). Searle’s misunderstanding of functionalism and strong AI. In J. Preston & M. Bishop (Eds.), Views into the Chinese room: New essays on Searle and artificial intelligence (pp. 201–225). Oxford: Oxford University Press.

  • Schweizer, P. (1998). The truly total Turing test. Minds and Machines, 8, 263–272.

  • Searle, J. (1980). Minds, brains and programs. Behavioral and Brain Sciences, 3, 417–424.

  • Searle, J. (1984). Minds, brains and science. Cambridge, MA: Harvard University Press.

  • Searle, J. (1990). Consciousness, explanatory inversion and cognitive science. Behavioral and Brain Sciences, 13, 585–596.

  • Searle, J. (1994). The failures of computationalism. Think, 2, 68–71.

  • Shieber, S. (2007). The Turing test as interactive proof. Noûs, 41, 33–60.

  • Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460.

Author information

Correspondence to Paul Schweizer.

Cite this article

Schweizer, P. The Externalist Foundations of a Truly Total Turing Test. Minds & Machines 22, 191–212 (2012). https://doi.org/10.1007/s11023-012-9272-4
