
Imitation Versus Communication: Testing for Human-Like Intelligence

Published in Minds and Machines

Abstract

Turing’s Imitation Game is often viewed as a test for theorised machines that could ‘think’ and/or demonstrate ‘intelligence’. However, contrary to Turing’s apparent intent, it can be shown that Turing’s Test is essentially a test for humans only. Such a test does not provide for theorised artificial intellects with human-like, but not human-exact, intellectual capabilities. As an attempt to bypass this limitation, I explore the notion of shifting the goal posts of the Turing Test, and of related tests such as the Total Turing Test, away from the exact imitation of human capabilities and towards communication with humans instead. While the continued philosophical relevance of such tests is open to debate, the outcome is a different class of tests which are, unlike the Turing Test, immune to failure by means of sub-cognitive questioning techniques. I suggest that attempting to instantiate such tests could be more scientifically and pragmatically relevant to some Artificial Intelligence researchers than instantiating a Turing Test, due to their focus on producing a variety of goal-directed outcomes through communicative methods, as opposed to the Turing Test’s emphasis on ‘fooling’ an Examiner.


Notes

  1. Note that Turing gave little indication in his original paper that the test was intended as a test for ‘intelligence’ as well. However, the test has often been liberally reinterpreted by later researchers as also being a test for intelligence, regardless of what Turing may or may not have intended.

  2. While I do not object to the relatively recent concerns of some researchers that embodiment may be an important factor in intelligent behaviour, a seemingly overlooked fact in the history of AI is that Turing was also aware of the importance of embodiment, particularly in communication. In an unfinished paper, written in 1948, but published long after his death, he wrote that the learning of languages by a machine seemed to “depend rather too much on sense organs and locomotion to be feasible” (Turing 1948). He envisioned a machine which would include “television cameras, microphones, loudspeakers, wheels and ‘handling servomechanisms’ as well as some sort of ‘electronic brain’ ... In order that the machine should have a chance of finding things out for itself it should be allowed to roam the countryside”. He further speculated that this “method is probably the ‘sure’ way of producing a thinking machine” (Turing 1948). In his day, such machines would have been enormous when constructed with available technology, and Turing consequently felt that such a machine would be “too slow and impracticable” (Turing 1948). By the time Turing’s ideas on this subject were published, ‘mainstream’ AI research had already been moving in a different direction for some time.

References

  • Deacon, T. (1997). The symbolic species. New York: W. W. Norton and Company.

  • Dick, P. K. (1968). Do androids dream of electric sheep? Garden City, NY: Doubleday.

  • Fisher, R., Patton, B., & Ury, W. (1992). Getting to yes: Negotiating agreement without giving in (2nd ed.). Boston, MA: Houghton Mifflin.

  • French, R. (1990). Subcognition and the limits of the Turing Test. Mind, 99, 53–65.

  • French, R. (2000). The Turing Test: The first fifty years. Trends in Cognitive Sciences, 4(3), 115–121.

  • Harnad, S. (2001). Minds, machines and Turing: The indistinguishability of indistinguishables. Journal of Logic, Language, and Information (special issue on “Alan Turing and Artificial Intelligence”).

  • Lakoff, G., & Johnson, M. (2003). Metaphors we live by (2nd ed.). Chicago: University of Chicago Press.

  • Loebner, H. (1994). In response [to Shieber: Lessons from a restricted Turing Test]. Accessed April 19, 2009, from http://www.loebner.net/Prizef/In-response.html

  • Russell, S., & Norvig, P. (2003). Artificial intelligence: A modern approach (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

  • Schank, R. (1987). What is AI anyway? AI Magazine, 8(4), 59–65.

  • Shieber, S. (1994). Lessons from a restricted Turing Test. Communications of the Association for Computing Machinery, 37(6), 70–78.

  • Turing, A. (1948). Intelligent machinery. Machine Intelligence, 5, 3–23. (Manuscript written in 1948, published in 1969)

  • Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460.


Acknowledgements

The author would like to thank Alan Blair and David Chalmers for their words of encouragement regarding an early draft of this paper, and the anonymous reviewers for their helpful feedback.

Author information

Correspondence to Jamie Cullen.


Cite this article

Cullen, J. Imitation Versus Communication: Testing for Human-Like Intelligence. Minds & Machines 19, 237–254 (2009). https://doi.org/10.1007/s11023-009-9149-3
