
Classical AI linguistic understanding and the insoluble Cartesian problem

Open Forum

Abstract

This paper examines an insoluble Cartesian problem for classical AI, namely, that linguistic understanding involves knowledge and awareness of an utterance u’s meaning, a cognitive process that is irreducible to algorithms. As analyzed, Descartes’ view of reason and intelligence has paradoxically encouraged certain classical AI researchers to suppose that linguistic understanding suffices for machine intelligence. Several advocates of the Turing Test, for example, assume that linguistic understanding comprises only computational processes, which can be recursively decomposed into algorithmic mechanisms. Against this background, in the first section I explain Descartes’ view of language and mind. In the second section, I analyze the imitation game as a method for assessing intelligence, showing that Turing bites the bullet with it. Then, in the third section, I elaborate on Schank and Abelson’s Script Applier Mechanism (hereafter SAM), which supposedly casts doubt on Descartes’ denial that machines can think. Finally, in the fourth section, I explore a challenge that any algorithmic decomposition of linguistic understanding faces. This challenge, I argue, is the core of the Cartesian problem: knowledge and awareness of meaning require a first-person viewpoint that is irreducible to a decomposition into algorithmic mechanisms.


Notes

  1. Whenever I quote Descartes, I add the Adam and Tannery (AT) reference.

  2. Here I assume the Cartesian co-extension of the terms reason, intelligence and mind.

  3. Longworth addresses the issue of whether linguistic understanding is a form of knowledge. In doing so, he states what desiderata a theory of linguistic understanding must satisfy. His discussion is directly relevant to this essay, because he examines in what sense understanding an utterance requires one to know and be aware of the meaning that it expresses. He states the point as follows: “[…] those states [of understanding] must be of a sort able to interact with ordinary states of belief, knowledge, etc., and to play the same sort of role as those other states in shaping the subject’s consciousness […] What we seek in an account of state-understanding is an account of how such states can play a role in ordinary psychology, how occupying them can impact on the rational development of one’s cognitive economy” (Longworth 2008, pp. 51–52). For the sake of argument, I consider the awareness of u’s meaning as parasitic upon the knowledge required to understand u.

  4. See, for example, Marciszewski and Murawski (1995). In their book they assert that the mechanization of reason is paradigmatic in Leibniz’s logical calculi and Boole’s Algebra of Logic. The relation between logic and the mechanization of reason is pertinent not only to AI, but also to Cognitive Science. Copeland (1993, p. 10), for example, maintains that the philosophy of AI is prior to AI, since Turing wrote “Computing Machinery and Intelligence” in 1950, whereas the Dartmouth conference, which gave AI its name, was organized in 1956. Minsky later stated AI’s goal thus: “Artificial Intelligence is the science of making machines do things that would require intelligence if done by men” (Copeland 1993, p. 1). I quote this passage in order to show how AI’s main goal evolved, from the mechanization of reason in the 19th century (with Babbage, for example) to the making of machines that simulate intelligence, after 1956. By ‘classical AI’ I mean here the approach that attempts to make intelligent machines on the basis of formal rules and representations.

  5. See, for example, Hill (2002). This materialist philosopher attacks Descartes’ Dualism by stating that what can be conceived need not be the case.

  6. The fact that two things are joined does not entail that they are identical. Descartes emphasizes this point in the sixth Meditation, with the pilot-and-ship disanalogy (Descartes 1985a, p. 56, AT VII, 81).

  7. Take, for instance, Clarke’s theory (2003).

  8. I am grateful to an anonymous referee for this journal for raising the issue that AI’s main project is the simulation of intelligence. In fact, Bostrom (2014, p. 6) remarks that the field proceeds “[…] on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”. In this essay, I assume that, prior to the simulation of intelligence, Leibniz, Babbage, Boole and others attempted the mechanization of reason. Take, for example, Babbage’s difference and analytical engines (Swade 2000).

  9. Turing is clear about his non-biological functionalism when he states: “if now some particular machine can be described as a brain we have only to program our digital computer to imitate it and it will also be a brain” (2004a, p. 112).

  10. I will return to this criticism and to the issue of linguistic understanding in the final section.

  11. In Sect. 1, I defined linguistic understanding in terms of knowledge and awareness of a certain meaningful expression. Still, I note that such knowledge and awareness can also be applied to the matching of stories and scripts (see the illustrative sketch at the end of these notes). In fact, although the state-understanding process for stories is more complex than that for expressions, in both cases understanding requires S’s knowledge and awareness.

  12. Take, for instance, Colby’s program (1975), PARRY. This program simulates a paranoid person. All the emphasis is on PARRY’s psychotic linguistic behavior, which is reflected in the answers given by the program.

  13. Here I mean “psychology” in a broad sense, that is, the scientific study of cognitive intelligence and behavior. Moreover, by “psychologically plausible” I mean the possibility of imagining conscious experiences, and how these experiences are supposed to be integrated into our cognitive intelligence and behavior.

  14. Whether Searle endorses a Cartesian view when he holds that intentional mental states have conditions of satisfaction, which are known by an agent (Cf. Searle 1983, p. 64) is indeed debatable. Also, I examine elsewhere the Systems Reply to the Chinese Room argument, which involves a mechanism that cannot be internalized: the agent’s introspection, which is fundamental to run the thought experiment (González 2012). All these points deserve more discussion in another essay.
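
To fix ideas about the sort of algorithmic decomposition discussed in notes 11 and 12, the following is a minimal sketch of script matching in the spirit of SAM. It is not Schank and Abelson's actual implementation; the script contents, function name, and story are illustrative assumptions. The point is that the matching is purely formal: nothing in the program knows or is aware of what the matched words mean.

```python
# Toy sketch of script application (illustrative only; not Schank and
# Abelson's actual SAM). A story is "understood" by aligning its stated
# events with a stereotyped script and filling in the unstated steps
# by default.

RESTAURANT_SCRIPT = ["enter", "sit down", "order", "eat", "pay", "leave"]

def apply_script(story_events, script=RESTAURANT_SCRIPT):
    """Return each script step, marked as explicitly stated in the story
    or merely inferred by default from the script."""
    stated = set(story_events)
    return [(step, "stated" if step in stated else "inferred")
            for step in script]

# A story that mentions only some of the steps.
story = ["enter", "order", "leave"]

for step, status in apply_script(story):
    print(f"{step:9s} -> {status}")

# Such a program can "answer" that the diner presumably ate and paid,
# yet at no point does it know or become aware of what 'eat' or 'pay'
# mean -- which is the Cartesian worry pressed in the final section.
```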

References

  • Block N (1995) The mind as software of the brain. In: Heil J (ed) Philosophy of mind: a guide and anthology. OUP, Oxford, pp 267–274


  • Bostrom N (2014) Superintelligence: paths, dangers, strategies. OUP, Oxford


  • Brown JR (2007) Counter thought experiments. R Inst Philos Suppl 61(82):155–177


  • Clarke D (2003) Descartes’s theory of mind. Clarendon Press, Oxford


  • Colby K (1975) Artificial paranoia. Pergamon Press, New York


  • Copeland J (1993) Artificial intelligence: a philosophical introduction. Blackwell, Oxford


  • Copeland J (2019) The Church-Turing Thesis. Available at: http://plato.stanford.edu/entries/church-turing/. Accessed 22 June 2019

  • Crane T (2003) The mechanical mind: a philosophical introduction to minds, machines and mental representation. Routledge, London


  • Descartes R (1985a) Meditations on first philosophy. In: Cottingham J, Stoothoff R, Murdoch D (eds) The philosophical writings of Descartes, vol II. Cambridge University Press, New York, pp 1–62


  • Descartes R (1985b) Discourse on the method. In: Cottingham J, Stoothoff R, Murdoch D (eds) The philosophical writings of Descartes, vol I. Cambridge University Press, New York, pp 109–151


  • Genova J (1994) Turing’s sexual guessing game. Social Epistemology 8(4):313–326


  • González R (2011) Descartes, las Intuiciones Modales y la IA. Revista Alpha 32:181–198

  • González R (2012) La pieza china: un experimento mental con sesgo cartesiano. Revista Chilena de Neuropsicología 7:1–6

  • González R (2015) ¿Importa la determinación del sexo en el Test de Turing? Revista de Filosofía Aurora 27:277–295

  • Hill C (2002) Imaginability, conceivability, and the mind-body problem. In: Chalmers D (ed) Philosophy of mind: classical and contemporary readings. OUP, Oxford, pp 334–341


  • Lassègue J (1996) What kind of Turing test did Turing have in mind? Tekhnema 3:37–58


  • Longworth G (2008) Linguistic understanding and knowledge. Noûs 42(1):50–79


  • Marciszewski W, Murawski R (1995) Mechanization of reasoning in a historical perspective. Rodopi, Amsterdam/Atlanta


  • Moor J (1976) An analysis of the Turing test. Philosophical Studies 30:249–257. Reprinted in: Shieber S (ed) The Turing test: verbal behavior as the hallmark of intelligence. MIT Press, Cambridge, pp 297–306

  • Nagel T (1974) What is it like to be a bat? Philos Rev 83:435–450


  • Penrose R (1999) The emperor’s new mind. Oxford University Press, Oxford


  • Putnam H (1967) Psychological predicates. In: Capitan WH, Merrill DD (eds) Art, mind, and religion. University of Pittsburgh Press, Pittsburgh. Reprinted in: Heil J (ed) Philosophy of mind: a guide and anthology. Oxford University Press, Oxford, pp 160–167

  • Putnam H (1973) The nature of mental states, originally published as “Psychological predicates”. In: Capitan WH, Merrill DD (eds) Art, mind, and religion. University of Pittsburgh Press, Pittsburgh. Reprinted in: Chalmers D (ed) Philosophy of mind: classical and contemporary readings. Oxford University Press, New York, pp 73–79

  • Schank RC, Abelson RP (1977) Scripts, plans, goals, and understanding. Erlbaum, Hillsdale

  • Searle J (1980) Minds, brains and programs. Behav Brain Sci 3:417–424


  • Searle J (1983) Intentionality: an essay in the philosophy of mind. Cambridge University Press, Cambridge


  • Searle J (1990) Is the brain’s mind a computer program? Scientific American, pp 20–25

  • Swade D (2000) The difference engine: Charles Babbage and the quest to build the first computer. Penguin, London


  • Turing A (1936) On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, series 2, 42:230–265 (with corrections in 43:544–546)

  • Turing A (1990) Computing machinery and intelligence. First published in Mind LIX(236):433–460, October 1950. Reprinted in: Boden M (ed) The philosophy of artificial intelligence. OUP, Oxford, pp 40–66

  • Turing A (2004a) Can digital computers think? BBC radio broadcast, 15 May 1951. Turing Archive reference: B.5. Reprinted in: Shieber S (ed) The Turing test: verbal behavior as the hallmark of intelligence. MIT Press, Cambridge, pp 111–116

  • Turing A (2004b) Intelligent machinery, a heretical theory. Unpublished manuscript of a talk given to the ‘51 Society’, Manchester, England. Turing Archive reference: B.4. Reprinted in: Shieber S (ed) The Turing test: verbal behavior as the hallmark of intelligence. MIT Press, Cambridge, pp 105–109


Author information


Correspondence to Rodrigo González.


About this article

Cite this article

González, R. Classical AI linguistic understanding and the insoluble Cartesian problem. AI & Soc 35, 441–450 (2020). https://doi.org/10.1007/s00146-019-00906-x

