This article is an accompaniment to Anthony Freeman’s review of Views into the Chinese Room, reflecting on some pertinent outstanding questions about the Chinese room argument. Although there is general agreement in the artificial intelligence community that the CRA is somehow wrong, debate continues on exactly why and how it is wrong. Is there a killer counter-argument and, if so, what is it? One remarkable fact is that the CRA is prototypically a thought experiment, yet it has been very little discussed from the perspective of thought experiments in general. Here, I argue that the CRA fails as a thought experiment because it commits the fallacy of undersupposing, i.e., it leaves too many details to be filled in by the audience. Since different commentators will often fill in details differently, leading to different opinions of what constitutes a decisive counter, the result is 21-plus years of inconclusive debate.
I argue in this article that there is a mistake in Searle's Chinese room argument that has not received sufficient attention. The mistake stems from Searle's use of the Church-Turing thesis. Searle assumes that the Church-Turing thesis licences the assumption that the Chinese room can run any program. I argue that it does not, and that this assumption is false. A number of possible objections are considered and rejected. My conclusion is that it is consistent with Searle's argument to hold onto the claim that understanding consists in the running of a program.
The Chinese room argument is a thought experiment of John Searle (1980a) and associated (1984) derivation. It is one of the best known and widely credited counters to claims of artificial intelligence (AI), that is, to claims that computers do or at least can (someday might) think. According to Searle’s original presentation, the argument is based on two key claims: brains cause minds and syntax doesn’t suffice for semantics. Its target is what Searle dubs “strong AI.” According to strong AI, Searle says, “the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states” (1980a, p. 417). Searle contrasts strong AI with “weak AI.” According to weak AI, computers just simulate thought, their seeming understanding isn’t real understanding (just as-if), their seeming calculation is only as-if calculation, etc. Nevertheless, computer simulation is useful for studying the mind (as for studying the weather and other things).
In this paper I submit that the “Chinese room” argument rests on the assumption that understanding a sentence necessarily implies being conscious of its content. However, this assumption can be challenged by showing that two notions of consciousness come into play, one to be found in AI, the other in Searle’s argument, and that the former is an essential condition for the notion used by Searle. If Searle discards the first, he not only has trouble explaining how we can learn a language but finds the validity of his own argument in jeopardy.
I argue that John Searle's (1980) influential Chinese room argument (CRA) against computationalism and strong AI survives existing objections, including Block's (1998) internalized systems reply, Fodor's (1991b) deviant causal chain reply, and Hauser's (1997) unconscious content reply. However, a new "essentialist" reply I construct shows that the CRA as presented by Searle is an unsound argument that relies on a question-begging appeal to intuition. My diagnosis of the CRA relies on an interpretation of computationalism as a scientific theory about the essential nature of intentional content; such theories often yield non-intuitive results in non-standard cases, and so cannot be judged by such intuitions. However, I further argue that the CRA can be transformed into a potentially valid argument against computationalism simply by reinterpreting it as an indeterminacy argument that shows that computationalism cannot explain the ordinary distinction between semantic content and sheer syntactic manipulation, and thus cannot be an adequate account of content. This conclusion admittedly rests on the arguable but plausible assumption that thought content is interestingly determinate. I conclude that the viability of computationalism and strong AI depends on their addressing the indeterminacy objection, but that it is currently unclear how this objection can be successfully addressed.
John Searle’s Chinese Room Argument purports to demonstrate that syntax is not sufficient for semantics and, hence, because computation cannot yield understanding, that the computational theory of mind, which equates the mind to an information processing system based on formal computations, fails. In this paper, we use the CRA, and the debate that emerged from it, to develop a philosophical critique of recent advances in robotics and neuroscience. We describe results from a body of work that contributes to blurring the divide between biological and artificial systems: so-called animats, autonomous robots that are controlled by biological neural tissue, and what may be described as remote-controlled rodents, living animals endowed with augmented abilities provided by external controllers. We argue that, even though at first sight these chimeric systems may seem to escape the CRA, on closer analysis they do not. We conclude by discussing the role of body–brain dynamics in the processes that give rise to genuine understanding of the world, in line with recent proposals from enactive cognitive science.
It was in 1980 that John Searle first opened the door of his Chinese Room, purporting to show that the conscious mind cannot, in principle, work like a digital computer. Searle, who speaks no Chinese, stipulated that locked in this fictitious space he had a supply of different Chinese symbols, together with instructions for using them. When Chinese characters were passed in to him, he would consult the instructions and pass out more symbols. Neither input nor output would mean anything to him, but it would look to the outsider as though he were answering in Chinese the questions that were being passed in to him. That, claimed Searle, is exactly the situation with the notorious Turing test for computer intelligence. Only from the outside does the computer appear to understand the questions and answers. Inside, all is a formal shuffling of meaningless symbols. John Preston and Mark Bishop, eds., Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Oxford University Press, 2002, 410pp, £50, ISBN 0198250576.
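As a purely illustrative aside (not drawn from the book under review), the formal rule-following described above can be sketched in a few lines of code. The rule table, the Chinese strings, and the function name below are all invented for the example; the point is only that nothing in the program ever interprets the symbols it shuffles.

```python
# A minimal, invented sketch of the room's procedure: symbols come in,
# a rule book pairs them with symbols to pass back out, and at no point
# does the program (or the man following it) interpret what they mean.

RULE_BOOK = {
    # Hypothetical entries: "when you receive this squiggle, return that squoggle".
    "你好吗？": "我很好，谢谢。",
    "你叫什么名字？": "我没有名字。",
}

def chinese_room(incoming: str) -> str:
    """Return whatever the rule book pairs with the incoming string.

    The matching is purely formal: strings are compared shape-for-shape,
    never translated or understood.
    """
    return RULE_BOOK.get(incoming, "请再说一遍。")  # fallback: another canned token

if __name__ == "__main__":
    # From outside, the exchange can look like conversation in Chinese.
    print(chinese_room("你好吗？"))
```

From the outside such a lookup can appear conversational; inside, it is exactly the shape-matching with no attached meaning that Searle describes.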
_The Chinese Room argument_, John Searle's (1980a) thought experiment and associated (1984) derivation, is one of the best known and widely credited counters to claims of artificial intelligence (AI), i.e., to claims that computers _do_ or at least _can_ (someday might) think. According to Searle's original presentation, the argument is based on two truths: _brains cause minds_, and _syntax doesn't suffice for semantics_. Its target Searle dubs "strong AI": "according to strong AI," Searle says, "the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really _is_ a mind in the sense that computers given the right programs can be literally said to _understand_ and have other cognitive states" (1980a, p. 417). Searle contrasts "strong AI" with "weak AI". According to weak AI, Searle says, computers just simulate thought.
To convince us that computers cannot have mental states, Searle (1980) imagines a “Chinese room” that simulates a computer that “speaks” Chinese and asks us to find the understanding in the room. It's a trick. There is no understanding in the room, not because computers can't have it, but because the room's computer-simulation is defective. Fix it and understanding appears. Abracadabra!
Detractors of Searle’s Chinese Room Argument have arrived at a virtual consensus that the mental properties of the Man performing the computations stipulated by the argument are irrelevant to whether computational cognitive science is true. This paper challenges this virtual consensus to argue for the first of the two main theses of the persons reply, namely, that the mental properties of the Man are what matter. It does this by challenging many of the arguments and conceptions put forth by the systems and logical replies to the Chinese Room, either reducing them to absurdity or showing how they lead, on the contrary, to conclusions the persons reply endorses. The paper bases its position on the Chinese Room Argument on additional philosophical considerations, the foundations of the theory of computation, and theoretical and experimental psychology. The paper purports to show how all these dimensions tend to support the proposed thesis of the persons reply.
Charles Babbage began the quest to build an intelligent machine in the nineteenth century. Despite finishing neither the Difference Engine nor the Analytical Engine, he was aware that the use of mental language for describing the functioning of such machines was figurative. Reversing this cautious stance, Alan Turing put forward two decisive ideas that helped give birth to Artificial Intelligence: the Turing machine and the Turing test. Nevertheless, a philosophical problem arises from regarding intelligence simulation and make-believe as sufficient to establish that programmed computers are intelligent and have mental states, especially given the nature of mind and its characteristic first-person viewpoint. The origin of Artificial Intelligence is undoubtedly linked to the accounts that inspired John Searle to coin the term strong AI, the view that simply equates computers and minds. Especially emphasising the divergence between algorithmic processes and intentional mental states, the Chinese Room thought experiment shows that, since the mind is embodied and able to realise when linguistic understanding takes place, mental states require material implementation, a point that directly conflicts with the accounts that reduce the mind to the functioning of a programmed computer. The experience of linguistic understanding with its typical quale leads to other important philosophical issues. Searle’s theory of intentionality holds that intentional mental states have conditions of satisfaction and appear in semantic networks; thus people know when they understand and what terms are about. In contrast, a number of philosophers maintain that consciousness is only an illusion and that it plays no substantial biological role. However, consciousness is a built-in feature of the system. Moreover, neurological evidence suggests that conscious mental states, qualia and emotions enhance survival chances and are an important part of the phenomenal side of mental life and its causal underpinnings. This leaves an important gap between simulating a mind and replicating the properties that allow for mental states and consciousness. On this score, the Turing test and the evidence it offers clearly overestimate simulation and verisimilar make-believe, since such evidence is insufficient to establish that programmed computers have mental life. In summary, this dissertation criticises views which hold that programmed computers are minds and minds are nothing but computers. Despite the arguments in favour of such an equation, they all fail to properly reduce the mind and its first-person viewpoint. Accordingly, the burden of proof still lies with the advocates of strong AI and with those who are willing to deny fundamental parts of the mind to make room for machine intelligence.
Searle’s Chinese room argument (CRA) was recently charged with being unsound because it makes a logical error. It is shown here that this charge is based on a misinterpretation of the modal scope of a major premise of the CRA and that the CRA does not commit the logical error with which it is charged.
This paper is a follow-up to the first part of the persons reply to the Chinese Room Argument. The first part claims that the mental properties of the person appearing in that argument are what matter to whether computational cognitive science is true. This paper tries to discern what those mental properties are by applying a series of hypothetical psychological and strengthened Turing tests to the person, and argues that the results support the thesis that the Man performing the computations characteristic of understanding Chinese actually understands Chinese. The supposition that the Man does not understand Chinese has gone virtually unquestioned in this foundational debate. The persons reply acknowledges the intuitive power behind that supposition, but knows that brute intuitions are not epistemically sacrosanct. Like many intuitions humans have had, and later deposed, this intuition does not withstand experimental scrutiny. The second part of the persons reply consequently holds that computational cognitive science is confirmed by the Chinese Room thought experiment.
More than a decade ago, philosopher John Searle started a long-running controversy with his paper “Minds, Brains, and Programs” (Searle, 1980a), an attack on the ambitious claims of artificial intelligence (AI). With his now famous _Chinese Room_ argument, Searle claimed to show that despite the best efforts of AI researchers, a computer could never recreate such vital properties of human mentality as intentionality, subjectivity, and understanding. The AI research program is based on the underlying assumption that all important aspects of human cognition may in principle be captured in a computational model. This assumption stems from the belief that beyond a certain level, implementational details are irrelevant to cognition. According to this belief, neurons, and biological wetware in general, have no preferred status as the substrate for a mind. As it happens, the best examples of minds we have at present have arisen from a carbon-based substrate, but this is due to constraints of evolution and possibly historical accidents, rather than to an absolute metaphysical necessity. As a result of this belief, many cognitive scientists have chosen to focus not on the biological substrate of the mind, but instead on the abstract _causal structure_ that the mind embodies (at an appropriate level of abstraction). The view that it is abstract causal structure that is essential to mentality has been an implicit assumption of the AI research program since Turing (1950), but was first articulated explicitly, in various forms, by Putnam (1960), Armstrong (1970) and Lewis (1970), and has become known as _functionalism_. From here, it is a very short step to _computationalism_, the view that computational structure is what is important in capturing the essence of mentality. This step follows from a belief that any abstract causal structure can be captured computationally: a belief made plausible by the Church–Turing Thesis, which articulates the power of computation.
A computer can come to understand natural language the same way Helen Keller did: by using “syntactic semantics”, a theory of how syntax can suffice for semantics, i.e., how semantics for natural language can be provided by means of computational symbol manipulation. This essay considers real-life approximations of Chinese Rooms, focusing on Helen Keller’s experiences growing up deaf and blind, locked in a sort of Chinese Room yet learning how to communicate with the outside world. Using the SNePS computational knowledge-representation system, the essay analyzes Keller’s belief that learning that “everything has a name” was the key to her success, enabling her to “partition” her mental concepts into mental representations of words, objects, and the naming relations between them. It next looks at Herbert Terrace’s theory of naming, which is akin to Keller’s, and which only humans are supposed to be capable of. The essay suggests that computers at least, and perhaps non-human primates, are also capable of this kind of naming.
Ford’s Helen Keller Was Never in a Chinese Room claims that my argument in How Helen Keller Used Syntactic Semantics to Escape from a Chinese Room fails because Searle and I use the terms ‘syntax’ and ‘semantics’ differently, hence are at cross purposes. Ford has misunderstood me; this reply clarifies my theory.
Searle’s Chinese Room Argument (CRA) has been the object of great interest in the philosophy of mind, artificial intelligence and cognitive science since its initial presentation in ‘Minds, Brains and Programs’ in 1980. It is by no means an overstatement to assert that it has been a main focus of attention for philosophers and computer scientists of many stripes. It is then especially interesting to note that relatively little has been said about the detailed logic of the argument, whatever significance Searle intended the CRA to have. The problem with the CRA is that it involves a very strong modal claim, the truth of which is both unproved and highly questionable. So it will be argued here that the CRA does not prove what it was intended to prove.
William Rapaport, in “How Helen Keller used syntactic semantics to escape from a Chinese Room” (Rapaport 2006), argues that Helen Keller was in a sort of Chinese Room, and that her subsequent development of natural language fluency illustrates the flaws in Searle’s famous Chinese Room Argument and provides a method for developing computers that have genuine semantics (and intentionality). I contend that his argument fails. In setting the problem, Rapaport uses his own preferred definitions of semantics and syntax, but he does not translate Searle’s Chinese Room argument into that idiom before attacking it. Once the Chinese Room is translated into Rapaport’s idiom (in a manner that preserves the distinction between meaningful representations and uninterpreted symbols), I demonstrate how Rapaport’s argument fails to defeat the CRA. This failure brings a crucial element of the Chinese Room Argument to the fore: the person in the Chinese Room is prevented from connecting the Chinese symbols to his/her own meaningful experiences and memories. This issue must be addressed before any victory over the CRA is announced.
John Searle begins his "Consciousness, Explanatory Inversion and Cognitive Science" with: "Ten years ago in this journal I published an article criticising what I call Strong AI, the view that for a system to have mental states it is sufficient for the system to implement the right sort of program with right inputs and outputs. Strong AI is rather easy to refute and the basic argument can be summarized in one sentence: _a system, me for example, could implement a program for understanding Chinese, for example, without understanding any Chinese at all._ This idea, when developed, became known as the Chinese Room Argument." The Chinese Room Argument can be refuted in one sentence.
Searle's Chinese Room was supposed to prove that computers can't understand: the man in the room, following, like a computer, syntactical rules alone, though indistinguishable from a genuine Chinese speaker, doesn't understand a word. But such a room is impossible: the man won't be able to respond correctly to questions like "What is the time?", even though such an ability is indispensable for a genuine Chinese speaker. Several ways to provide the room with the required ability are considered, and it is concluded that for each of these the room will have understanding. Hence, Searle's argument is invalid.
Employing Searle’s views, I begin by arguing that students of Mathematics behave similarly to machines that manage symbols using a set of rules. I then consider two types of Mathematics, which I call Cognitive Mathematics and Technical Mathematics respectively. The former type relates to concepts and meanings, logic and sense, whilst the latter relates to algorithms, heuristics, rules and application of various techniques. I claim that an upgrade in the school teaching of Cognitive Mathematics is necessary. The aim is to change the current mentality of the stakeholders so as to compensate for the undue value presently attached to Technical Mathematics, due to advances in technology and its applications, and thus render the two sides of Mathematics equal. Furthermore, I suggest a reorganization/systematization of School Mathematics into a cognitive network to facilitate students’ understanding of the subject. The final goal is the transition from mechanical execution of rules to better understanding and in-depth knowledge of Mathematics.
John Searle has argued that one can imagine embodying a machine running any computer program without understanding the symbols, and hence that purely computational processes do not yield understanding. The disagreement this argument has generated stems, I hold, from ambiguity in talk of 'understanding'. The concept is analysed as a relation between subjects and symbols having two components: a formal and an intentional. The central question, then, becomes whether a machine could possess the intentional component with or without the formal component. I argue that the intentional state of a symbol's being meaningful to a subject is a functionally definable relation between the symbol and certain past and present states of the subject, and that a machine could bear this relation to a symbol. I sketch a machine which could be said to possess, in primitive form, the intentional component of understanding. Even if the machine, in lacking consciousness, lacks full understanding, it contributes to a theory of understanding and constitutes a counterexample to the Chinese Room argument.
The purpose of this paper is to explore a possible resolution to one of the main objections to machine thought as propounded by Alan Turing in the imitation game that bears his name. That machines will, at some point, be able to think is the central idea of this text, a claim supported by a schema posited by Andy Clark and David Chalmers in their paper, “The Extended Mind” (1998). Their notion of active externalism is used to support, strengthen and further what John Searle calls “the systems reply” to his objection to machine thought or strong Artificial Intelligence in his Chinese Room thought experiment. Relevant objections and replies to these objections are considered, and then some conclusions about machine thought and the Turing test are examined.
The "Chinese Room" controversy between Searle and Churchland and Churchland over whether computers can think is subjected to Derridean "deconstruction." There is a hidden complicity underlying the debate which upholds traditional subject/object metaphysics, while deferring to future empirical science an account of the problematic semantic relation between brain syntax and the perceptible world. I show that an empirical solution along the lines hoped for is not scientifically conceivable at present. An alternative account is explored, based on the productivity of neural nets, in which the semantic relation is found to be dynamical: a spontaneous, stochastic, self-organizing process.
John Searle's Chinese room argument is perhaps the most influential and widely cited argument against artificial intelligence. Understood as targeting AI proper (claims that computers can think or do think), Searle's argument, despite its rhetorical flash, is logically and scientifically a dud. Advertised as effective against AI proper, the argument, in its main outlines, is an ignoratio elenchi. It musters persuasive force fallaciously by indirection, fostered by equivocal deployment of the phrase "strong AI" and reinforced by equivocation on the phrase "causal powers equal to those of brains." On a more carefully crafted understanding, taken just to target the metaphysical identification of thought with computation and not AI proper, the argument is still unsound, though more interestingly so. It's unsound in ways difficult for high church ("someday my prince of an AI program will come") believers in AI to acknowledge without undermining their high church beliefs. The ad hominem bite of Searle's argument against the high church persuasions of so many cognitive scientists, I suggest, largely explains the undeserved repute this really quite disreputable argument enjoys among them.
Imagine advanced computers that could, by virtue merely of being programmed in the right ways, act, react, communicate, and otherwise behave like humans. Might such computers be capable of understanding, thinking, believing, and the like? The framework developed in this paper for tackling challenging questions of concept application answers in the affirmative, contrary to Searle’s famous ‘Chinese Room’ thought experiment, which purports to prove that ascribing such mental processes to computers like these would be necessarily incorrect. The paper begins by arguing that the core issue concerns language, specifically the discourse-community-guided mapping of phenomena onto linguistic categories. It then offers a model of how people adapt language to deal with novel states of affairs and thereby lend generality to their words, employing processes of assimilation, lexemic creation, and accommodation. Attributions of understanding to some computers lie in the middle range on a spectrum of acceptability and are thus reasonable. Possible objections deriving from Searle’s writings require supplementing the model with distinctions between present and future acceptability, and between contemplated and uncontemplated word uses, as well as a literal-figurative distinction that is more sensitive than Searle’s to actual linguistic practice and the multiplicity of subsenses possible within a single literal sense. The paper then critiques two misleading rhetorical features of Searle’s Chinese Room presentation, and addresses a contemporary defense of Searle that seems to confront the sociolinguistic issue, but fails to allow for intrasense accommodation. It concludes with a brief consideration of the proper course for productive future discussion.
Viewed in the light of the remarkable performance of ‘Watson’, IBM’s proprietary artificial intelligence computer system capable of answering questions posed in natural language, on the US general knowledge quiz show ‘Jeopardy’, we review two experiments on formal systems, one in the domain of quantum physics, the other involving a pictographic languaging game, whereby behaviour seemingly characteristic of domain understanding is generated by the mere mechanical application of simple rules. By re-examining both experiments in the context of Searle’s Chinese Room Argument, we suggest their results merely endorse Searle’s core intuition: that ‘syntactical manipulation of symbols is not sufficient for semantics’. Although, pace Watson, some artificial intelligence practitioners have suggested that more complex, higher-level operations on formal symbols are required to instantiate understanding in computational systems, we show that even high-level calls to Google Translate would not enable a computer qua ‘formal symbol processor’ to understand the language it processes. We thus conclude that even the most recent developments in ‘quantum linguistics’ will not enable computational systems to genuinely understand natural language.