
Helen Keller Was Never in a Chinese Room


Abstract

William Rapaport, in “How Helen Keller used syntactic semantics to escape from a Chinese Room” (Rapaport 2006), argues that Helen Keller was in a sort of Chinese Room, and that her subsequent development of natural language fluency both illustrates the flaws in Searle’s famous Chinese Room Argument (CRA) and provides a method for developing computers that have genuine semantics (and intentionality). I contend that his argument fails. In setting up the problem, Rapaport uses his own preferred definitions of semantics and syntax, but he does not translate Searle’s Chinese Room Argument into that idiom before attacking it. Once the Chinese Room is translated into Rapaport’s idiom (in a manner that preserves the distinction between meaningful representations and uninterpreted symbols), I show that Rapaport’s argument fails to defeat the CRA. This failure brings a crucial element of the CRA to the fore: the person in the Chinese Room is prevented from connecting the Chinese symbols to his or her own meaningful experiences and memories. This issue must be addressed before any victory over the CRA can be announced.

Notes

  1. Rapaport calls the Room’s occupant “Searle-in-the-room”, and I will follow his convention. Rapaport also calls Searle himself “Searle-the-philosopher”. When considering the Systems Reply, I’ll call the person who memorizes the program and the symbols and works outdoors “Searle-outside-the-room”, and for the Robot Reply, “Searle-driving-the-robot”.

  2. Searle makes a similar argument for syntax—that nothing has syntactic properties intrinsically. Things only become symbols when we minded creatures treat them as symbols (Searle 1992, Chap. 9). Evaluating Searle’s argument on the mind-dependent nature of syntax is beyond the scope of this paper, but if Searle is correct here, it would be doubly devastating to positions like Rapaport’s.

  3. Rapaport recognizes the distinction between the two different understandings of syntax/semantics here: “in NL [natural language], there are grammatical properties and relations among the words in the sentence ‘Ricky loves Lucy’ (subject, verb, object), and there are semantic relations between the words ‘lawyer’ and ‘attorney’ (synonymy), or ‘tall’ and ‘short’ (antonymy). But, in the classical sense of syntax (Morris 1938), both of these sorts of relations are syntactic,” (Rapaport 2006, p. 392, emphasis added).

  4. The main reason I am not concerned with whether this account of syntax (allowing the inclusion of semantic relations within syntax) is true to Morris is that Rapaport’s position is what it is, independent of whether it comes directly from Morris, requires some interpretation of Morris, or is an extension of Morris’s ideas.

  5. A contention that Rapaport has made in several works besides the article in question here, e.g., “The linguistic and perceptual ‘input’ to a cognitive agent can be considered a syntactic domain…,” (Rapaport 1995, p. 59, italics in the original).

  6. We may be tempted to think that C is a subset of E, but we should allow unconscious, non-experiential representations of the Chinese symbols. Even when Searle-in-the-room is in a well-earned dreamless sleep, we would want to say things like: he believes that some particular Chinese symbol is one that he saw for the first time yesterday, he believes that this symbol is more common than that one, etc.

  7. Here is what Rapaport really said if we distinguish the definitions of syntax and semantics, as I suggest: “This would eventually be the experience of Searle-in-the-room, who would then have both semantic_S methods for doing things and purely syntactic_S ones. The semantic_S methods, however, are strictly internal: they are not correspondences among words and things, but syntactic_R correspondences among internal nodes for words and things.”

  8. Though I believe Searle’s original response to the Robot Reply will survive Rapaport’s challenge, we can construct a variation of the Chinese Room that causes additional difficulty for the standard Robot Reply (e.g., Crane 1996)—I call it the Twisted Chinese Room. To build the Twisted Chinese Room, put all the Chinese characters in one column (in any arbitrary order), and all their meanings in a second column. Then move all the characters down one row, taking the last character from the bottom back up to the top. Change all the symbols in the Chinese Room instruction book accordingly. Then add two steps to the instructions, so that the first thing Searle-in-the-room does when he gets some Chinese symbols in his in-box is to transform the message into Twisted Chinese, and the last thing he does is to transform the output from Twisted back into Regular Chinese. Now, the vast majority of the operations that Searle-in-the-room performs are done on Twisted Chinese characters. Without the final transform-back step, the result would be gibberish (certainly not comprehensible to any native Chinese speaker). So, in the Robot Reply, would the interactions with the external environment confer understanding of Chinese or Twisted Chinese? Both? Neither? Would this divorce meaning from understanding? I don’t see a good answer, so I think the best solution is to avoid the initial claim that Searle-in-the-Room would understand anything on the basis of shuffling uninterpreted symbols around. An anonymous reviewer has remarked that such a Twisting, if done one character or word at a time, might allow a Chinese speaker to learn Twisted Chinese, rather than automatically yielding gibberish. That may be so, but it would presume comprehension of Chinese to start with.
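
    As a toy illustration (the symbols and the echoing “rulebook” below are placeholder stand-ins of my own, not anything from Rapaport’s or Searle’s texts), the Twisting can be modeled as a fixed substitution obtained by cyclically shifting the symbol list by one position:

      # Toy sketch of the Twisted Chinese Room's two added steps.
      # The symbols are placeholders, not actual Chinese characters.
      SYMBOLS = ["A", "B", "C", "D", "E"]

      # Shift every character down one row; the last wraps back to the top.
      TWISTED = SYMBOLS[-1:] + SYMBOLS[:-1]
      TO_TWISTED = dict(zip(SYMBOLS, TWISTED))
      FROM_TWISTED = dict(zip(TWISTED, SYMBOLS))

      def twist(message):
          # First added step: rewrite the incoming message in Twisted Chinese.
          return "".join(TO_TWISTED[ch] for ch in message)

      def untwist(message):
          # Last added step: rewrite the output back into regular Chinese.
          return "".join(FROM_TWISTED[ch] for ch in message)

      def rulebook(twisted_message):
          # Placeholder for the instruction book: every operation it performs
          # is on Twisted Chinese symbols; here it simply echoes its input.
          return twisted_message

      incoming = "ABCDE"
      outgoing = untwist(rulebook(twist(incoming)))
      assert outgoing == incoming
      # Without the final untwist(), the reply would be "EABCD": gibberish
      # to a native speaker of (regular) Chinese.

    Nothing in this sketch settles whether the symbol-shuffling would confer understanding of Chinese or of Twisted Chinese; it only makes vivid that nearly all of the room’s operations are performed on the Twisted symbols.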

  9. Searle might need super-memory in order to memorize such a vast rulebook, including all the rules about keeping track of prior responses that would be needed for a convincing conversational simulation. Rules to handle repetitions of questions, or questions like, “What did you just say?” would require a magnificent memory indeed.

  10. Rapaport isn’t the only person making this sort of move; Simon and Eisenstadt do the same when they add windows to the Chinese Room in “A Chinese Room that Understands,” (Simon and Eisenstadt 2002).

  11. We could have two systems, identical at the formal, higher level of beliefs (and “beliefs”) and desires (and “desires”), with very different underlying causal relations. Searle would call this the difference between functional causal relations and causal powers. We can also describe the difference as between what a system does (at the higher level of description) and why it does that (at the lower, micro-level of description). Searle claims that the “why” matters to consciousness and intentionality.

    This may shed some light on Rey’s claim that Searle is a sort of functionalist (Rey 1986, 2002)—if we suppose we had a causal/functional account of human consciousness in the brain and we duplicated those causal/functional processes in a silicon-based robot, such that the robot’s “brain” went from one state to the next for the very same reasons that the human brain went from one analogous state to the analogous next state, Searle would say that the robot has a mind. Rey recognizes Searle’s demand, but doesn’t see the spirit of it: “Searle’s phrasing here actually suggests that analysis of a belief must include an account of why the belief actually manages to have the causal role it does. But it’s hard to see what exactly such a demand would come to, much less why he or anyone else would want to insist upon it,” (Rey 2002, p. 206, footnote 11).

    Perhaps this will help. Since we don’t know what, in the brain, actually causes conscious mental states, we don’t know at what level of description to locate the causal relations we would need to duplicate in order to produce conscious mentality. Suppose we choose a level of description (say, of beliefs, thoughts and desires—the level where propositional attitudes occur), and we jury-rig a program so that the states succeed each other in exactly the same way that they would in a human being, but the underlying reasons (the causal mechanisms) are very different. The reasons why State X_machine is followed by State Y_machine are entirely different from the reasons why State X_human is followed by State Y_human. It is possible that the thing we sought, the essence of the mind, depends on those underlying reasons and not on the causal series that we have arranged via the program. A possible analogy: one system has objects moving because of gravity, and another system has similar objects moving in similar patterns, but via God’s Will. The underlying “reason why” would make a substantial difference.
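
    To make the contrast concrete, here is a minimal sketch (my own toy illustration, not an analysis drawn from Searle or Rey): two systems that produce exactly the same state sequence (what they do) while the mechanism driving each transition (why they do it) differs:

      # Two toy systems with identical higher-level state sequences but
      # different lower-level mechanisms for producing each transition.

      class LookupSystem:
          # Moves from state to state by consulting a hard-wired table.
          TABLE = {"X": "Y", "Y": "Z", "Z": "X"}

          def __init__(self):
              self.state = "X"

          def step(self):
              self.state = self.TABLE[self.state]
              return self.state

      class ArithmeticSystem:
          # Produces the same sequence, but by arithmetic on an index.
          STATES = ["X", "Y", "Z"]

          def __init__(self):
              self.index = 0

          def step(self):
              self.index = (self.index + 1) % len(self.STATES)
              return self.STATES[self.index]

      a, b = LookupSystem(), ArithmeticSystem()
      assert [a.step() for _ in range(6)] == [b.step() for _ in range(6)]
      # Identical at the higher level of description; the lower-level
      # "reason why" each transition occurs differs, and that is the level
      # Searle claims matters for consciousness and intentionality.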

References

  • Crane, T. (1996). The mechanical mind: A philosophical introduction to minds, machines and mental representation. London: Penguin.

  • Jahren, N. (1990). Can semantics be syntactic? Synthese, 82, 309–328.

  • Morris, C. (1938). Foundations of the theory of signs. Chicago: University of Chicago Press.

  • Morris, C. (1971). Writings on the general theory of signs. The Hague: Mouton.

  • Preston, J., & Bishop, M. (Eds.). (2002). Views into the Chinese Room: New essays on Searle and artificial intelligence. Oxford: Oxford University Press.

  • Rapaport, W. J. (1995). Understanding understanding: Syntactic semantics and computational cognition. Philosophical Perspectives, 9, 49–88.

  • Rapaport, W. J. (2006). How Helen Keller used syntactic semantics to escape from a Chinese Room. Minds and Machines, 16, 381–436.

  • Rey, G. (1986). What’s really going on in Searle’s “Chinese Room”. Philosophical Studies, 50, 169–185.

  • Rey, G. (2002). Searle’s misunderstandings of functionalism and strong AI. In J. Preston & M. Bishop (Eds.), Views into the Chinese Room: New essays on Searle and artificial intelligence (pp. 201–225). Oxford: Oxford University Press.

  • Searle, J. R. (1984). Minds, brains and science. Cambridge, MA: Harvard University Press.

  • Searle, J. R. (1992). The rediscovery of the mind. Cambridge, MA: The MIT Press.

  • Searle, J. R. (1997). The mystery of consciousness. New York, NY: The New York Review of Books.

  • Simon, H. A., & Eisenstadt, S. A. (2002). A Chinese Room that understands. In J. Preston & M. Bishop (Eds.), Views into the Chinese Room: New essays on Searle and artificial intelligence (pp. 95–108). Oxford: Oxford University Press.

Acknowledgments

I would like to express my gratitude most especially to David Cole, for his very productive discussions on the Chinese Room, and for reading several drafts of this paper. I would also like to thank Tristram McPherson, James Moor, Mark Newman, Sean Walsh and an anonymous reviewer for Minds and Machines for their very helpful questions and comments. Any errors that remain are entirely my own.

Author information

Correspondence to Jason Ford.

Cite this article

Ford, J. Helen Keller Was Never in a Chinese Room. Minds & Machines 21, 57–72 (2011). https://doi.org/10.1007/s11023-010-9220-0
