
Yes, She Was!

Reply to Ford’s “Helen Keller Was Never in a Chinese Room”


Abstract

Ford’s “Helen Keller Was Never in a Chinese Room” claims that my argument in “How Helen Keller Used Syntactic Semantics to Escape from a Chinese Room” fails because Searle and I use the terms ‘syntax’ and ‘semantics’ differently, hence are at cross purposes. Ford has misunderstood me; this reply clarifies my theory.


Notes

  1. The “marks” or “units” of this LOT (e.g., nodes of a semantic network; terms and predicates of a language; or their biological analogues, etc.) need not all be alike, either in “shape” or function. Natural languages, e.g., use a wide variety of letters, numerals, etc.; neurons include afferent, internal, and efferent ones (and the former do much of the internalization or “pushing”). Thanks to Albert Goldfain (personal communication) for emphasizing this last point.

  2. And, of course, there are many others; see previous note.

  3. Deus sive natura.

  4. Or “viewed from nowhere”, to use Nagel’s (1986) phrase instead of Spinoza’s; cf. Goldstein (2006): 67–68, 276.

  5. Does Ford mean external signals sent from the external world to the robot’s sensors? Or does he mean internal signals sent from the sensors to the effectors?

  6. Who “recognizes” the diagrams?

  7. [http://www.michaelbach.de/ot/cog_dalmatian/].

  8. In general, we understand, or give meaning to, incoming information by embedding it in the context of our prior beliefs. For further elaboration on this idea, as it relates to contextual vocabulary acquisition, see Rapaport 2003b. And for a commentary from a legal standpoint, see New York Times 2010.

  9. This is the recursive case of understanding. In the base case, to understand is to be able to manipulate the symbols syntactically. See Rapaport 2006, Thesis 3.

  10. On the distinction between computational philosophy and computational psychology, see Shapiro (1992), Rapaport (2003a).

  11. For a computational implementation of this, consistent with the SNePS theory presented in Rapaport (2006), see Shapiro and Bona (2010).

References

  • Cole, D. (1991). Artificial intelligence and personal identity. Synthese, 88, 399–417.

  • Copeland, J. (1993). Artificial intelligence: A philosophical introduction. Oxford: Blackwell.

  • Flanagan, O. J. (1984). The science of mind. Cambridge, MA: MIT Press.

  • Ford, J. M. (in press). Helen Keller was never in a Chinese Room. Minds and Machines.

  • Goldstein, R. N. (2006). Betraying Spinoza: The renegade Jew who gave us modernity. New York: Schocken.

  • Jackendoff, R. (2002). Foundations of language: Brain, meaning, grammar, evolution. Oxford: Oxford University Press.

  • Jahren, N. (1990). Can semantics be syntactic? Synthese, 82, 309–328.

  • Morris, C. (1938). Foundations of the theory of signs. Chicago: University of Chicago Press.

  • Nagel, T. (1986). The view from nowhere. New York: Oxford University Press.

  • New York Times (2010, 5 June). Justice Souter’s counsel. Editorial, p. A20.

  • Parisien, C., & Thagard, P. (2008). Robosemantics: How Stanley the Volkswagen represents the world. Minds and Machines, 18(2), 169–178.

  • Putnam, H. (1975). The meaning of ‘meaning’. Reprinted in Mind, language and reality (pp. 215–271). Cambridge, UK: Cambridge University Press.

  • Rapaport, W. J. (1981). How to make the world fit our language: An essay in Meinongian semantics. Grazer Philosophische Studien, 14, 1–21.

  • Rapaport, W. J. (1985/1986). Non-existent objects and epistemological ontology. Grazer Philosophische Studien, 25/26, 61–95.

  • Rapaport, W. J. (1986a). Philosophy, artificial intelligence, and the Chinese-Room Argument. Abacus, 3 (Summer), 6–17; correspondence, Abacus, 4 (Winter 1987), 6–7, and Abacus, 4 (Spring 1987), 5–7. [http://www.cse.buffalo.edu/~rapaport/Papers/abacus.pdf].

  • Rapaport, W. J. (1986b). Searle’s experiments with thought. Philosophy of Science, 53, 271–279.

  • Rapaport, W. J. (1988). Syntactic semantics: Foundations of computational natural-language understanding. In J. H. Fetzer (Ed.), Aspects of artificial intelligence (pp. 81–131). Dordrecht, Holland: Kluwer Academic Publishers. (Errata online at [http://www.cse.buffalo.edu/~rapaport/Papers/synsem.original.errata.pdf].)

  • Rapaport, W. J. (1990). Computer processes and virtual persons: Comments on Cole’s ‘Artificial intelligence and personal identity’. Technical Report 90-13. Buffalo: SUNY Buffalo Department of Computer Science, May 1990. [http://www.cse.buffalo.edu/~rapaport/Papers/cole.tr.17my90.pdf].

  • Rapaport, W. J. (1995). Understanding understanding: Syntactic semantics and computational cognition. In J. E. Tomberlin (Ed.), Philosophical perspectives, Vol. 9: AI, connectionism, and philosophical psychology (pp. 49–88). Atascadero, CA: Ridgeview.

  • Rapaport, W. J. (1998). How minds can be computational systems. Journal of Experimental and Theoretical Artificial Intelligence, 10, 403–419.

  • Rapaport, W. J. (2000). How to pass a Turing test: Syntactic semantics, natural-language understanding, and first-person cognition. Journal of Logic, Language, and Information, 9(4), 467–490.

  • Rapaport, W. J. (2002). Holism, conceptual-role semantics, and syntactic semantics. Minds and Machines, 12(1), 3–59.

  • Rapaport, W. J. (2003a). What did you mean by that? Misunderstanding, negotiation, and syntactic semantics. Minds and Machines, 13(3), 397–427.

  • Rapaport, W. J. (2003b). What is the ‘context’ for contextual vocabulary acquisition? In P. P. Slezak (Ed.), Proceedings of the 4th joint international conference on cognitive science/7th Australasian society for cognitive science conference (ICCS/ASCS-2003; Sydney, Australia) (Vol. 2, pp. 547–552). Sydney: University of New South Wales.

  • Rapaport, W. J. (2005a). Implementation is semantic interpretation: Further thoughts. Journal of Experimental and Theoretical Artificial Intelligence, 17(4), 385–417.

  • Rapaport, W. J. (2005b). The Turing test. In Encyclopedia of language and linguistics (2nd ed., Vol. 13, pp. 151–159). Oxford: Elsevier.

  • Rapaport, W. J. (2006). How Helen Keller used syntactic semantics to escape from a Chinese room. Minds and Machines, 16(4), 381–436.

  • Rapaport, W. J. (2007). Searle on brains as computers. American Philosophical Association Newsletter on Philosophy and Computers, 6(2) (Spring), 4–9.

  • Rapaport, W. J., & Kibby, M. W. (2007). Contextual vocabulary acquisition as computational philosophy and as philosophical computation. Journal of Experimental and Theoretical Artificial Intelligence, 19(1), 1–17.

  • Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417–457.

  • Searle, J. R. (1990). Is the brain a digital computer? Proceedings and Addresses of the American Philosophical Association, 64(3), 21–37.

  • Shapiro, S. C. (1992). Artificial intelligence. In S. C. Shapiro (Ed.), Encyclopedia of artificial intelligence (2nd ed., pp. 54–57). New York: Wiley.

  • Shapiro, S. C., & Bona, J. P. (2010). The GLAIR cognitive architecture. International Journal of Machine Consciousness, 2, 307–332.

  • Shapiro, S. C., & Rapaport, W. J. (1987). SNePS considered as a fully intensional propositional semantic network. In N. Cercone & G. McCalla (Eds.), The knowledge frontier: Essays in the representation of knowledge (pp. 262–315). New York: Springer.

  • Shapiro, S. C., & Rapaport, W. J. (1991). Models and minds: Knowledge representation for natural-language competence. In R. Cummins & J. Pollock (Eds.), Philosophy and AI: Essays at the interface (pp. 215–259). Cambridge, MA: MIT Press.


Author information

Corresponding author

Correspondence to William J. Rapaport.

Appendix

Ford cites Jahren’s (1990) objections to my view. Let me take this opportunity to reply to Jahren, whose critique of my (1988) theory of syntactic understanding and its application to the Chinese-Room Argument (CRA) shows how easy it is, in discussing these issues, to talk just slightly past one another.

10.1

What, for example, is a natural language, and what does it mean to understand one? For Jahren, a natural language is “a series of signs used by a system”, and “the sine qua non of natural-language understanding ... [is] an ability to take those signs to stand for something else ... in the world” (Jahren 1990: 310, my emphasis). But if a natural language is just “a series of signs”, it follows that to understand it is to understand the series of signs as used by the system—which is a syntactic process. Now, as I urged in “Syntactic Semantics” (Rapaport 1988), to understand, in general, is to map symbols to concepts (see note 9). Thus, for me to understand you is for me to map your symbols to my concepts, which is, to use Jahren’s phrase, taking “those signs to stand for something else”—but not “something in the world” (except in the uninteresting sense that my concepts are things in the world). This is also a syntactic process: Insofar as I internalize your symbols and then map my internalized representations (or counterparts) of your symbols to my concepts, I am doing nothing but internal symbol manipulation (syntax), even though I am taking your “signs to stand for something else”, namely, my concepts.

How do I understand my concepts? Do I take my concepts to stand for something else outside me? Yes—I so take them, although I only have indirect access to the “something else” outside me. The only way I can take your symbols “to stand for something in the world” would, pre-theoretically, have to be either directly or else indirectly via my symbols (concepts). But all of it is indirect, since I can at best take your symbols to stand for the same thing I take mine to stand for, and, in both cases, that’s just more symbols (cf. Rapaport 2000).

10.2

Jahren takes me to task for using ‘mentality’ in a “suprapsychological” sense (citing Flanagan 1984) instead of “in a human sense” (Jahren 1990: 314ff). But what sense is that? Is it determined by human behavior (as in, say, the Turing Test)? If so, then Jahren and I are talking about the same thing, since human mental behavior might be produced by different processes. Is it determined by the way the human brain does mental processing? But that is too strong for my computational philosophical tastes: I am concerned with how mentality, thinking, cognition, understanding—call it what you will—is possible, period. I am not concerned with how human mentality, in particular, works; I take that to be the domain of (computational) cognitive psychology (see note 10). However, I don’t intend (at least, I don’t think I intend) the very weak claim that as long as a computer can simulate human behavior by any means, that would be mentality. I do want to rule out table look-up or the (superhuman) ability to solve any mathematical problem, without error, in microseconds. The former is too finite (it can’t account for productivity); the latter is too perfect (in fact, if viewed as an infinite, God-like ability to know and do everything instantaneously, it, too, is a kind of table look-up that fails to account for productivity; cf. Rapaport 2005b).

Now, having excluded those two extremes, there is still a lot of variety in the middle. So I’ll agree with Jahren that, the extreme cases excepted, “a computational system is minded to the extent that the information processing it performs is functionally [that is, input-output, or behaviorally] equivalent to the information processing in a mind” (Jahren 1990: 315)—presumably, a human mind. However, Jahren says that two mappings are input-output equivalent “because these mappings themselves can be transformed into one another” (Jahren 1990: 315). This seems to me too restrictive, not to say vague (what does it mean to transform one mapping into another?). Jahren gives as an example “solving a matrix equation [which] is said to be equivalent to solving a system of linear equations” (Jahren 1990: 315). But surely two algorithms with the same input-output behavior would be functionally equivalent even if they were not thus transformable. Consider, for instance, two very different algorithms for computing greatest common divisors. They would be functionally equivalent even if there were no way to map parts of one to parts of the other in any way that preserved functional equivalence of the parts.
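By way of illustration only, here is a minimal sketch in Python (the function names are my own, not anything from Jahren or from my earlier papers) of two such procedures: each computes the greatest common divisor of its inputs, one by Euclid’s method of repeated remainders and the other by brute-force search. They are input-output equivalent even though there is no obvious way to transform the one procedure into the other part by part.

```python
# Two extensionally equivalent procedures for the greatest common divisor.

# Euclid's algorithm: repeated remainder-taking.
def gcd_euclid(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b
    return a

# Brute-force search: try every candidate divisor from min(a, b) downward.
def gcd_search(a: int, b: int) -> int:
    for d in range(min(a, b), 0, -1):
        if a % d == 0 and b % d == 0:
            return d
    return max(a, b)  # reached only if one input is 0

# Same input-output behavior on every pair of (positive) inputs, despite
# there being no step-by-step mapping of one procedure onto the other.
assert all(gcd_euclid(a, b) == gcd_search(a, b)
           for a in range(1, 50) for b in range(1, 50))
```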

10.3

Jahren alludes to the symbol-grounding problem: “The semantics R [that is, the semantics in Rapaport’s sense] of a term is given by its position within the entire network” (Jahren 1990: 318). The proper response to this is: ‘Yes and no’. Yes, in the sense that ultimately all is syntactic, hence holistic, as Jahren observes (cf. Rapaport 2002, 2003a). But no in the sense that this misleadingly suggests that nothing in the network represents the external world. For instance, Jahren gives an example of ‘red’ linked as subclass to ‘color’ and as property to ‘apple’, etc. But this omits another, crucial—albeit still internal—link: to a node representing the sensation of redness (see note 11). Some parts of the network represent external objects, so an internal analogue of “reference” is possible.
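A toy sketch in Python may make the point vivid (the node and arc labels below are my own illustrative choices, not actual SNePS case frames): ‘red’ is linked not only to other lexical nodes such as ‘color’ and ‘apple’ but also to a node standing for the sensation of redness, and that node can in turn be taken to represent, still internally, its external cause.

```python
# A toy semantic network: every node and arc is internal to the agent,
# yet one node serves as an internal analogue of an external encounter.
from collections import defaultdict

class ToyNetwork:
    def __init__(self):
        self.arcs = defaultdict(list)   # node -> [(arc label, target node), ...]

    def link(self, source, label, target):
        self.arcs[source].append((label, target))

    def neighbors(self, node):
        return self.arcs[node]

net = ToyNetwork()
net.link("red", "subclass-of", "color")        # 'red' in its taxonomic context
net.link("apple", "has-property", "red")       # 'red' in its property context
net.link("red", "expresses", "sensation-of-redness")                  # the crucial, still-internal link
net.link("sensation-of-redness", "caused-by", "external-red-object")  # posited, not directly accessed

print(net.neighbors("red"))
```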

Now, to be fair, Jahren is not unsympathetic to this view:

... Rapaport’s conception of natural-language understanding does shed some light on how humans work with natural language. For example, my own criterion states that when I use the term ‘alligator’, I should know that it (qua sign) stands for something else, but let us examine the character of my knowledge. The word ‘alligator’ might be connected in my mind to visual images of alligators, like the ones I saw sunning themselves at the Denver Zoo some years ago. But imagine a case where I have no idea what an alligator is but have been instructed to take a message about an alligator from one friend to another. Now the types of representations to which the word ‘alligator’ is connected are vastly different in both cases. In the first, I understand ‘alligator’ to mean the green, toothy beast that was before me; in the second, I understand it to be only something my friends were talking about. But I would submit that the character of the connection is the same: it is only that in the former case there are richer representations of alligators (qua object) for me to connect to the sign ‘alligator’. ... The question ... is whether the computer takes the information it stores in the ... [internal semantic network] to stand for something else. (Jahren 1990: 318–319; cf. Rapaport 1988, n. 16).

Well, the computer does and it doesn’t “take the information it stores ... to stand for something else”. It doesn’t, in the sense that it can’t directly access that something else (any more—or less—than we can). It does, in the sense that it assumes that there is an external world. But note that if it represents the external world internally, it’s doing so via more nodes! There’s no escaping our internal, first-person experience of the world. As Kant might have put it, there’s no escape from phenomena, no direct access to noumena.

10.4

I have been avoiding the issue of consciousness and what it “feels like” to understand or to think (though I have something to say about part of that problem in Rapaport 2005a). But let me make one observation here, in response to Jahren’s description of how we can experience what it is like to be the machine: “in accordance with the Thesis of Functional Equivalence one can be the machine in the only theoretically relevant sense if one performs the same information processing that the machine does” (Jahren 1990: 321). That is, to see if a machine that passes the Turing Test is conscious, we would need to be the machine, and, to do that, all we have to do is behave as it does. But just “being” the machine (or the “other mind”) isn’t sufficient—one would also have to be oneself at the same time, in order to compare the two experiences. This seems to be at the core of Searle’s Chinese-Room Argument—he tries to be himself and the computer simultaneously (cf. Cole 1991; Rapaport 1990; Copeland 1993). But he can’t use his own experiences (or lack of them) to experience his own-qua-computer experiences (or lack of them). That’s like my sticking a pin into you and, failing to feel pain, claiming that you don’t, either. It is also like my making believe I’m you, sticking a pin into me-qua-you, feeling pain, and concluding that so do you. Either one “is” both cognitive agents at the same time, in which case there is no way to distinguish one from the other—the experiences of the one are the experiences of the other—or else one is somehow able to separate the two, in which case there is no way for either to know what it is like to be the other. Note, finally, that what holds for me (or Searle) imitating a computer holds for a computer as well: Assume that we are conscious, and let a computer simulate us; could the computer determine whether our consciousness matched its? I doubt it.

10.5

Let’s return to the syntactic understanding of Searle-in-the-room. Jahren says that Searle-in-the-room does not understand Chinese “because ... [he] cannot distinguish between categories. If everything is in Chinese, how is he to know when something is a proper name, when it is a property, or when it is a class or subclass?” (Jahren 1990: 322). I take it that Jahren is concerned with how Searle-in-the-room can decide of a given input expression whether it is a name, or a noun for a property, or a noun for a class or subclass. In terms of a computational cognitive agent (such as Cassie, discussed in Rapaport 2006), this is the question of how she would “know” that ‘Lucy’ in ‘Lucy is rich’ is a proper name (in SNePS terms, how she would “decide” whether to build an object-propername case frame or some other case frame) or of how she “knows” that ‘rich’ expresses a property rather than a class (how she “decides” whether to build an object-property case frame rather than a member-class case frame; see Rapaport 2006 for details on these SNePS semantic network notions).

In one sense, the answer is straightforward: In Cassie’s case, an augmented-transition-network parsing grammar “tells” her. And how does the augmented transition network “know”? Well, of course, we programmed it to know. But in a more realistic case, Cassie would learn her grammar, with some “innate” help, just as we would. In that case, what the arc labels are is absolutely irrelevant. For us programmers, it’s convenient to label them with terms that we understand. But Cassie has no access to those labels. So, in another sense, she does not know, de dicto, whether a term is a proper name or expresses a property rather than a class. Only if there were a node labeled ‘proper name’ and appropriately linked to other nodes in such a way that a dictionary definition of ‘proper name’ could be inferred would Cassie know de dicto the linguistic category of a term. Would she know that something was a proper name in our sense of ‘proper name’? Only if she had a conversation with us and was able to conclude something like, “Oh—what you call a ‘proper name’, I call a___”, where the blank is filled in with the appropriate node label.
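To make the irrelevance of the labels concrete, here is a hypothetical sketch in Python (the frame below is a stand-in of my own devising, not an actual SNePS case frame or augmented-transition-network grammar): replacing every arc label with an arbitrary, meaningless token leaves the structure, and hence whatever understanding that structure supports, unchanged, provided the replacement is made uniformly.

```python
import itertools

# A stand-in for the representation a parser might build for "Lucy is rich":
# an object-property-style frame, with labels chosen for *our* convenience.
frame = {"object": "Lucy", "property": "rich"}

# Cassie has no access to the labels themselves; only the structure matters.
# Replace each label by an arbitrary, meaningless token:
gensym = (f"arc{n}" for n in itertools.count())
relabel = {label: next(gensym) for label in frame}
relabeled_frame = {relabel[label]: value for label, value in frame.items()}

print(relabeled_frame)   # e.g. {'arc0': 'Lucy', 'arc1': 'rich'}
# The relabeled frame supports exactly the same inferences, so long as the
# relabeling is applied uniformly across the whole network.
```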

This is simply the point that native speakers of a language don’t have to explicitly understand its grammar in order to understand the language. I once asked (in French) a native French-speaking clerk in a store in France whether a certain noun was masculine or feminine, so that I would know whether to use ‘le’ or ‘la’; the clerk had no idea what I was talking about, but she did volunteer that one said ‘le portefeuille’, not ‘la portefeuille’.

Jahren “argue[s] that Searle-in-the-room cannot interpret any of the Chinese terms in the way he understands English terms” (Jahren 1990: 323). But insofar as Searle-in-the-room is understanding Chinese, he is not understanding English. Neither does Cassie, strictly speaking, understand SNePS networks; rather, she understands natural language, and she uses SNePS networks to do so. Just as a native speaker of English would explicitly understand English grammar only if she had studied it formally, so would Cassie only explicitly understand SNePS networks if she were a SNePS programmer (or a computational cognitive scientist). And, even if she were, the networks she would understand wouldn’t be her own—they wouldn’t be the ones she was using in order to understand the ones she was programming. Insofar as Searle-in-the-room does understand English while he is processing Chinese, he could map the Chinese terms onto his English ones, and thus he would understand Chinese in a sense that even Searle-the-author would have to accept.

Cite this article

Rapaport, W.J. Yes, She Was! Minds & Machines 21, 3–17 (2011). https://doi.org/10.1007/s11023-010-9213-z
