How Helen Keller used syntactic semantics to escape from a Chinese Room

  • Original Paper
  • Published in Minds and Machines

Abstract

A computer can come to understand natural language the same way Helen Keller did: by using “syntactic semantics”—a theory of how syntax can suffice for semantics, i.e., how semantics for natural language can be provided by means of computational symbol manipulation. This essay considers real-life approximations of Chinese Rooms, focusing on Helen Keller’s experiences growing up deaf and blind, locked in a sort of Chinese Room yet learning how to communicate with the outside world. Using the SNePS computational knowledge-representation system, the essay analyzes Keller’s belief that learning that “everything has a name” was the key to her success, enabling her to “partition” her mental concepts into mental representations of: words, objects, and the naming relations between them. It next looks at Herbert Terrace’s theory of naming, which is akin to Keller’s, and which only humans are supposed to be capable of. The essay suggests that computers at least, and perhaps non-human primates, are also capable of this kind of naming.

Figures 1–8 appear in the full article.

Notes

  1. Often, programs are characterized merely in input–output (I/O) terms. I use the phrase “manipulate the input to yield the output” in order to emphasize the algorithmic nature of the program. (The “manipulation”, of course, could be a null operation, in which case the algorithm (or program) would indeed be a mere I/O specification.) However, to be even more accurate, we should say that the program describes how to accept and manipulate the input, since a program is a static (usually textual) object, as opposed to a dynamic “process”. See Sect. ‘The processor’ below.

  2. Albert Goldfain pointed out to me that some of Eliza’s responses are only trivially appropriate (see Weizenbaum, 1966).

  3. On the relation of “symbols” to “marks” (roughly, uninterpreted symbols, for readers who will excuse the apparent oxymoronic nature of that phrase), see Rapaport (1995, 2000).

  4. A problem is AI-complete if a (computational) solution for it requires or produces (computational) solutions to all problems in AI (Shapiro, 1992).

  5. Cf. Richmond Thomason’s (2003) characterization of a computer as a device that “change[s] variable assignments”, i.e., that accepts certain assignments of values to variables as input, changes (i.e., manipulates) them, and then outputs the changed values. (The I/O processes do not actually have to be part of the computer, so-defined.)

  6. Effectors are also provided, to enable Searle-in-the-room or the system to manipulate the environment, though this may be less essential, since almost no one would want to claim that a quadriplegic or a brain-in-a-vat with few or no effectors was not capable of cognition. Cf. Maloney (1987, 1989); Rapaport (1993, 1998, 2000); Anderson (2003, Sect. 5); Chrisley (2003, fn. 25).

  7. Rapaport (1985, 1986b, 1990, 1993, 2003a, 2005b, 2006).

  8. And/or some kind of “dynamic” or “incremental” semantics along the lines of, e.g., Discourse Representation Theory (Kamp & Reyle, 1993) or Dresner’s (2002) algebraic-logic approach.

  9. Such a theory has received a partial computational implementation in our research project on “contextual vocabulary acquisition”: Ehrlich (1995, 2004); Ehrlich and Rapaport (1997, 2004); Rapaport (2003b, 2005a); Rapaport and Ehrlich (2000); Rapaport and Kibby (2002); Kibby, Rapaport, Wieland, & Dechert (forthcoming).

  10. Objections to holistic theories in general are replied to in Rapaport (2002, 2003a).

  11. For other ways of viewing SNePS, see Shapiro and the SNePS Research Group (2006). For details, see, e.g., Shapiro (1979); Shapiro and Rapaport (1987, 1992, 1995); Shapiro (2000); and online at: [http://www.cse.buffalo.edu/sneps] and [http://www.cse.buffalo.edu/∼rapaport/snepskrra.html].

  12. Here, it is represented via a subclass–superclass relationship. There are other ways to represent this in SNePS, e.g., as one of the universally quantified propositions “For all x, if object x has the property of being human, then x is a member of the class mammals” or else “For all x, if x is a member of the class humans, then x is a member of the class mammals”, or in other ways depending on the choice of ontology, which determines the choice of arc labels (or “case frames”). SNePS leaves this up to each user. E.g., an alternative to the lex arc is discussed in Sect. ‘What really happened at the well house?’
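
    The two encodings this footnote describes can be illustrated with a minimal Python sketch (this is not the actual SNePS implementation; the `Node` class and all arc labels are invented for illustration):

```python
# Illustrative Python sketch (not the actual SNePS implementation; node
# and arc labels are invented) of the two encodings in this footnote:
# a direct subclass-superclass proposition vs. a quantified rule.

class Node:
    """A network node; propositional nodes carry labeled arcs to other nodes."""
    _count = 0
    def __init__(self, label=None, **arcs):
        Node._count += 1
        self.label = label or f"m{Node._count}"
        self.arcs = arcs  # arc label -> target node

humans = Node("humans")
mammals = Node("mammals")

# Encoding 1: a subclass-superclass case frame.
m1 = Node(subclass=humans, superclass=mammals)

# Encoding 2: a rule node, "for all x, if x is a member of the class
# humans, then x is a member of the class mammals".
x = Node("x")
m2 = Node(forall=x,
          ant=Node(member=x, cls=humans),
          cq=Node(member=x, cls=mammals))

assert m1.arcs["superclass"] is mammals
assert m2.arcs["ant"].arcs["cls"] is humans
```

    Either encoding is available to a user; as the footnote says, the choice of arc labels (case frames) is an ontological decision SNePS leaves open.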

  13. The labels on the nodes at the heads of the lex arcs are arbitrary. For expository convenience, I use English plural nouns. But they could just as well have been singular nouns or even arbitrary symbols (e.g., “b1”, “b2”). The important points are that the nodes (1) represent lexical expressions in some language and (2) are “aligned” (in a technical sense; see Shapiro & Ismail, 2003) with entries in a lexicon. E.g., had we used “b1” instead of “humans”, the lexical entry for b1 could indicate that its morphological “root” is ‘human’, and an English morphological synthesizer could contain information about how to modify that root in various contexts. See Sect. ‘A SNePS analysis of what Keller learned: preliminaries’, n. 42.

  14. On the notion of “structural”, as opposed to “assertional”, characterizations, see Woods (1975); Shapiro and Rapaport (1987, 1991).

  15. For further discussion, see Shapiro and Rapaport (1987) and Sect. ‘What really happened at the well house?’, below.

  16. For more information on Cassie, see Shapiro and Rapaport (1987, 1991, 1992, 1995); Shapiro (1989, 1998); Rapaport, Shapiro, and Wiebe (1997); Rapaport (1991a, 1998, 2000, 2002, 2003a); Ismail and Shapiro (2000); Shapiro, Ismail, and Santore (2000); Shapiro and Ismail (2003); Santore and Shapiro (2004).

  17. SNePS only has function symbols, no predicates. All well-formed formulas are terms; none are sentences in the usual sense, although some terms can be “asserted”, meaning that Cassie (or the system) treats them as (true) sentences.
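
    The term-only design can be sketched in Python (illustrative only; the `Term` and `Agent` classes below are assumptions of this sketch, not the SNePS API):

```python
# Sketch of the design described here: every well-formed expression is a
# term; "assertion" is a status the agent attaches to a propositional
# term, not a separate syntactic category of sentence. Names invented.

class Term:
    def __init__(self, functor, *args):
        self.functor, self.args = functor, args
    def __repr__(self):
        if not self.args:
            return self.functor
        return f"{self.functor}({', '.join(map(repr, self.args))})"

class Agent:
    def __init__(self):
        self.asserted = set()  # the terms this agent treats as true
    def assert_term(self, term):
        self.asserted.add(term)
    def believes(self, term):
        return term in self.asserted

cassie = Agent()
water, wet = Term("water"), Term("wet")
prop = Term("Is", water, wet)  # still just a term...
cassie.assert_term(prop)       # ...until it is asserted

assert cassie.believes(prop)
assert not cassie.believes(water)  # unasserted terms carry no truth claim
```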

  18. I use double-brackets (“[[ ]]”) to denote a function that takes as input the symbol inside the brackets and returns a meaning for it.

  19. The logic underlying SNePS is not that of ordinary predicate logic! It is a paraconsistent relevance logic with an automatic belief-revision system that can allow for any believed (i.e., asserted) proposition to be withdrawn (i.e., unasserted), along with any of its inferred consequents, in the light of later beliefs. See Shapiro and Rapaport (1987), Martins and Shapiro (1988), Shapiro (2000).

  20. Cf. Meinong (1904); Rapaport (1976, 1978, 1981, 1985/1986, 1991b).

  21. As Goldfain (personal communication, 2006) put it: it can understand with the information of the Semantic Web.

  22. Would I be able to do so uniquely or “correctly”? Or would the theoretical existence of an infinite number of models for any formal theory mean that the best I might be able to hope for is an understanding that would be unique (or “correct”) “up to isomorphism”? Or (following Quine, 1960, 1969) might the best I could hope for be something like equivalent but inconsistent translation manuals? John McCarthy (personal communication, April 2006) thinks that I would eventually come to a unique or correct understanding. These important issues are beyond the scope of this essay. For present purposes, it suffices that the understander be able to make some sense out of the networks, even if it is not the intended one, as long as the understanding is consistent with all the given data and modifiable (or “correctable”) in the light of further evidence (Rapaport, 2005a; Rapaport & Ehrlich, 2000).

  23. A similar observation has been made by Justin Leiber (1996, esp. p. 435):

    [T]he suspicion that Keller did not live in the real world, could not mean what she said, and was a sort of symbol-crunching language machine ... suggests a prejudice against the Turing test so extreme that it carries the day even when the Turing test passer has a human brain and body, and the passer does not pass as simply human but as a bright, witty, multilingual product of a most prestigious university, and professional writer about a variety of topics.

    For readers unfamiliar with Helen Keller (1880–1968), she was blind and deaf because of a childhood illness (see below), yet graduated from Radcliffe College of Harvard University, wrote three autobiographies, and delivered many public lectures on women’s rights, pacifism, and helping the blind and deaf.

  24. When I was very young, I heard what I took to be a single word that my parents always used when paraphrasing something: ‘inotherwords’. (At the time, I had no idea how to spell it.) It took me several hearings before I understood that this was really the three-word phrase “in + other + words”. Similarly, from Keller’s point of view, her finger spellings were not letters + for + dolls, but an unanalyzed “lettersfordolls”.

  25. Keller referred to a “well”-house, whereas Sullivan referred to a “pump”-house. Keller’s house is now a museum; their website (helenkellerbirthplace.org) also calls it a “pump”. I use Keller’s term in this essay.

  26. On hands and cognition, cf. Papineau’s (1998) review of Wilson (1998).

  27. Keller had been accused of plagiarism, when, in fact, it is possible that she had merely made a grievous use-mention confusion, viz., not having learned how to use quotation marks; cf. Keller (1905).

  28. As Goldfain pointed out to me, ‘partitioned’ needs scare quotes, because I don’t want to imply an empty intersection between the syntactic domain and the semantic domain. Cf. Sect. ‘Thesis 1’, above.

  29. Trusting a third-person over a first-person viewpoint is consistent with trusting the native Chinese speaker’s viewpoint over Searle-in-the-room’s (Rapaport, 2000). There are also Keller’s somewhat more contemporaneous letters, but these—while intrinsically interesting, and exhibiting (especially in the early ones) her gradual mastery of language—do not contain much information on how she learned language. There are also Sullivan’s speeches and reports. Although they contain some useful information and some valuable insights—especially into the nature of teaching—they, like Keller’s autobiography, were written ex post facto; cf. Macy, in Keller (1905, p. 278).

  30. This has an overtone of holistic reinterpretation (Rapaport, 1995, Sect. 2.6.2): we understand the present in terms of all that has gone before, and the past in terms of all that has come after.

  31. Such page citations are to Keller (1905) unless otherwise indicated.

  32. I.e., at age 1 year, 2 months, 21 days.

  33. Cf. what I have called the “miracle of reading”: “When we read, we seemingly just stare at a bunch of arcane marks on paper, yet we thereby magically come to know of events elsewhere in (or out of!) space and time” (Rapaport, 2003a, Sect. 6; a typo in the original is here corrected). Clifton Fadiman once observed that

    [W]hen I opened and read the first page of a book for the first time, I felt that this was remarkable: that I could learn something very quickly that I could not have learned any other way .... [I] grew bug-eyed over the miracle of language ... [viz.,] decoding the black squiggles on white paper. (Quoted in Severo, 1999.)

    Hofstadter (2001, p. 525) makes similar observations.

  34. Leiber (1996) also notes that Keller had linguistic knowledge and abilities both before and immediately after her illness.

  35. In an earlier autobiography, Keller also called this a ‘mug’/‘milk’ confusion (p. 364). And in Sullivan’s description of the well-house episode (see Sect. ‘Epiphany’, below), she describes “w-a-t-e-r” as a “new word” for Keller (p. 257).

  36. “Doll, mug, pin, key, dog, hat, cup, box, water, milk, candy, eye (x), finger (x), toe (x), head (x), cake, baby, mother, sit, stand, walk. ... knife, fork, spoon, saucer, tea, paper, bed, and ... run” (p. 256). “Those with a cross after them are words she asked for herself” (p. 256).

  37. Of course, ‘name’ (or ‘word’) might be overly simplistic. A simple (to us) finger-spelled “name” might be interpreted as a full sentence: possibly, ‘d-o-l-l’ means “Please give me my doll.” Cf. “Please machine give cash” as the meaning of pushing a button on a cash machine; see Sect. ‘T-naming’.

  38. As David Wilkins pointed out to me.

  39. This has been called an expression–expressed case frame (Neal & Shapiro, 1987; Shapiro et al., 1996) or a word–object case frame. I will use ‘name–object’ instead of ‘word–object’ for consistency with Keller’s and Sullivan’s terminology. The replacement of lex arcs with a name–object case frame is not essential to my point, but makes the exposition clearer.

  40. More precisely, o is the concept associated with (or expressed by) the lexical entry aligned with n. See n. 13.

  41. Sellars (1963, p. 282); Allaire (1963, 1965); Chappell (1964); Baker (1967).

  42. Instead of using “base-node” labels b1, b2, etc., we could have used mnemonic (for us) English words, like humans1 and mammals1, as is often done in AI. For present purposes, as well as for McDermott-1981-like reasons, these would be needlessly confusing.

  43. Fig. 4 can be created, using SNePSLOG, by defining: define-frame Is (nil object property), where [[Is(o,p)]] = object o has property p, and then asserting: ISA-thing-named(b3,water). ISA-thing-named(b4,wet). Is(b3,b4).
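
    A rough Python stand-in for that SNePSLOG session (the frame names come from the footnote; the simple tuple store is an assumption of this sketch, not the SNePSLOG engine):

```python
# Rough Python stand-in for the SNePSLOG session above (frame names from
# the footnote; the tuple store itself is an assumption of this sketch).
# [[Is(o, p)]] = object o has property p; ISA-thing-named(b, w) says
# that base node b is a thing named w.

kb = set()

def assert_frame(frame, *slots):
    kb.add((frame,) + slots)

assert_frame("ISA-thing-named", "b3", "water")
assert_frame("ISA-thing-named", "b4", "wet")
assert_frame("Is", "b3", "b4")

# Jointly, the three assertions say: the thing named 'water'
# has the property named 'wet'.
named = {w: b for f, b, w in kb if f == "ISA-thing-named"}
assert ("Is", named["water"], named["wet"]) in kb
```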

  44. Briefly, this case frame is the SNePS analogue of several English possessive constructions ('s, of, etc.), neutral as to whether the possession is that of whole-to-part, ownership, kinship, etc. E.g., “Frank is Bill’s father” might be represented as: Possession(Frank, Bill, father) , understood as expressing the relationship that Frank is the father of Bill. If, in general, Possession( o,p,r ) , it might follow that o is an r—e.g., Frank is a father—but only because, in this case, ∃ p[Frank is the father of p]. As Stuart C. Shapiro (personal communication, August 2006) observes, if one man’s meat is another’s poison, it doesn’t follow that the first person’s meat is meat simpliciter, but only meat for him, since it is also poison (and thus not meat) for someone else.

  45. In SNePSLOG, node m13 could be constructed by asserting Possession(name,b5,b5), i.e., [[b5]]’s name is ‘name’. This is the base case of a recursion when rule (1) of Sect. ‘SNePS analysis of learning to name’, below, is applied to m10. (The first consequent of that rule would not build a new node (because of the Uniqueness Principle; Shapiro, 1986); rather, it would return the already-existing node m10. The second consequent builds m13.)

  46. A relevant (humorous) take on the nature of conversation appeared in a New Yorker cartoon showing a man, two women, and a chimp, all dressed in suits and ties, sitting in a bar at a table with drinks; the chimp thinks, “Conversation—what a concept!”.

  47. Ann Deakin pointed out to me that color is not a good example for Helen Keller! Perhaps taste or smell would be better? On the other hand, for Cassie, color might be more accessible than taste or smell (cf. Lammens, 1994)!

  48. Lines beginning with semicolons are comments.

  49. Note that, arguably, it does not necessarily also follow that y is a name simpliciter; see n. 44. A similar rule appears in Shapiro (2003); cf. Shapiro and Ismail (2003).
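
    The asymmetry noted here—from “x is named y” one may infer that y is x’s name, but not that y is a name simpliciter—can be sketched as a forward-chaining rule in Python (relation names are invented for illustration; this is not the SNePS rule engine):

```python
# Sketch of the asymmetry in note 49 (relation names invented): from
# "x is named y" a forward rule infers "y is x's name" -- following the
# Possession(o, p, r) reading of note 44, Possession(y, x, "name") says
# y is the name of x -- but deliberately does NOT infer that y is a
# name simpliciter.

kb = set()

def assert_prop(prop):
    agenda = {prop}
    while agenda:
        q = agenda.pop()
        if q in kb:
            continue  # Uniqueness Principle: never rebuild an existing node
        kb.add(q)
        if q[0] == "named":  # ("named", x, y): x is named y
            _, x, y = q
            agenda.add(("Possession", y, x, "name"))  # y is x's name

assert_prop(("named", "b3", "water"))
assert ("Possession", "water", "b3", "name") in kb
assert ("is-a", "water", "name") not in kb  # no 'name simpliciter'
```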

  50. Here, it arguably does make sense to say that y is a property simpliciter, not merely that it is x’s property; see n. 49.

  51. Allen and Perrault (1980); Bruce (1975); Cohen and Levesque (1985, 1990); Cohen and Perrault (1979); Grosz and Sidner (1986); Haller (1994, 1995).

  52. Perhaps like that between the node labeled humans and node m1 in Fig. 1 or between the node labeled humans and node b1 in Fig. 3. (The latter associative link is represented by node m4.)

  53. Oscar is the Other SNePS Cognitive Agent Representation, first introduced in Rapaport, Shapiro, and Wiebe (1986).

  54. “Hob thinks a witch has blighted Bob’s mare, and Nob wonders whether she (the same witch) killed Cob’s sow” (Geach, 1967, p. 628).

  55. I.e., the intention to communicate should be one of the features of computational NL understanding and generation in addition to those I cited in Rapaport (2000, Sect. 8). There, I said that a computational cognitive agent must be able to “take discourse (not just individual sentences) as input; understand all input, grammatical or not; perform inference and revise beliefs; make plans (including planning speech acts for NL generation, planning for asking and answering questions, and planning to initiate conversations); understand plans (including the speech-act plans of interlocutors); construct a “user model” of its interlocutor; learn (about the world and about language); have lots of knowledge (background knowledge; world knowledge; commonsense knowledge; and practical, “how-to”, knowledge ... and remember what it heard before, what it learns, what it infers, and what beliefs it revised .... And it must have effector organs to be able to generate language. In short, it must have a mind.”

  56. SNePS pic arcs, like lex arcs, point to SNePS nodes representing pictorial images (Rapaport, 1988b; Srihari, 1991a, b, 1993, 1994; Srihari & Rapaport, 1989, 1990). Also cf. anchoring or “alignment”; Shapiro and Ismail (2003).

  57. Note that the predicate “Equivalent” is defined in terms of a single arc (“equiv”) that can point to a set of nodes; this has the same effect as having a set of arcs with the same label, each of which points to a node. See Maida and Shapiro (1982); Shapiro and Rapaport (1987); Rapaport, Shapiro, and Wiebe (1997) for further discussion of the SNePS notion of equivalence.
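
    The point can be illustrated with a small Python sketch (illustrative data structures only, not the SNePS implementation):

```python
# Sketch of note 57's point (illustrative data structures, not the
# SNePS implementation): one "equiv" arc whose target is a *set* of
# nodes carries the same information as several same-labeled arcs,
# each pointing to a single node.

def equivalent(*nodes):
    # a single arc pointing to the set of co-referential nodes
    return ("Equivalent", frozenset(nodes))

one_arc = equivalent("b1", "b2", "b3")

# ...equivalently, a set of single-target arcs with the same label:
many_arcs = {("equiv", n) for n in ("b1", "b2", "b3")}

assert one_arc[1] == frozenset(n for _, n in many_arcs)
```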

  58. In Shapiro (1981), SNePS asks questions as a result of back-chaining.

  59. I consider other aspects of Bruner’s book in Rapaport (2003a, Sect. 8). On the role of deixis in natural-language understanding, cf. Bruder et al. (1986); Rapaport, Segal, Shapiro, Zubin, Bruder, Duchan, Almeida et al. (1989); Rapaport, Segal, Shapiro, Zubin, Bruder, Duchan and Mark (1989); Duchan, Bruder, and Hewitt (1995).

  60. Actually, as we saw, there was a mug in the water hand, but it seems to have been ignored. Cf. Sect. ‘Epiphany’, observation 3, above.

References

  • Allaire, E. B. (1963). Bare particulars. Philosophical Studies, 14; reprinted in Loux 1970, 235–244.

  • Allaire, E. B. (1965). Another look at bare particulars. Philosophical Studies, 16; reprinted in Loux 1970, 250–257.

  • Allen, J. F., & Perrault, C. R. (1980). Analyzing intentions in utterance. Artificial Intelligence, 15, 143–178.

  • Anderson, M. L. (2003). Embodied cognition: A field guide. Artificial Intelligence, 149, 91–130

  • Arahi, K., & Momouchi, Y. (1990). Learning of semantic concept in copular sentence (in Japanese), IPSJ SIG Reports, 90(77).

  • Arrighi, C., & Ferrario, R. (2005). The dynamic nature of meaning. In L. Magnani & R. Dossena (Eds.), Computing, philosophy, and cognition (pp. 295–312). London: [King’s] College Publications.

  • Baker, R. (1967). Particulars: bare, naked, and nude. Noûs, 1, 211–212.

  • Berners-Lee, T., & Fischetti, M. (1999). Weaving the web. New York: HarperCollins.

  • Berners-Lee, T., Hendler, J., & Lassila, O. (2001). The Semantic Web. Scientific American (17 May).

  • Brown, R. (1973). A first language. Cambridge, MA: Harvard Univ. Press.

  • Bruce, B. C. (1975). Generation as a social action. Theoretical issues in natural language processing-I (pp. 64–67). Morristown, NJ: Assoc. Comp. Ling.

  • Bruder, G. A., Duchan, J. F., Rapaport, W. J., Segal, E. M., Shapiro, S. C., & Zubin, D. A. (1986). Deictic centers in narrative. Tech. Rep. 86–20. Buffalo: SUNY Buffalo Dept. Comp. Sci.

  • Bruner, J. (1983). Child’s talk. New York: Norton.

  • Castañeda, H.-N. (1980). On philosophical method. Bloomington, IN: Noûs Publications.

  • Castañeda, H.-N. (1984). Philosophical refutations. In J. H. Fetzer (Ed.), Principles of philosophical reasoning (pp. 227–258). Totawa NJ: Rowman & Allenheld.

  • Ceusters, W. (2005, October 27). Ontology: The need for international coordination [http://www.ncor.buffalo.edu/inaugural/ppt/ceusters.ppt]

  • Chappell, V. C. (1964). Particulars re-clothed. Philosophical Studies, 15; reprinted in Loux 1970, 245–249.

  • Chrisley, R. (2003). Embodied artificial intelligence. Artificial Intelligence, 149, 131–150.

  • Chun, S. A. (1987). SNePS implementation of possessive phrases. SNeRG Tech. Note 19 (Buffalo: SUNY Buffalo Dept. Comp. Sci.) [http://www.cse.buffalo.edu/sneps/Bibliography/chun87.pdf]

  • Clark, A., & Chalmers, D. J. (1998). The extended mind. Analysis, 58, 10–23.

  • Cohen, P. R., & Levesque, H. J. (1985). Speech acts and rationality. Proc. 23rd Annual Meeting, Assoc. Comp. Ling. (pp. 49–60). Morristown, NJ: Assoc. Comp. Ling.

  • Cohen, P. R., & Levesque, H. J. (1990). Rational interaction as the basis for communication. In P. R. Cohen, J. Morgan, & M. E. Pollack (Eds.), Intentions in communication (pp. 221–256). Cambridge, MA: MIT Press.

  • Cohen, P. R., & Perrault, C. R. (1979). Elements of a plan-based theory of speech acts. Cognitive Science, 3, 177–212

  • Damasio, A. R. (1989). Concepts in the brain. In Forum: What is a concept? Mind and Language, 4, 24–27.

  • Davidson, D. (1967). The logical form of action sentences. In N. Rescher (Ed.), The logic of decision and action. Pittsburgh: Univ. Pittsburgh Press.

  • Dehaene, S. (1992). Varieties of numerical abilities. Cognition, 44, 1–42.

  • Dresner, E. (2002). Holism, language acquisition, and algebraic logic. Linguistics and Philosophy, 25, 419–452.

  • Duchan, J. F., Bruder, G. A., & Hewitt, L. E. (Eds.). (1995), Deixis in narrative. Hillsdale, NJ: Erlbaum.

  • Ehrlich, K. (1995). Automatic vocabulary expansion through narrative context. Tech. Rep. 95-09. Buffalo: SUNY Buffalo Dept. Comp. Sci.

  • Ehrlich, K. (2004). Default reasoning using monotonic logic. Proc., 15th Midwest Artif. Intel. & Cog. Sci. Conf. (pp. 4–54) [http://www.cse.buffalo.edu/∼rapaport/CVA/ehrlich-maics-sub.pdf]

  • Ehrlich, K., & Rapaport, W. J. (1997). A computational theory of vocabulary expansion. Proc. 19th Annual Conf., Cog. Sci. Soc. (pp. 205–210). Mahwah, NJ: Erlbaum.

  • Ehrlich, K., & Rapaport, W. J. (2004). A cycle of learning. Proc., 26th Annual Conf., Cog. Sci. Soc. Mahwah, NJ: Erlbaum, 2005, p. 1555.

  • Elgin, S. H. (1984). Native tongue. New York: DAW.

  • Fitch, W. T. (2006). Hypothetically speaking. American Scientist (July–August), 369–370.

  • Galbraith, M., & Rapaport, W. J. (Eds.). (1995). Where does I come from? Special Issue on subjectivity and the debate over computational cognitive science. Minds and Machines, 5, 513–620.

  • Geach, P. T. (1967). Intentional identity. The Journal of Philosophy, 64, 627–632.

  • Giere, R. N. (2002) Distributed cognition in epistemic cultures. Phil. Sci., 69, 637–644.

  • Goldfain, A. (2004). Using SNePS for mathematical cognition. [http://www.cse.buffalo.edu/∼ag33]

  • Goldfain, A. (2006). A computational theory of early mathematical cognition. [http://www.cse.buffalo.edu/∼ag33]

  • Greenfield, P. M., & Savage-Rumbaugh, E. S. (1990). Grammatical Combination in Pan Paniscus, in Parker & Gibson 1990 (pp. 540–577).

  • Grosz, B. J., & Sidner, C. L. (1986) Attention, intentions, and the structure of discourse. Computational Linguistics, 12, 175–204.

  • Haller, S. M. (1994). Interactive generation of plan descriptions and justifications. Tech. Rep. 94-40. Buffalo, NY: SUNY Buffalo Dept. Comp. Sci.

  • Haller, S. M. (1995). Planning text for interactive plan explanations. In E. A. Yfantis (Ed.), Intelligent systems (pp. 61–67). Dordrecht: Kluwer.

  • Harel, G. (1998). Two dual assertions. The American Mathematical Monthly, 105, 497–507.

  • Harman, G. (1987). (Nonsolipsistic) Conceptual role semantics. In E. Lepore (Ed.), New directions in semantics (pp. 55–81). London: Academic.

  • Harnad, S. (1990) The symbol grounding problem. Physica D, 42, 335–346.

  • Hofstadter, D. R. (2001). Analogy as the core of cognition. In D. Gentner et al. (Eds.), The analogical mind (pp 499–538). Cambridge, MA: MIT Press.

  • Hutchins, E. (1995a) Cognition in the wild. Cambridge, MA: MIT Press.

  • Hutchins, E. (1995b). How a cockpit remembers its speeds. Cog. Sci., 19, 265–288.

  • Ismail, H. O., & Shapiro, S. C. (2000). Two problems with reasoning and acting in time. In A. G. Cohn et al. (Eds.), Principles of knowledge representation and reasoning: Proc., 7th Int’l. Conf. (pp. 355–365). San Francisco: Morgan Kaufmann.

  • Iwańska, Ł. M., & Shapiro, S. C. (Eds.). (2000). Natural language processing and knowledge representation. Menlo Park CA/Cambridge MA: AAAI Press/MIT Press.

  • Jackendoff, R. (2002). Foundations of language. Oxford: Oxford Univ. Press.

  • Kalderon, M. E. (2001). Reasoning and representing. Philosophical Studies, 105, 129–160.

  • Kamp, H., & Reyle, U. (1993). From discourse to logic. Dordrecht: Kluwer.

  • Keller, H. (1903). Optimism. New York: Crowell.

  • Keller, H. (1905). The story of my life. Garden City, NY: Doubleday (1954).

  • Kibby, M. W., Rapaport, W. J., Wieland, K. M., & Dechert, D. A. (in press), CSI: Contextual semantic investigation for word meaning. In L. A. Baines (Ed.), Multisensory learning. Alexandria, VA: Association for Supervision and Curriculum Development [http://www.cse.buffalo.edu/∼rapaport/CVA/CSI.pdf]

  • Lammens, J. (1994). A computational model of color perception and color naming. Tech. Rep. 94-26. Buffalo: SUNY Buffalo Dept. Comp. Sci. [http://www.cse.buffalo.edu/sneps/Bibliography/lammens94.pdf]

  • Leiber, J. (1996). Helen Keller as cognitive scientist. Philosophical Psychology, 9, 419–440.

  • Loux, M. J. (Ed.). (1970). Universals and particulars. Garden City, NY: Anchor.

  • Maida, A. S., & Shapiro, S. C. (1982). Intensional concepts in propositional semantic networks. Cognitive Science, 6, 291–330.

  • Maloney, J. C. (1987). The right stuff. Synthese, 70, 349–372.

  • Maloney, J. C. (1989). The mundane matter of the mental language. Cambridge, UK: Cambridge Univ. Press.

  • Martins, J., & Shapiro, S. C. (1988). A model for belief revision. Artificial Intelligence, 35, 25–79.

  • Mayes, A. R. (1991). Review of H. Damasio & A. R. Damasio, Lesion analysis in neuropsychology (inter alia). British Journal of Psychology, 2, 109–112.

  • McDermott, D. (1981). Artificial intelligence meets natural stupidity. In J. Haugeland (Ed.), Mind design (pp. 143–160). Cambridge, MA: MIT Press.

  • Meinong, A. (1904). Über Gegenstandstheorie. In R. Haller (Ed.), Alexius Meinong Gesamtausgabe (Vol II. pp. 481–535). Graz, Austria: Akademische Druck- u. Verlagsanstalt (1971).

  • Miles, H. L. W. (1990). The cognitive foundations for reference in a Signing Orangutan, in Parker & Gibson 1990 (pp. 511–539).

  • Morris, C. (1938). Foundations of the theory of signs. Chicago: Univ. Chicago Press.

  • Nagel, T. (1986). The view from nowhere. New York: Oxford Univ. Press.

  • Neal, J. G., & Shapiro, S. C. (1987). Knowledge-based Parsing. In L. Bolc (Ed.), Natural language parsing systems (pp. 49–92). Berlin: Springer-Verlag.

  • Papineau, D. (1998). “Get a Grip”, review of Wilson 1998. New York Times Book Review (19 July), p. 9.

  • Parker, S. T., & Gibson, K. R. (Eds.). (1990). “Language” and intellect in monkeys and apes. Cambridge, UK: Cambridge Univ. Press.

  • Parnas, D. L. (1972). A technique for software module specification with examples. Communication of the Association for Computing Machinery, 15, 330–336.

  • Preston, J., & Bishop, M. (Eds.). (2002). Views into the Chinese Room. Oxford: Oxford Univ. Press.

  • Proudfoot, D. (2002). Wittgenstein’s anticipation of the Chinese Room, in Preston & Bishop 2002 (pp. 167–180).

  • Putnam, H. (1975). The meaning of ‘meaning’. Reprinted in Mind, language and reality (pp. 215–271). Cambridge, UK: Cambridge Univ. Press.

  • Quine, W. V. O. (1953). “On what there is” and “Logic and the reification of universals”. In From a logical point of view (Chs. I, VI). New York: Harper & Row.

  • Quine, W. V. O. (1960). Word and object. Cambridge, MA: MIT Press.

  • Quine, W. V. O. (1969). Ontological relativity. In W. V. O. Quine (Ed.), Ontological relativity and other essays (pp. 26–68). New York: Columbia Univ. Press.

  • Rapaport, W. J. (1976), Intentionality and the structure of existence. Ph.D. diss. Bloomington: Indiana Univ. Dept. Phil.

  • Rapaport, W. J. (1978). Meinongian theories and a Russellian paradox. Noûs, 12, 153–180; errata, Noûs, 13 (1979) 125.

  • Rapaport, W. J. (1981). How to make the world fit our language. Grazer Philosophische Studien, 14, 1–21.

  • Rapaport, W. J. (1982). Unsolvable problems and philosophical progress. American Philosophical Quarterly, 19, 289–298.

  • Rapaport, W. J. (1985). Machine understanding and data abstraction in Searle’s Chinese Room. Proc., 7th Annual Meeting, Cog. Sci. Soc. (pp. 341–345) Hillsdale, NJ: Erlbaum.

  • Rapaport, W. J. (1985/1986). Non-existent objects and epistemological ontology. Grazer Philosophische Studien 25/26, 61–95.

  • Rapaport, W. J. (1986a). Logical foundations for belief representation. Cognitive Science, 10, 371–422.

  • Rapaport, W. J. (1986b). Searle’s experiments with thought. Philosophy of Science, 53, 271–279.

  • Rapaport, W. J. (1988a). To think or not to think. Noûs, 22, 585–609.

  • Rapaport, W. J. (1988b). Syntactic semantics. In J. H. Fetzer (Ed.), Aspects of artificial intelligence (pp. 1–131). Dordrecht: Kluwer.

  • Rapaport, W. J. (1990). Computer processes and virtual persons. Tech. Rep. 90-13 (Buffalo: SUNY Buffalo Dept. Comp. Sci., May 1990) [http://www.cse.buffalo.edu/∼rapaport/Papers/cole.tr.17my90.pdf]

  • Rapaport, W. J. (1991a). Predication, fiction, and artificial intelligence. Topoi, 10, 79–111.

  • Rapaport, W. J. (1991b). Meinong, Alexius I: Meinongian semantics. In H. Burkhardt, & B. Smith (Eds.), Handbook of metaphysics and ontology (pp. 516–519). Munich: Phil Verlag.

  • Rapaport, W. J. (1993). Because mere calculating isn’t thinking. Minds & Machines, 3, 11–20.

  • Rapaport, W. J. (1995). Understanding understanding. In J. E. Tomberlin (Ed.), Philosophical perspectives: AI, connectionism, and philosophical psychology (Vol. 9, pp. 49–88). Atascadero, CA: Ridgeview.

  • Rapaport, W. J. (1996). Understanding understanding. Tech. Rep. 96-26. Buffalo: SUNY Buffalo Dept. Comp. Sci. [http://www.cse.buffalo.edu/tech-reports/96-26.ps]

  • Rapaport, W. J. (1998). How minds can be computational systems. JETAI, 10, 403–419.

  • Rapaport, W. J. (1999). Implementation is semantic interpretation. Monist, 82, 109–130.

  • Rapaport, W. J. (2000). How to pass a Turing test. Journal of Logic Language and Information, 9, 467–490.

  • Rapaport, W. J. (2002). Holism, conceptual-role semantics, and syntactic semantics. Minds & Machines, 12, 3–59.

  • Rapaport, W. J. (2003a). What did you mean by that? Minds & Machines, 13, 397–427.

  • Rapaport, W. J. (2003b). What is the ‘context’ for contextual vocabulary acquisition? In P. P. Slezak (Ed.), Proc., 4th Int’l. Conf. Cog. Sci./7th Australasian Soc. Cog. Sci. Conf (Vol. 2, pp. 547–552). Sydney: Univ. New South Wales.

  • Rapaport, W. J. (2005a). In defense of contextual vocabulary acquisition. In A. Dey et al. (Eds.), Proc., 5th Int’l. & Interdisc. Conf., Modeling and Using Context (pp. 396–409). Berlin: Springer-Verlag Lecture Notes in AI 3554.

  • Rapaport, W. J. (2005b). Implementation is semantic interpretation: further thoughts. JETAI, 17, 385–417.

  • Rapaport, W. J. (2005c). Review of Shieber 2004. Computational Linguistics, 31, 407–412.

  • Rapaport, W. J. (2005d). The Turing test. In Ency. Lang. & Ling. (2nd ed., Vol. 13, pp. 151–159). Oxford: Elsevier.

  • Rapaport, W. J. (2006). Review of Preston & Bishop 2002. Australasian Journal of Philosophy, 84, 129–133.

  • Rapaport, W. J., & Ehrlich, K. (2000). A computational theory of vocabulary acquisition. In Iwańska & Shapiro 2000 (pp. 347–375).

  • Rapaport, W. J., & Kibby, M. W. (2002). Contextual vocabulary acquisition. In N. Callaos et al. (Eds.), Proc., 6th World Multiconf., Systemics, Cybernetics & Informatics (Vol. 2, pp. 261–266). Orlando: Int’l. Inst. Informatics & Systemics.

  • Rapaport, W. J., Segal, E. M., Shapiro, S. C., Zubin, D. A., Bruder, G. A., Duchan, J. F., Almeida, M. J., Daniels, J. H., Galbraith, M. M., Wiebe, J. M., & Yuhan, A. H. (1989). Deictic centers and the cognitive structure of narrative comprehension. Tech. Rep. 89-01. Buffalo: SUNY Buffalo Dept. Comp. Sci. [http://www.cse.buffalo.edu/~rapaport/Papers/DC.knuf.pdf]

  • Rapaport, W. J., Segal, E. M., Shapiro, S. C., Zubin, D. A., Bruder, G. A., Duchan, J. F., & Mark, D. M. (1989). Cognitive and computer systems for understanding narrative text. Tech. Rep. 89-07. Buffalo: SUNY Buffalo Dept. Comp. Sci.

  • Rapaport, W. J., Shapiro, S. C., & Wiebe, J. M. (1986). Quasi-indicators, knowledge reports, and discourse. Tech. Rep. 86-15. Buffalo: SUNY Buffalo Dept. Comp. Sci; revised version published as Rapaport, Shapiro, and Wiebe (1997).

  • Rapaport, W. J., Shapiro, S. C., & Wiebe, J. M. (1997). Quasi-indexicals and knowledge reports. Cognitive Science, 21, 63–107.

  • Rogers, H. Jr. (1959). The present theory of Turing machine computability. Journal of the Society for Industrial and Applied Mathematics, 7, 114–130.

  • Santore, J. F., & Shapiro, S. C. (2004). Identifying perceptually indistinguishable objects. In S. Coradeschi & A. Saffiotti (Eds.), Anchoring symbols to sensor data (pp. 1–9). Menlo Park, CA: AAAI Press.

  • Searle, J. R. (1980). Minds, brains, and programs. The Behavioral and Brain Sciences, 3, 417–457.

  • Searle, J. R. (1993). The failures of computationalism. Think (Vol. 2 (June), pp. 68–71). Tilburg: Tilburg Univ. Inst. Lang. Tech. & AI.

  • Searle, J. R. (2002). Twenty-one years in the Chinese Room. In Preston & Bishop 2002 (pp. 51–69).

  • Sellars, W. (1963). Science, perception and reality. London: Routledge & Kegan Paul.

  • Severo, R. (1999). Clifton Fadiman, a wordsmith known for his encyclopedic knowledge, is dead at 95. New York Times (21 June): B5.

  • Shapiro, S. C. (1979). The SNePS semantic network processing system. In N. Findler (Ed.), Associative networks (pp. 179–203). New York: Academic.

  • Shapiro, S. C. (1981). COCCI: A deductive semantic network program for solving microbiology unknowns. Tech. Rep. 173. Buffalo: SUNY Buffalo Dept. Comp. Sci.

  • Shapiro, S. C. (1982). Generalized augmented transition network grammars for generation from semantic networks. American Journal of Computational Linguistics, 8, 12–25.

  • Shapiro, S. C. (1986). Symmetric relations, intensional individuals, and variable binding. Proceedings of IEEE, 74, 1354–1363.

  • Shapiro, S. C. (1989). The CASSIE projects. In J. P. Martins & E. M. Morgado (Eds.), EPIA89: 4th Portuguese Conf. AI, Proc. (pp. 362–380). Berlin: Springer-Verlag Lecture Notes in AI 390.

  • Shapiro, S. C. (1992). Artificial intelligence. In S. C. Shapiro (Ed.), Ency. AI (2nd ed., pp. 54–57). New York: Wiley.

  • Shapiro, S. C. (1998). Embodied Cassie. In Cog. Robotics (pp. 136–143). Menlo Park, CA: AAAI Press.

  • Shapiro, S. C. (2000). SNePS: A logic for natural language understanding and commonsense reasoning. In Iwańska & Shapiro 2000 (pp. 175–195).

  • Shapiro, S. C. (2003). FevahrCassie. SNeRG Tech. Note 35 (Buffalo, NY: SUNY Buffalo Dept. Comp. Sci. & Eng’g.) [http://www.cse.buffalo.edu/~shapiro/Papers/buildingFevahrAgents.pdf]

  • Shapiro, S. C., & Ismail, H. O. (2003). Anchoring in a grounded layered architecture with integrated reasoning. Robotics and Autonomous Systems, 43, 97–108.

  • Shapiro, S. C., Ismail, H. O., & Santore, J. F. (2000). Our dinner with Cassie. Working notes for the AAAI 2000 Spring symposium on natural dialogues with practical robotic devices (pp. 57–61). Menlo Park, CA: AAAI Press.

  • Shapiro, S. C., & Rapaport, W. J. (1987). SNePS considered as a fully intensional propositional semantic network. In N. Cercone & G. McCalla (Eds.), The knowledge frontier (pp. 262–315). New York: Springer-Verlag.

  • Shapiro, S. C., & Rapaport, W. J. (1991). Models and minds. In R. Cummins & J. Pollock (Eds.), Philosophy and AI (pp. 215–259). Cambridge, MA: MIT Press.

  • Shapiro, S. C., & Rapaport, W. J. (1992). The SNePS family. Computers & Mathematics with Applications, 23, 243–275.

  • Shapiro, S. C., & Rapaport, W. J. (1995). An introduction to a computational reader of narratives. In Duchan et al. 1995 (pp. 79–105).

  • Shapiro, S. C., Rapaport, W. J., Cho, S.-H., Choi, J., Feit, E., Haller, S., Kankiewicz, J., & Kumar, D. (1996). A dictionary of SNePS case frames. [http://www.cse.buffalo.edu/sneps/Manuals/dictionary.pdf]

  • Shapiro, S. C., & the SNePS Research Group (2006). SNePS. Wikipedia [http://www.en.wikipedia.org/wiki/SNePS]

  • Sheckley, R. (1954). Ritual. In R. Sheckley (Ed.), Untouched by human hands (pp. 155–165). New York: Ballantine.

  • Shieber, S. M. (2004). The Turing test. Cambridge, MA: MIT Press.

  • Smith, B. C. (1982). Linguistic and computational semantics. Proc., 20th Annual Meeting, Assoc. Comp. Ling. (pp. 9–15). Morristown, NJ: Assoc. Comp. Ling.

  • Spärck Jones, K. (1967). Dictionary circles. Tech. Memo. TM-3304. Santa Monica, CA: System Development Corp.

  • Srihari, R. K. (1991a). PICTION: A system that uses captions to label human faces in newspaper photographs. Proc., 9th Nat’l. Conf. AI (pp. 80–85). Menlo Park, CA: AAAI Press/MIT Press.

  • Srihari, R. K. (1991b). Extracting visual information from text. Tech. Rep. 91-17. Buffalo: SUNY Buffalo Dept. Comp. Sci.

  • Srihari, R. K. (1993). Intelligent document understanding. Proc., Int’l. Conf. Document analysis and recognition (pp. 664–667).

  • Srihari, R. K. (1994). Use of collateral text in understanding photos in documents. Proc., Conf. Applied imagery and pattern recognition (pp. 186–199).

  • Srihari, R. K., & Rapaport, W. J. (1989). Extracting visual information from text. Proc., 11th Annual Conf. Cog. Sci. Soc. (pp. 364–371). Hillsdale, NJ: Erlbaum.

  • Srihari, R. K., & Rapaport, W. J. (1990). Combining linguistic and pictorial information. In D. Kumar (Ed.), Current trends in SNePS (pp. 5–96). Berlin: Springer-Verlag Lecture Notes in AI 437.

  • Swan, J. (1994). Touching words. In M. Woodmansee & P. Jaszi (Eds.), The construction of authorship (pp. 57–100). Durham, NC: Duke Univ. Press.

  • Tarski, A. (1969). Truth and proof. Scientific American, 220, 63–70, 75–77.

  • Taylor, J. G. (2002). Do virtual actions avoid the Chinese Room? In Preston & Bishop 2002 (pp. 269–293).

  • Terrace, H. S. (1985). In the beginning was the ‘name’. American Psychologist, 40, 1011–1028.

  • Terrace, H. S. (1991). Letter to the editor. New York Review of Books, 3(15), 53.

  • Thomason, R. H. (2003). Dynamic contextual intensional logic. In P. Blackburn et al. (Eds.), CONTEXT 2003 (pp. 328–341). Berlin: Springer-Verlag Lecture Notes in AI 2680.

  • Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.

  • Vauclair, J. (1990). Primate cognition. In Parker & Gibson 1990 (pp. 312–329).

  • Von Glasersfeld, E. (1977). Linguistic communication. In D. M. Rumbaugh (Ed.), Language learning by a Chimpanzee (pp. 55–71). New York: Academic.

  • Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language communication between man and machine. Communications of the Association for Computing Machinery, 9, 36–45.

  • Wilson, F. R. (1998). The hand. New York: Pantheon.

  • Winston, P. H. (1975). Learning structural descriptions from examples. Reprinted in R. J. Brachman, & H. J. Levesque (Eds.), Readings in knowledge representation (pp. 141–168). Los Altos, CA: Morgan Kaufmann (1985).

  • Wittgenstein, L. (1958). Philosophical investigations (3rd ed., trans. by G. E. M. Anscombe). New York: Macmillan.

  • Woods, W. A. (1975). What’s in a link. In D. G. Bobrow & A. Collins (Eds.), Representation and understanding (pp. 35–82). New York: Academic.

  • Zuckermann, L. S. (1991). Letter to the editor. New York Review of Books, 3(15), 53.

Acknowledgments

An ancestor of this essay was originally written around 1992, was discussed in my seminar in Spring 1993 on “Semantics, Computation, and Cognition” at SUNY Buffalo, and first appeared in an unpublished technical report (Rapaport, 1996). I am grateful to Albert Goldfain, Frances L. Johnson, David Pierce, Stuart C. Shapiro, and the other members of the SNePS Research Group for comments.

Author information

Corresponding author

Correspondence to William J. Rapaport.

About this article

Cite this article

Rapaport, W.J. How Helen Keller used syntactic semantics to escape from a Chinese Room. Minds & Machines 16, 381–436 (2006). https://doi.org/10.1007/s11023-007-9054-6
