Symbol Grounding in Computational Systems: A Paradox of Intentions

Abstract

The paper presents a paradoxical feature of computational systems that suggests that computationalism cannot explain symbol grounding. If the mind is a digital computer, as computationalism claims, then it computes either over meaningful symbols or over meaningless symbols. If it computes over meaningful symbols, its functioning presupposes the existence of meaningful symbols in the system, i.e. it implies semantic nativism. If the mind computes over meaningless symbols, no intentional cognitive processes are available prior to symbol grounding; in that case, no symbol grounding could take place, since any grounding presupposes intentional cognitive processes. So, whether computing in the mind is over meaningless or over meaningful symbols, computationalism implies semantic nativism.

Notes

  1. Copeland (e.g. 1997, 2000, 2002) and others have recently argued that this interpretation of the Church-Turing thesis is mistaken and that there are possible machines, termed “hypercomputers”, that could compute functions not computable by any Turing machine. For the purposes of this paper, we only need a defining characteristic of “computationalism” and propose to use this standard interpretation of the Church-Turing thesis. Whether the mind is a computer in some different sense is a separate question (and I have tried to undermine the arguments for hypercomputing elsewhere).

  2. The use of type/token is from Harnad 1990.

  3. This understanding may be prompted by the metaphorical use of “command” and similar expressions on several levels of computer use. Not only do we say that a computer “obeys commands” of the user, we also say that a programmer writes commands, even that he/she uses algorithms. This is on a much higher technical level, however, than the one relevant here. A command in a conventional “higher” programming language, in order to be executed, must be translated (“compiled” or “interpreted”) into “machine code”, a code that the particular machine with a particular operating system can load into its storage, where it is present in a form that the particular CPU can process. The CPU, again, will have thousands of algorithms already built in (“hard-wired”); it will not need to be programmed down to the lowest level of switches each time.
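    To make these levels concrete, here is a minimal sketch (my illustration, not part of the original note): a single command in a higher programming language, C in this case. A compiler such as gcc translates the one line "c = a + b;" into a handful of machine-code instructions (roughly: load, add, store) that the particular CPU executes directly.

    /* A single higher-level command and the program around it.
       The compiler turns "c = a + b;" into machine code for the CPU. */
    #include <stdio.h>

    int main(void) {
        int a = 2, b = 3, c;
        c = a + b;           /* one command in the higher language ...   */
                             /* ... becomes several machine instructions */
        printf("%d\n", c);   /* prints 5 */
        return 0;
    }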

  4. Accordingly, the solution to symbol grounding cannot be to give basic rules, as Hofstadter does, for example, in his discussion of the matter. For his MU and MIU systems (Hofstadter 1979, Chaps. I & II, pp. 170, 264), you have to assume that the rules have meaning. If you do not, then you have to postulate that “absolute meaning” comes about somehow by itself, in “strange loops” (Chap. VI and passim).
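    For concreteness, here is a minimal sketch (my own illustration, not Hofstadter’s code) of two MIU rules applied as bare string manipulation; nothing in the rules, taken by themselves, involves meaning.

    /* Two of Hofstadter's MIU rules as pure symbol manipulation:
       the code "knows" nothing about what M, I and U mean. */
    #include <stdio.h>
    #include <string.h>

    /* Rule I: if the string ends in 'I', append 'U'. */
    static void rule1(char *s, size_t cap) {
        size_t n = strlen(s);
        if (n > 0 && n + 1 < cap && s[n - 1] == 'I') {
            s[n] = 'U';
            s[n + 1] = '\0';
        }
    }

    /* Rule II: "Mx" may become "Mxx" (double the part after the M). */
    static void rule2(char *s, size_t cap) {
        size_t n = strlen(s);
        if (n > 1 && s[0] == 'M' && 2 * n - 1 < cap) {
            memcpy(s + n, s + 1, n - 1);
            s[2 * n - 1] = '\0';
        }
    }

    int main(void) {
        char s[64] = "MI";
        rule2(s, sizeof s);   /* MI  -> MII  */
        rule1(s, sizeof s);   /* MII -> MIIU */
        printf("%s\n", s);    /* prints MIIU */
        return 0;
    }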

  5. What is relevant here is not so much semantic externalism (which has led to externalism about mental states) but Putnam’s critique of his own earlier causal theories of reference. This critique shows that a successful story of the causal relation between my tokens of “gold” and gold has to involve my desire to refer to that particular metal with that particular word. Putnam has tried to show this in his model-theoretic argument (1981, p. 34 etc.) and in the point that we need to single out what we mean by “cause”, given that any event has several causes—whereas we need the one “explanatory” cause (Putnam 1982, 1985). This is supported by Wittgensteinian arguments to the effect that deixis is necessarily ambiguous (sometimes called the “disjunction problem”). When Kripke pointed at the cat (and Quine’s native pointed at the rabbit), were they pointing at a cat, a feline, an animal, a flea, a colour, or a symbol? When Putnam pointed at water, how much H2O did we need in the sample for reference to be successful?

  6. Fodor’s recent battle against behaviorist accounts of concept possession backfires on his Cartesian theory when he insists on the problem that knowing how to apply “trilateral” is necessarily also knowing how to apply “triangular”, even in counterfactual cases (Fodor 2004, p. 39), since whatever thing typically causes an instance of “triangular” also causes an instance of “trilateral”. This is worse than Quine’s undetached rabbit parts and, of course, than the rabbit fly as referent for “gavagai”.

  7. In their 2005 exchange alone, Fodor and Pinker use the following characterizations of what it is for the mind to be computational, most of which are obviously either too narrow or too broad:

    1) Literally being a Turing machine, with tape and all (attributed to Fodor by Pinker 2005, p. 6). This is falsely attributed, and it fails to mention that the relevant notion is that of the “universal” Turing machine. (For what the bare “tape and all” architecture amounts to, see the sketch after this list.)

    2) “Cognitive architecture is Classical Turing architecture” (Pinker 2005, p. 6). If “architecture” is taken sufficiently abstractly, this is different from 1). But what is that “architecture”? Perhaps being able to “compute any partial recursive function, any grammar composed of rewrite rules, and, it is commonly thought, anything that can be computed by any other physically realizable machine that works on discrete symbols and that arrives at an answer in a finite number of steps” (Pinker 2005, p. 6, on Turing). But this is a description of abilities, not of structure.

    3) Having “the architecture of a Turing machine or some other serial, discrete, local processor” (Pinker 2005, p. 22—attributed to Fodor). A false attribution, since Fodor (2000) does not mention the possibility of other processors. It suggests that “architecture” means physical setup (tape and reader) after all—see the problems in 2).

    4) Being ‘Turing-equivalent’, in the sense of ‘input–output equivalent’ (Fodor 2000, pp. 30, 33, 105n3). Surely too weak. Any information processing system is input–output equivalent to more than one Turing machine.

    5) Being ‘defined on syntactically structured mental representations that are much like sentences’ (Fodor 2000, p. 4). “Defined on” and “much like sentences”? A definition of the language of thought? Not of computation, surely.

    6) Being supervenient “on some syntactic fact or other”—“minimal CTM” (Fodor 2000, p. 29). Too minimal, as Fodor himself agrees.

    7) Being “causally sensitive to, and only to, the syntax of the mental representations they are defined over” [not to meaning] AND being “sensitive only to the local syntactic properties of mental representations” (Fodor’s upshot in 2005, p. 26)—delete “mental” above and note that none of this makes for a computational process.

    8) “In this conception, a computational system is one in which knowledge and goals are represented as patterns in bits of matter (‘representations’). The system is designed in such a way that one representation causes another to come into existence; and these changes mirror the laws of some normatively valid system like logic, statistics, or laws of cause and effect in the world.” (Pinker 2005, p. 2). On this view, any systematic representational process is computational.

    9) “… human cognition is like some kind of computer, presumably one that engages in parallel, analog computation as well as the discrete serial variety” (Pinker 2005, p. 34, on Pinker)—note the “like”, “some kind” and “presumably”, plus the circularity of using “computer”!
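    To fix what the literal notion in 1) amounts to, here is a minimal sketch (my own illustration, in C) of a Turing machine with “tape and all”: a finite control, a tape, a head and a transition table. It is of course not the universal machine, only the bare architecture that the labels above gesture at.

    /* A literal Turing machine: finite control, tape, head and a
       transition table.  This one flips the bits of a binary string
       and halts on the blank symbol '_'. */
    #include <stdio.h>

    enum state { SCAN, HALT };

    struct rule {                 /* one line of the transition table */
        enum state in;  char read;
        enum state out; char write; int move;   /* +1 = move right */
    };

    static const struct rule table[] = {
        { SCAN, '0', SCAN, '1', +1 },
        { SCAN, '1', SCAN, '0', +1 },
        { SCAN, '_', HALT, '_',  0 },
    };

    int main(void) {
        char tape[] = "0110_";        /* finite portion of the tape   */
        int head = 0;
        enum state st = SCAN;

        while (st != HALT) {          /* repeatedly consult the table */
            for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
                if (table[i].in == st && table[i].read == tape[head]) {
                    tape[head] = table[i].write;
                    head += table[i].move;
                    st = table[i].out;
                    break;
                }
            }
        }
        printf("%s\n", tape);         /* prints 1001_ */
        return 0;
    }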

  8. There are at least two notions of algorithm possible here, depending on whether the step-by-step process is one of symbol manipulation or not (e.g. Harel 2000 introduces the notion of algorithm via a recipe for making chocolate mousse).

References

  • Chalmers, D. J. (1993). A computational foundation for the study of cognition. Online at http://consc.net/papers/computation.html.

  • Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. Oxford: Oxford University Press.

  • Churchland, P. (2005). Functionalism at forty: A critical retrospective. Journal of Philosophy, 102(1), 33–50.

  • Copeland, B. J. (1997). The broad conception of computation. American Behavioral Scientist, 40, 690–716.

  • Copeland, B. J. (2000). Narrow versus wide mechanism, including a re-examination of Turing’s views on the mind–machine issue. Journal of Philosophy, 97(1), 5–32.

  • Copeland, B. J. (2002). Hypercomputation. Minds and Machines, 12, 461–502.

  • Davis, M. (2000). The universal computer: The road from Leibniz to Turing. New York: W. W. Norton.

  • Fodor, J. (1981). The mind-body problem. Scientific American 244. Reprinted in J. Heil (Ed.), Philosophy of mind: A guide and anthology (pp. 168–182). Oxford: Oxford University Press 2004.

  • Fodor, J. (1994a). The elm and the expert: Mentalese and Its semantics. Cambridge, Mass: MIT Press.

  • Fodor, J. (1994b). Fodor, Jerry A. In S. Guttenplan (Ed.), A companion to the philosophy of mind (pp. 292–300). Oxford: Blackwell.

  • Fodor, J. (1998). Concepts: Where cognitive science went wrong. Oxford: Oxford University Press.

  • Fodor, J. (2000). The mind doesn’t work that way: The scope and limits of computational psychology. Cambridge, Mass: MIT Press.

  • Fodor, J. A. (2003). More peanuts. The London Review of Books, 25, 09.10.2003.

  • Fodor, J. (2004). Having concepts: A brief refutation of the twentieth century, with “Reply to Commentators”. Mind and Language, 19, 29–47, 99–112.

  • Fodor, J. (2005). Reply to Steven Pinker ‘so how does the mind work?’. Mind & Language, 20(1), 25–32.

  • Gärdenfors, P. (2000). Conceptual spaces: The geometry of thought. Cambridge, Mass: MIT Press.

  • Harel, D. (2000). Computers Ltd.: What they really can’t do. Oxford: Oxford University Press.

  • Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335–346.

  • Haugeland, J. (1985). Artificial intelligence: The very idea. Cambridge, Mass: MIT-Press.

  • Haugeland, J. (2002). Syntax, semantics, physics. In Preston & Bishop (pp. 379–392).

  • Hauser, L. (2002). Nixin’ goes to China. In Preston & Bishop (pp. 123–143).

  • Hofstadter, D. R. (1979). Gödel, Escher, Bach: An eternal golden Braid. New York: Basic Books.

  • Kahneman, D., Treisman, A., & Gibbs, B. J. (1992). The reviewing of object files: Object-specific integration of information. Cognitive Psychology, 24, 174–219.

  • Kim, J. (1996). Philosophy of mind. Boulder, Col: Westview Press.

  • Lycan, W. G. (2003). Philosophy of mind. In N. Bunnin & E. P. Tsui-James (Eds.), The Blackwell companion to philosophy (2nd ed., pp. 173–202). Oxford: Blackwell.

  • Müller, V. C. (2004). There must be encapsulated nonconceptual content in vision. In A. Raftopoulos (Ed.), Cognitive penetrability of perception: Attention, action, strategies and bottom-up constraints (pp. 181–194). Huntington, NY: Nova Science.

  • Müller, V. C. (2007). Is there a future for AI without representation? Minds and Machines, 17, 101–115.

  • Müller, V. C. (2008). Representation in digital systems. In A. Briggle, K. Waelbers & P. Brey (Eds.), Current issues in computing and philosophy (pp. 116–121). Amsterdam: IOS Press.

  • Piccinini, G. (2007). Computational modeling vs. computational explanation: Is everything a Turing machine and does it matter to the philosophy of mind? The Australasian Journal of Philosophy, 85, 93–116.

  • Piccinini, G. (2008). Computation without representation. Philosophical Studies, 134, 205–241.

  • Pinker, S. (2005). So how does the mind work? and A reply to Jerry Fodor on how the mind works. Mind & Language, 20(1), 1–24, 33–38.

  • Preston, J. (2002). Introduction. In Preston & Bishop (pp. 1–50).

  • Preston, J., & Bishop, M. (Eds.). (2002). Views into the Chinese room: New essays on Searle and artificial intelligence. Oxford: Oxford University Press.

  • Putnam, H. (1981). Reason, truth and history. Cambridge: Cambridge University Press.

  • Putnam, H. (1982). Why there isn’t a ready-made world. In Realism and reason: Philosophical papers (Vol. 3, pp. 205–228). Cambridge: Cambridge University Press.

  • Putnam, H. (1985). Reflexive reflections. In Words and life (pp. 416–427). Cambridge, Mass: Harvard University Press 1994.

  • Raftopoulos, A. (2006). Defending realism on the proper ground. Philosophical Psychology, 19(1), 47–77.

  • Raftopoulos, A., & Müller, V. C. (2006a). The phenomenal content of experience. Mind and Language, 21(2), 187–219.

  • Raftopoulos, A., & Müller, V. C. (2006b). Nonconceptual demonstrative reference. Philosophy and Phenomenological Research, 72, 251–285.

  • Rey, G. (2002). Searle’s misunderstandings of functionalism and strong AI. In Preston & Bishop (pp. 201–225).

  • Schneider, U., & Werner, D. (2001). Taschenbuch der Informatik (4th ed.). Leipzig: Fachbuchverlag Leipzig.

  • Searle, J. (1980). Minds, brains and programs. Behavioral and Brain Sciences, 3, 417–457.

  • Searle, J. (2002). Consciousness and language. Cambridge: Cambridge University Press.

  • Smolensky, P. (1988). Computational models of mind. In S. Guttenplan (Ed.), A companion to the philosophy of mind (pp. 176–185). Oxford: Blackwell.

  • Smolensky, P. (1999). On the proper treatment of connectionism. Behavioral and Brain Sciences, 11, 1–23.

  • Steels, L. (2008). The symbol grounding problem has been solved, so what's next? In M. de Vega, A. Glenberg & A. Graesser (Eds.), Symbols and embodiment: Debates on meaning and cognition (pp. 223–244). Oxford: Oxford University Press.

  • Taddeo, M., & Floridi, L. (2005). Solving the symbol grounding problem: A critical review of fifteen years of research. Journal of Experimental and Theoretical Artificial Intelligence, 17, 419–445.

  • Van Gelder, T. (1995). What might cognition be if not computation? The Journal of Philosophy, 91(7), 345–381.

  • Wakefield, J. C. (2003). The Chinese room argument reconsidered: Essentialism, indeterminacy, and strong AI. Minds and Machines, 13, 285–319.


Acknowledgments

My thanks to the people with whom I have discussed this paper, especially to Thanos Raftopoulos, Kostas Pagondiotis and the attendants of the “Philosophy on the Hill” colloquium. I am very grateful to two anonymous reviewers for detailed written comments.

Author information

Correspondence to Vincent C. Müller.

Cite this article

Müller, V.C. Symbol Grounding in Computational Systems: A Paradox of Intentions. Minds & Machines 19, 529–541 (2009). https://doi.org/10.1007/s11023-009-9175-1
