There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the symbol grounding problem: How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: iconic representations, which are analogs of the proximal sensory projections of distal objects and events, and categorical representations, which are learned and innate feature-detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their categorical representations. Higher-order symbolic representations, grounded in these elementary symbols, consist of symbol strings describing category membership relations. Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. Such a hybrid model would not have an autonomous symbolic module, however; the symbolic functions would emerge as an intrinsically dedicated symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations.
Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded.
Explaining the mind by building machines with minds runs into the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what is "everything" a body with a mind can do? Turing's original "pen-pal" version (the TT) only tested linguistic capacity, but Searle has shown that a mindless symbol-manipulator could pass the TT undetected. The Total Turing Test (TTT) calls for all of our linguistic and robotic capacities; immune to Searle's argument, it suggests how to ground a symbol manipulating system in the capacity to pick out the objects its symbols refer to. No Turing Test, however, can guarantee that a body has a mind. Worse, nothing in the explanation of its successful performance requires a model to have a mind at all. Minds are hence very different from the unobservables of physics (e.g., superstrings); and Turing Testing, though essential for machine-modeling the mind, can really only yield an explanation of the body.
Human cognition is not an island unto itself. As a species, we are not Leibnizian Monads independently engaging in clear, Cartesian thinking. Our minds interact. That's surely why our species has language. And that interactivity probably constrains both what and how we think.
Connectionism and computationalism are currently vying for hegemony in cognitive modeling. At first glance the opposition seems incoherent, because connectionism is itself computational, but the form of computationalism that has been the prime candidate for encoding the "language of thought" has been symbolic computationalism (Dietrich 1990; Fodor 1975; Harnad 1990c; Newell 1980; Pylyshyn 1984), whereas connectionism is nonsymbolic (Fodor & Pylyshyn 1988) or, as some have hopefully dubbed it, "subsymbolic" (Smolensky 1988). This paper will examine what is and is not a symbol system. A hybrid nonsymbolic/symbolic system will be sketched in which the meanings of the symbols are grounded bottom-up in the system's capacity to discriminate and identify the objects they refer to. Neural nets are one possible mechanism for learning the invariants in the analog sensory projection on which successful categorization is based. "Categorical perception" (Harnad 1987a), in which similarity space is "warped" in the service of categorization, turns out to be exhibited by both people and nets, and may mediate the constraints exerted by the analog world of objects on the formal world of symbols.
Libet, Gleason, Wright, & Pearl (1983) asked participants to report the moment at which they freely decided to initiate a pre-specified movement, based on the position of a red marker on a clock. Using event-related potentials (ERPs), Libet found that the subjective feeling of deciding to perform a voluntary action came after the onset of the motor “readiness potential” (RP). This counterintuitive conclusion poses a challenge for the philosophical notion of free will. Faced with these findings, Libet (1985) proposed that conscious volitional control might operate as a selector and a controller of volitional processes rather than as an initiator of them.
There are many entry points into the problem of categorization. Two particularly important ones are the so-called top-down and bottom-up approaches. Top-down approaches such as artificial intelligence begin with the symbolic names and descriptions for some categories already given; computer programs are written to manipulate the symbols. Cognitive modeling involves the further assumption that such symbol-interactions resemble the way our brains do categorization. An explicit expectation of the top-down approach is that it will eventually join with the bottom-up approach, which tries to model how the hardware of the brain works: sensory systems, motor systems and neural activity in general. The assumption is that the symbolic cognitive functions will be implemented in brain function and linked to the sense organs and the organs of movement in roughly the way a program is implemented in a computer, with its links to peripheral devices such as transducers and effectors.
Cognition is thinking; it feels like something to think, and only those who can feel can think. There are also things that thinkers can do. We know neither how thinkers can think nor how they are able to do what they can do. We are waiting for cognitive science to discover how. Cognitive science does this by testing hypotheses about what processes can generate what doing (“know-how”). This is called the Turing Test. It cannot test whether a process can generate feeling, hence thinking -- only whether it can generate doing. The processes that generate thinking and know-how are “distributed” within the heads of thinkers, but not across thinkers’ heads. Hence there is no such thing as distributed cognition, only collaborative cognition. Email and the Web have spawned a new form of collaborative cognition that draws upon individual brains’ real-time interactive potential in ways that were not possible in oral, written or print interactions.
How many words—and which ones—are sufficient to define all other words? When dictionaries are analyzed as directed graphs with links from defining words to defined words, they reveal a latent structure. Recursively removing all words that are reachable by definition but that do not define any further words reduces the dictionary to a Kernel of about 10% of its size. This is still not the smallest number of words that can define all the rest. About 75% of the Kernel turns out to be its Core, a “Strongly Connected Subset” of words with a definitional path to and from any pair of its words and no word's definition depending on a word outside the set. But the Core cannot define all the rest of the dictionary. The 25% of the Kernel surrounding the Core consists of small strongly connected subsets of words: the Satellites. The size of the smallest set of words that can define all the rest—the graph's “minimum feedback vertex set” or MinSet—is about 1% of the dictionary, about 15% of the Kernel, and part-Core/part-Satellite. But every dictionary has a huge number of MinSets. The Core words are learned earlier, more frequent, and less concrete than the Satellites, which are in turn learned earlier, more frequent, but more concrete than the rest of the Dictionary. In principle, only one MinSet's words would need to be grounded through the sensorimotor capacity to recognize and categorize their referents. In a dual-code sensorimotor/symbolic model of the mental lexicon, the symbolic code could do all the rest through recombinatory definition.
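The Kernel and Core reductions described above can be sketched in code. The following toy example is invented for illustration (the words, definitions and resulting Kernel are made up; real analyses run on full dictionary graphs, and finding a MinSet is the NP-hard minimum feedback vertex set problem, which this brute-force sketch does not attempt):

```python
from collections import defaultdict

# Invented toy dictionary: word -> the words used in its definition
defs = {
    "apple": ["red", "fruit"], "fruit": ["food"], "red": ["color"],
    "color": ["light"], "food": ["good"], "light": ["good"],
    "good": ["not", "bad"], "bad": ["not", "good"], "not": ["good", "bad"],
}

def kernel(defs):
    """Recursively remove words that are defined but define nothing further."""
    remaining = set(defs)
    while True:
        used = {d for w in remaining for d in defs[w]}  # words still doing defining
        leaves = remaining - used                       # words defining nothing left
        if not leaves:
            return remaining
        remaining -= leaves

def reachable(adj, src):
    """All words reachable from src along definitional edges."""
    seen, stack = {src}, [src]
    while stack:
        for v in adj[stack.pop()] - seen:
            seen.add(v); stack.append(v)
    return seen

# Definitional edges run from each defining word to the word it helps define
adj = defaultdict(set)
for w, definition in defs.items():
    for d in definition:
        adj[d].add(w)

kern = kernel(defs)
# Core: kernel words with a definitional path to and from every kernel word
core = {w for w in kern
        if all(w in reachable(adj, v) and v in reachable(adj, w) for v in kern)}
print(kern, core)
```

On this toy graph the Kernel and Core coincide (the three mutually defining words "good", "bad", "not"); in real dictionaries, as the abstract reports, the Core is only about 75% of the Kernel, with the Satellites making up the rest.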
Peer Review and Copyright each have a double role: Formal refereeing protects (R1) the author from publishing and (R2) the reader from reading papers that are not of sufficient quality. Copyright protects the author from (C1) theft of text and (C2) theft of authorship. It has been suggested that in the electronic medium we can dispense with peer review, "publish" everything, and let browsing and commentary do the quality control. It has also been suggested that special safeguards and laws may be needed to enforce copyright on the Net. I will argue, based on 20 years of editing Behavioral and Brain Sciences, a refereed (paper) journal of peer commentary, 8 years of editing Psycoloquy, a refereed electronic journal of peer commentary, and 1 year of implementing CogPrints, an electronic archive of unrefereed preprints and refereed reprints in the cognitive sciences modeled on the Los Alamos Physics Eprint Archive, that (i) peer commentary is a supplement to, not a substitute for, peer review, (ii) the authors of refereed papers, who get and seek no royalties from the sale of their texts, only want protection from theft of authorship on the Net, not from theft of text, which is a victimless crime, and hence (iii) the trade model (subscription, site license or pay-per-view) should be replaced by author page-charges to cover the much reduced cost of implementing peer review, editing and archiving on the Net, in exchange for making the learned serial corpus available for free for all forever.
Turing's celebrated 1950 paper proposes a very general methodological criterion for modelling mental function: total functional equivalence and indistinguishability. His criterion gives rise to a hierarchy of Turing Tests, from subtotal ("toy") fragments of our functions (t1), to total symbolic (pen-pal) function (T2 -- the standard Turing Test), to total external sensorimotor (robotic) function (T3), to total internal microfunction (T4), to total indistinguishability in every empirically discernible respect (T5). This is a "reverse-engineering" hierarchy of (decreasing) empirical underdetermination of the theory by the data. Level t1 is clearly too underdetermined, T2 is vulnerable to a counterexample (Searle's Chinese Room Argument), and T4 and T5 are arbitrarily overdetermined. Hence T3 is the appropriate target level for cognitive science. When it is reached, however, there will still remain more unanswerable questions than when Physics reaches its Grand Unified Theory of Everything (GUTE), because of the mind/body problem and the other-minds problem, both of which are inherent in this empirical domain, even though Turing hardly mentions them.
When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called "virtual" systems. If such a virtual system is interpretable as if it had a mind, is such a "virtual mind" real? This is the question addressed in this "virtual" symposium, originally conducted electronically among four cognitive scientists: Donald Perlis, a computer scientist, argues that according to the computationalist thesis, virtual minds are real and hence Searle's Chinese Room Argument fails, because if Searle memorized and executed a program that could pass the Turing Test in Chinese he would have a second, virtual, Chinese-understanding mind of which he was unaware (as in multiple personality). Stevan Harnad, a psychologist, argues that Searle's Argument is valid, virtual minds are just hermeneutic overinterpretations, and symbols must be grounded in the real world of objects, not just the virtual world of interpretations. Computer scientist Patrick Hayes argues that Searle's Argument fails, but because Searle does not really implement the program: A real implementation must not be homuncular but mindless and mechanical, like a computer. Only then can it give rise to a mind at the virtual level. Philosopher Ned Block suggests that there is no reason a mindful implementation would not be a real one.
A "machine" is any causal physical system, hence we are machines, hence machines can be conscious. The question is: which kinds of machines can be conscious? Chances are that robots that can pass the Turing Test -- completely indistinguishable from us in their behavioral capacities -- can be conscious (i.e. feel), but we can never be sure (because of the "other-minds" problem). And we can never know HOW they have minds, because of the "mind/body" problem. We can only know how they pass the Turing Test, but not how, why or whether that makes them feel.
The usual way to try to ground knowing according to contemporary theory of knowledge is: We know something if (1) it’s true, (2) we believe it, and (3) we believe it for the “right” reasons. Floridi proposes a better way. His grounding is based partly on probability theory, and partly on a question/answer network of verbal and behavioural interactions evolving in time. This is rather like modeling the data-exchange between a data-seeker who needs to know which button to press on a food-dispenser and a data-knower who already knows the correct number. The success criterion, hence the grounding, is whether the seeker’s probability of lunch is indeed increasing (hence uncertainty is decreasing) as a result of the interaction. Floridi also suggests that his philosophy of information casts some light on the problem of consciousness. I’m not so sure.
2. Invariant Sensorimotor Features ("Affordances"). To say this is not to declare oneself a Gibsonian, whatever that means. It is merely to point out that what a sensorimotor system can do is determined by what can be extracted from its motor interactions with its sensory input. If you lack sonar sensors, then your sensorimotor system cannot do what a bat's can do, at least not without the help of instruments. Light stimulation affords color vision for those of us with the right sensory apparatus, but not for those of us who are color-blind. The geometric fact that, when we move, the "shadows" cast on our retina by nearby objects move faster than the shadows of further objects means that, for those of us with normal vision, our visual input affords depth perception. From more complicated facts of projective and solid geometry it follows that a 3-dimensional shape, such as, say, a boomerang, can be recognized as being the same shape, and the same size, even though the size and shape of its shadow on our retinas changes as we move in relation to it or it moves in relation to us. Its shape is said to be invariant under these sensorimotor transformations, and our visual systems can detect and extract that invariance, and translate it into a visual constancy. So we keep seeing a boomerang of the same shape and size even though the shape and size of its retinal shadows keep changing.
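The size-constancy claim above rests on simple inverse scaling under perspective projection, which can be checked numerically. In the pinhole-camera sketch below the focal length and object size are invented for illustration; the point is only that the retinal "shadow" shrinks with distance while image size × distance / focal length recovers the invariant object size:

```python
# Pinhole-camera model of the retinal "shadow" (illustrative numbers):
# image_size = focal * object_size / distance, so the projection shrinks
# as the object recedes, but image_size * distance / focal is invariant.
focal = 0.017        # ~17 mm, roughly an eye's focal length (assumed)
object_size = 0.40   # a 40 cm boomerang (invented example)

for distance in (1.0, 2.0, 8.0):
    image = focal * object_size / distance
    recovered = image * distance / focal
    print(f"d={distance:3.1f} m  image={image * 1000:5.2f} mm  size={recovered:.2f} m")
```

The printed "image" column changes with every distance; the recovered "size" column is the constancy the visual system is extracting.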
Searle's celebrated Chinese Room Argument has shaken the foundations of Artificial Intelligence. Many refutations have been attempted, but none seem convincing. This paper is an attempt to sort out explicitly the assumptions and the logical, methodological and empirical points of disagreement. Searle is shown to have underestimated some features of computer modeling, but the heart of the issue turns out to be an empirical question about the scope and limits of the purely symbolic (computational) model of the mind. Nonsymbolic modeling turns out to be immune to the Chinese Room Argument. The issues discussed include the Total Turing Test, modularity, neural modeling, robotics, causality and the symbol-grounding problem.
Do scientists agree? It is not only unrealistic to suppose that they do, but probably just as unrealistic to think that they ought to. Agreement is for what is already established scientific history. The current and vital ongoing aspect of science consists of an active and often heated interaction of data, ideas and minds, in a process one might call "creative disagreement." The "scientific method" is largely derived from a reconstruction based on selective hindsight. What actually goes on has much less the flavor of a systematic method than of trial and error, conjecture, chance, competition and even dialectic.
The causal structure of cognition can be simulated but not implemented computationally, just as the causal structure of a furnace can be simulated but not implemented computationally. Heating is a dynamical property, not a computational one. A computational simulation of a furnace cannot heat a real house (only a simulated house). It lacks the essential causal property of a furnace. This is obvious with computational furnaces. The only thing that allows us even to imagine that it is otherwise in the case of computational cognition is the fact that cognizing, unlike heating, is invisible (to everyone except the cognizer). Chalmers’s “Dancing Qualia” Argument is hence invalid: Even if there could be a computational model of cognition that was behaviorally indistinguishable from a real, feeling cognizer, it would still be true that if, like heat, feeling is a dynamical property of the brain, a flip-flop from the presence to the absence of feeling would be undetectable anywhere along Chalmers’s hypothetical component-swapping continuum from a human cognizer to a computational cognizer -- undetectable to everyone except the cognizer. But that would only be because the cognizer was locked into being incapable of doing anything to settle the matter simply because of Chalmers’s premise of input/output indistinguishability. That is not a demonstration that cognition is computation; it is just the demonstration that you get out of a premise what you put into it. But even if the causal topography of feeling, hence of cognizing, is dynamic rather than just computational, the problem of explaining the causal role played by feeling itself (how and why we feel) in the generation of our behavioral capacity (how and why we can do what we can do) will remain a “hard” (and perhaps insoluble) problem.
Some of the papers in this special issue distribute cognition between what is going on inside individual cognizers' heads and their outside worlds; others distribute cognition among different individual cognizers. Turing's criterion for cognition was individual, autonomous input/output capacity. It is not clear that distributed cognition could pass the Turing Test.
SUMMARY: Universities (the universal research-providers) as well as research funders (public and private) are beginning to make it part of their mandates to ensure not only that researchers conduct and publish peer-reviewed research (“publish or perish”), but that they also make it available online, free for all. This is called Open Access (OA), and it maximizes the uptake, impact and progress of research by making it accessible to all potential users worldwide, not just those whose universities can afford to subscribe to the journal in which it is published. Researchers can provide OA to their published journal articles by self-archiving them in their own university’s online repository. Students and junior faculty -- the next generation of research providers and consumers -- are in a position to help accelerate the adoption of OA self-archiving mandates by their universities, ushering in the era of universal OA.
Harnad's main argument can be roughly summarised as follows: due to Searle's Chinese Room argument, symbol systems by themselves are insufficient to exhibit cognition, because the symbols are not grounded in the real world and hence have no meaning. However, a symbol system that is connected to the real world through transducers receiving sensory data, with neural nets translating these data into sensory categories, would not be subject to the Chinese Room argument. Harnad's article is not only the starting point for the present debate, but is also a contribution to a long-lasting discussion about such questions as: Can a computer think? If yes, would this be solely by virtue of its program? Is the Turing Test appropriate for deciding whether a computer thinks?
Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind).
The Mind/Body Problem is about causation not correlation. And its solution will require a mechanism in which the mental component somehow manages to play a causal role of its own, rather than just supervening superfluously on other, nonmental components that look, for all the world, as if they can do the full causal job perfectly well without it. Correlations confirm that M does indeed "supervene" on B, but causality is needed to show how/why M is not supererogatory; and that's the hard part.
A robot that is functionally indistinguishable from us may or may not be a mindless Zombie. There will never be any way to know, yet its functional principles will be as close as we can ever get to explaining the mind.
Both Artificial Life and Artificial Mind are branches of what Dennett has called "reverse engineering": Ordinary engineering attempts to build systems to meet certain functional specifications, reverse bioengineering attempts to understand how systems that have already been built by the Blind Watchmaker work. Computational modelling (virtual life) can capture the formal principles of life, perhaps predict and explain it completely, but it can no more be alive than a virtual forest fire can be hot. In itself, a computational model is just an ungrounded symbol system; no matter how closely it matches the properties of what is being modelled, it matches them only formally, with the mediation of an interpretation. Synthetic life is not open to this objection, but it is still an open question how close a functional equivalence is needed in order to capture life. Close enough to fool the Blind Watchmaker is probably close enough, but would that require molecular indistinguishability, and if so, do we really need to go that far?
When in 1979 Zenon Pylyshyn, associate editor of Behavioral and Brain Sciences (BBS, a peer commentary journal which I edit) informed me that he had secured a paper by John Searle with the unprepossessing title of [XXXX], I cannot say that I was especially impressed; nor did a quick reading of the brief manuscript -- which seemed to be yet another tedious "Granny Objection" about why/how we are not computers -- do anything to upgrade that impression.
After people learn to sort objects into categories they see them differently. Members of the same category look more alike and members of different categories look more different. This phenomenon of within-category compression and between-category separation in similarity space is called categorical perception (CP). It is exhibited by human subjects, animals and neural net models. In backpropagation nets trained first to auto-associate 12 stimuli varying along a one-dimensional continuum and then to sort them into 3 categories, CP arises as a natural side-effect because of four factors: (1) Maximal interstimulus separation in hidden-unit space during autoassociation learning, (2) movement toward linear separability during categorization learning, (3) inverse-distance repulsive force exerted by the between-category boundary, and (4) the modulating effects of input iconicity, especially in interpolating CP to untrained regions of the continuum. Once similarity space has been "warped" in this way, the compressed and separated "chunks" have symbolic labels which could then be combined into symbol strings that constitute propositions about objects. The meanings of such symbolic representations would be "grounded" in the system's capacity to pick out from their sensory projections the object categories that the propositions were about.
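The compression/separation effect itself is easy to reproduce in a stripped-down version of such a net. The sketch below is only an illustration: the layer sizes, learning rate and epoch count are arbitrary choices, and it omits the auto-association pretraining stage the abstract describes. It trains a one-hidden-layer backprop net to sort 12 points on a one-dimensional continuum into 3 categories, then compares hidden-unit distances for equally spaced neighboring inputs within a category versus across a category boundary:

```python
import numpy as np

rng = np.random.default_rng(0)

# 12 stimuli on a one-dimensional continuum, in 3 categories of 4
x = np.linspace(0.0, 1.0, 12).reshape(-1, 1)
labels = np.repeat([0, 1, 2], 4)
y = np.eye(3)[labels]

# One hidden layer trained by plain backprop (sizes/rate/epochs arbitrary)
W1 = rng.normal(0, 1, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 3)); b2 = np.zeros(3)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    z = h @ W2 + b2
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)

lr = 1.0
for _ in range(5000):
    h, p = forward(x)
    dz = (p - y) / len(x)            # softmax + cross-entropy gradient
    dh = (dz @ W2.T) * (1 - h ** 2)  # back through tanh
    W2 -= lr * (h.T @ dz); b2 -= lr * dz.sum(0)
    W1 -= lr * (x.T @ dh); b1 -= lr * dh.sum(0)

h, p = forward(x)
acc = float((p.argmax(1) == labels).mean())

# "Warping": hidden-space distance of equally spaced neighboring inputs,
# for pairs inside a category vs. the two pairs straddling a boundary
dist = lambda i, j: float(np.linalg.norm(h[i] - h[j]))
within = np.mean([dist(i, i + 1) for i in (0, 1, 2, 4, 5, 6, 8, 9, 10)])
between = np.mean([dist(3, 4), dist(7, 8)])
print(f"accuracy={acc:.2f}  within={within:.3f}  between={between:.3f}")
```

With these settings the boundary-straddling pairs typically end up farther apart in hidden-unit space than the within-category pairs, even though all neighboring inputs are equally spaced on the continuum: the CP signature of compression and separation.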
Computation is interpretable symbol manipulation. Symbols are objects that are manipulated on the basis of rules operating only on their shapes, which are arbitrary in relation to what they can be interpreted as meaning. Even if one accepts the Church/Turing Thesis that computation is unique, universal and very near omnipotent, not everything is a computer, because not everything can be given a systematic interpretation; and certainly everything can't be given every systematic interpretation. But even after computers and computation have been successfully distinguished from other kinds of things, mental states will not just be the implementations of the right symbol systems, because of the symbol grounding problem: The interpretation of a symbol system is not intrinsic to the system; it is projected onto it by the interpreter. This is not true of our thoughts. We must accordingly be more than just computers. My guess is that the meanings of our symbols are grounded in the substrate of our robotic capacity to interact with that real world of objects, events and states of affairs that our symbols are systematically interpretable as being about.
What language allows us to do is to "steal" categories quickly and effortlessly through hearsay instead of having to earn them the hard way, through risky and time-consuming sensorimotor "toil" (trial-and-error learning, guided by corrective feedback from the consequences of miscategorisation). To make such linguistic "theft" possible, however, some, at least, of the denoting symbols of language must first be grounded in categories that have been earned through sensorimotor toil (or else in categories that have already been "prepared" for us through Darwinian theft by the genes of our ancestors); it cannot be linguistic theft all the way down. The symbols that denote categories must be grounded in the capacity to sort, label and interact with the proximal sensorimotor projections of their distal category-members in a way that coheres systematically with their semantic interpretations, both for individual symbols, and for symbols strung together to express truth-value-bearing propositions.
The mind/body problem is the feeling/function problem: How and why do feeling systems feel? The problem is not just "hard" but insoluble. Fortunately, the "easy" problems of cognitive science are not insoluble. Five books are reviewed in this context.
Some of the features of animal and human categorical perception (CP) for color, pitch and speech are exhibited by neural net simulations of CP with one-dimensional inputs: When a backprop net is trained to discriminate and then categorize a set of stimuli, the second task is accomplished by "warping" the similarity space (compressing within-category distances and expanding between-category distances). This natural side-effect also occurs in humans and animals. Such CP categories, consisting of named, bounded regions of similarity space, may be the ground level out of which higher-order categories are constructed; nets are one possible candidate for the mechanism that learns the sensorimotor invariants that connect arbitrary names (elementary symbols?) to the nonarbitrary shapes of objects. This paper examines how and why such compression/expansion effects occur in neural nets.
This quote/commented critique of Turing's classical paper suggests that Turing meant -- or should have meant -- the robotic version of the Turing Test (and not just the email version). Moreover, any dynamic system (that we design and understand) can be a candidate, not just a computational one. Turing also dismisses the other-minds problem and the mind/body problem too quickly. They are at the heart of both the problem he is addressing and the solution he is proposing.
Scholars studying the origins and evolution of language are also interested in the general issue of the evolution of cognition. Language is not an isolated capability of the individual, but has intrinsic relationships with many other behavioral, cognitive, and social abilities. By understanding the mechanisms underlying the evolution of linguistic abilities, it is possible to understand the evolution of cognitive abilities. Cognitivism, one of the current approaches in psychology and cognitive science, proposes that symbol systems capture mental phenomena, and attributes cognitive validity to them. Therefore, in the same way that language is considered the prototype of cognitive abilities, a symbol system has become the prototype for studying language and cognitive systems. Symbol systems are advantageous as they are easily studied through computer simulation (a computer program is a symbol system itself), and this is why language is often studied using computational models.
In our century a Frege/Brentano wedge has gradually been driven into the mind/body problem so deeply that it appears to have split it into two: The problem of "qualia" and the problem of "intentionality." Both problems use similar intuition pumps: For qualia, we imagine a robot that is indistinguishable from us in every objective respect, but it lacks subjective experiences; it is mindless. For intentionality, we again imagine a robot that is indistinguishable from us in every objective respect but its "thoughts" lack "aboutness"; they are meaningless. I will try to show that there is a way to re-unify the mind/body problem by grounding the "language of thought" (symbols) in our perceptual categorization capacity. The model is bottom-up and hybrid symbolic/nonsymbolic.
We are accustomed to thinking that a primrose is "concrete" and a prime number is "abstract," that "roundness" is more abstract than "round," and that "property" is more abstract than "roundness." In reality, the relation between "abstract" and "concrete" is more like the (non)relation between "abstract" and "concave," "concrete" being a sensory term (about what something feels like) and "abstract" being a functional term (about what the sensorimotor system is doing with its input in order to produce its output): Feelings and things are correlated, but otherwise incommensurable. Everything that any sensorimotor system such as ourselves manages to categorize successfully is based on abstracting sensorimotor "affordances" (invariant features). The rest is merely a question of what inputs we can and do categorize, and what we must abstract from the particulars of each sensorimotor interaction in order to be able to categorize them correctly. To categorize, in other words, is to abstract. And not to categorize is merely to experience. Borges's Funes the Memorious, with his infinite, infallible rote memory, is a fictional hint at what it would be like not to be able to categorize, not to be able to selectively forget and ignore most of our input by abstracting only its reliably recurrent invariants. But a sensorimotor system like Funes would not really be viable, for if something along those lines did exist, it could not categorize recurrent objects, events or states, hence it could have no language, private or public, and could at most only feel, not function adaptively (hence survive). Luria's "S" in "The Mind of a Mnemonist" is a real-life approximation whose difficulties in conceptualizing were directly proportional to his difficulties in selectively forgetting and ignoring.
Watanabe's "Ugly Duckling Theorem" shows how, if we did not selectively weight some properties more heavily than others, everything would be equally (and infinitely and indifferently) similar to everything else. Miller's "Magical Number Seven Plus or Minus Two" shows that there are (and must be) limitations on our capacity to process and remember information, both in our capacity to discriminate relatively (detect sameness/difference, degree-of-similarity) and in our capacity to discriminate absolutely (identify, categorize, name). The phenomenon of categorical perception shows how selective feature-detection puts a Whorfian "warp" on our feelings of similarity in the service of categorization, compressing within-category similarities and expanding between-category differences by abstracting and selectively filtering inputs through their invariant features, thereby allowing us to sort and name things reliably. Language then also allows us to acquire categories indirectly, through symbolic description...
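The combinatorial point behind Watanabe's theorem can be made concrete in a toy sketch (a hypothetical illustration, not from the paper): if every subset of a finite set of objects counts as a "predicate," and all predicates are weighted equally, then every pair of distinct objects satisfies exactly the same number of shared predicates, so no two things are objectively more similar than any other two.

```python
from itertools import combinations

# Toy illustration of Watanabe's Ugly Duckling Theorem (hypothetical
# example objects): treat every subset of the object set as a "predicate"
# that is true of exactly its members. With all 2**n predicates weighted
# equally, every distinct pair of objects shares exactly 2**(n-2)
# predicates -- a duckling is as "similar" to a teapot as to a swan.

objects = ["duckling", "swan", "raven", "teapot"]
n = len(objects)

# Enumerate all 2**n predicates as frozensets of the objects they are true of.
predicates = [
    frozenset(o for i, o in enumerate(objects) if mask & (1 << i))
    for mask in range(2 ** n)
]

def shared(a, b):
    """Number of predicates true of both a and b."""
    return sum(1 for p in predicates if a in p and b in p)

counts = {pair: shared(*pair) for pair in combinations(objects, 2)}

# All six pairs share the same count: similarity is uninformative
# unless some features are selectively weighted over others.
assert set(counts.values()) == {2 ** (n - 2)}
```

Selective weighting (ignoring most predicates and keeping only the invariant, category-relevant ones) is precisely what breaks this symmetry and makes similarity judgments possible at all.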
"Symbol Grounding" is beginning to mean too many things to too many people. My own construal has always been simple: Cognition cannot be just computation, because computation is just the systematically interpretable manipulation of meaningless symbols, whereas the meanings of my thoughts don't depend on their interpretability or interpretation by someone else. On pain of infinite regress, then, symbol meanings must be grounded in something other than just their interpretability if they are to be candidates for what is going on in our heads. Neural nets may be one way to ground the names of concrete objects and events in the capacity to categorize them (by learning the invariants in their sensorimotor projections). These grounded elementary symbols could then be combined into symbol strings expressing propositions about more abstract categories. Grounding does not equal meaning, however, and does not solve any philosophical problems.
"In an academic generation a little overaddicted to 'politesse,' it may be worth saying that violent destruction is not necessarily worthless and futile. Even though it leaves doubt about the right road for London, it helps if someone rips up, however violently, a..."
Many special problems crop up when evolutionary theory turns, quite naturally, to the question of the adaptive value and causal role of consciousness in human and nonhuman organisms. One problem is that -- unless we are to be dualists, treating it as an independent nonphysical force -- consciousness could not have had an independent adaptive function of its own, over and above whatever behavioral and physiological functions it "supervenes" on, because evolution is completely blind to the difference between a conscious organism and a functionally equivalent (Turing-indistinguishable) nonconscious "Zombie" organism: In other words, the Blind Watchmaker, a functionalist if ever there was one, is no more a mind-reader than we are. Hence Turing-indistinguishability = Darwin-indistinguishability. It by no means follows from this, however, that human behavior is to be explained only by the push-pull dynamics of Zombie determinism, as dictated by calculations of "inclusive fitness" and "evolutionarily stable strategies." We are conscious and, more important, that consciousness is somehow piggy-backing on the vast complex of unobservable internal activity -- call it "cognition" -- that is really responsible for generating all of our behavioral capacities. Hence, except in the palpable presence of the irrational (e.g., our sexual urges), where distal Darwinian factors still have some proximal sway, it makes as much sense to seek a Darwinian rather than a cognitive explanation for most of our current behavior as it would to seek a cosmological rather than an engineering explanation of an automobile's behavior. Let evolutionary theory explain what shaped our cognitive capacity (Steklis & Harnad 1976; Harnad 1996), but let cognitive theory explain our resulting behavior.
A critique of computationalism as merely projecting hermeneutics (i.e., meaning originating from the mind of an external interpreter) onto otherwise intrinsically meaningless symbols: projecting an interpretation onto a symbol system results in its being reflected back in a spuriously self-confirming way.
A robot that is functionally indistinguishable from us may or may not be a mindless Zombie. There will never be any way to know, yet its functional principles will be as close as we can ever get to explaining the mind.
It is important to understand that the Turing Test is not, nor was it intended to be, a trick; how well one can fool someone is not a measure of scientific progress. The TT is an empirical criterion: it sets AI's empirical goal, which is to generate human-scale performance capacity. This goal will have been met when the candidate's performance is totally indistinguishable from a human's. Until then, the TT simply represents what it is that AI must endeavor eventually to accomplish scientifically.
Jerry Fodor argues that Darwin was wrong about "natural selection" because (1) it is only a tautology rather than a scientific law that can support counterfactuals ("If X had happened, Y would have happened") and because (2) only minds can select. Hence Darwin's analogy with "artificial selection" by animal breeders was misleading, and evolutionary explanation is nothing but post-hoc historical narrative. I argue that Darwin was right on all counts. Until Darwin's "tautology," it had been believed that either (a) God had created all organisms as they are, or (b) organisms had always been as they are. Darwin revealed instead that (c) organisms have heritable traits that evolved across time through random variation, with survival and reproduction in (changing) environments determining (mindlessly) which variants were successfully transmitted to the next generation. This not only provided the (true) alternative (c), but also the methodology for investigating which traits had been adaptive, how and why; it also led to the discovery of the genetic mechanism of the encoding, variation and evolution of heritable traits. Fodor also draws erroneous conclusions from the analogy between Darwinian evolution and Skinnerian reinforcement learning. Fodor's skepticism about both evolution and learning may be motivated by an overgeneralization of Chomsky's "poverty of the stimulus argument" -- from the origin of Universal Grammar (UG) to the origin of the "concepts" underlying word meaning, which, Fodor thinks, must be "endogenous," rather than evolved or learned.
A provisional model is presented in which categorical perception (CP) provides our basic or elementary categories. In acquiring a category we learn to label or identify positive and negative instances from a sample of confusable alternatives. Two kinds of internal representation are built up in this learning by "acquaintance": (1) an iconic representation that subserves our similarity judgments and (2) an analog/digital feature-filter that picks out the invariant information allowing us to categorize the instances correctly. This second, categorical representation is associated with the category name. Category names then serve as the atomic symbols for a third representational system, (3) the symbolic representations that underlie language and that make it possible for us to learn by "description." Connectionism is one possible mechanism for learning the sensory invariants underlying categorization and naming. Among the implications of the model are (a) the "cognitive identity of (current) indiscriminables": Categories and their representations can only be provisional and approximate, relative to the alternatives encountered to date, rather than "exact." There is also (b) no such thing as an absolute "feature," only those features that are invariant within a particular context of confusable alternatives. Contrary to prevailing "prototype" views, however, (c) such provisionally invariant features must underlie successful categorization, and must be "sufficient" (at least in the "satisficing" sense) to subserve reliable performance with all-or-none, bounded categories, as in CP. Finally, the model brings out some basic limitations of the "symbol-manipulative" approach to modeling cognition, showing how (d) symbol meanings must be functionally grounded in nonsymbolic, "shape-preserving" representations -- iconic and categorical ones. Otherwise, all symbol interpretations are ungrounded and indeterminate.
This amounts to a principled call for a psychophysical (rather than a neural) "bottom-up" approach to cognition.
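The CP "warp" described in the model -- within-category compression and between-category expansion of similarity -- can be sketched with a toy feature-filter (a hypothetical illustration, not the paper's own model): a steep sigmoid detector along a single invariant input dimension, with the category boundary at its midpoint.

```python
import math

# Toy sketch of the categorical-perception "warp" (hypothetical values,
# not from the paper): the analog input is re-represented by how strongly
# it activates a learned invariant-feature detector. This compresses
# differences within a category and expands differences across the boundary.

def feature_filter(x, boundary=0.5, gain=20.0):
    """Steep sigmoid detector: output is the detector's activation for input x."""
    return 1.0 / (1.0 + math.exp(-gain * (x - boundary)))

# Two stimulus pairs with the SAME raw separation (0.1) on the input dimension:
within_pair = (0.70, 0.80)    # both on the "positive" side of the boundary
between_pair = (0.45, 0.55)   # straddling the boundary

raw_distance = 0.1
within_warped = abs(feature_filter(within_pair[1]) - feature_filter(within_pair[0]))
between_warped = abs(feature_filter(between_pair[1]) - feature_filter(between_pair[0]))

# In the filtered (categorical) representation the within-category pair
# looks more similar, and the between-category pair less similar, than
# their identical raw distances would suggest.
assert within_warped < raw_distance < between_warped
```

The gain and boundary here are arbitrary stand-ins for what a connectionist learner would have to extract from the sample of confusable alternatives; the point is only that a shape-preserving but selectively filtered representation yields the all-or-none, bounded categories that the model requires.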