Stevan Harnad (Université du Québec à Montréal)
133 works found (indexed as Stevan Harnad [133] and Stevan Robert Harnad [1])
  1. The symbol grounding problem. Stevan Harnad - 1990 - Physica D 42:335-346.
    There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the symbol grounding problem: How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their shapes, be grounded (...)
    335 citations
  2. Consciousness: An afterthought. Stevan Harnad - 1982 - Cognition and Brain Theory 5:29-47.
    There are many possible approaches to the mind/brain problem. One of the most prominent, and perhaps the most practical, is to ignore it.
    143 citations
  3. Other bodies, other minds: A machine incarnation of an old philosophical problem. [REVIEW] Stevan Harnad - 1991 - Minds and Machines 1 (1):43-54.
    Explaining the mind by building machines with minds runs into the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what is (...)
    87 citations
  4. Rational Disagreement in Peer Review. [REVIEW] Stevan Harnad - 1985 - Science, Technology and Human Values 10 (3):55-62.
    96 citations
  5. Categorical perception. Stevan Harnad - 2003 - In L. Nadel (ed.), Encyclopedia of Cognitive Science. Nature Publishing Group. pp. 67--4.
    56 citations
  6. Category induction and representation. Stevan Harnad - 1987 - In Categorical Perception. Cambridge University Press.
    A provisional model is presented in which categorical perception (CP) provides our basic or elementary categories. In acquiring a category we learn to label or identify positive and negative instances from a sample of confusable alternatives. Two kinds of internal representation are built up in this learning by "acquaintance": (1) an iconic representation that subserves our similarity judgments and (2) an analog/digital feature-filter that picks out the invariant information allowing us to categorize the instances correctly. This second, categorical representation is (...)
    52 citations
  7. Connecting object to symbol in modeling cognition. Stevan Harnad - 1992 - In A. Clark & Ronald Lutz (eds.), Connectionism in Context. Springer Verlag. pp. 75-90.
    Connectionism and computationalism are currently vying for hegemony in cognitive modeling. At first glance the opposition seems incoherent, because connectionism is itself computational, but the form of computationalism that has been the prime candidate for encoding the "language of thought" has been symbolic computationalism (Dietrich 1990, Fodor 1975, Harnad 1990c; Newell 1980; Pylyshyn 1984), whereas connectionism is nonsymbolic (Fodor & Pylyshyn 1988, or, as some have hopefully dubbed it, "subsymbolic" Smolensky 1988). This paper will examine what is and is not (...)
    39 citations
  8. Psychophysical and cognitive aspects of categorical perception: A critical overview. Stevan Harnad - unknown
    There are many entry points into the problem of categorization. Two particularly important ones are the so-called top-down and bottom-up approaches. Top-down approaches such as artificial intelligence begin with the symbolic names and descriptions for some categories already given; computer programs are written to manipulate the symbols. Cognitive modeling involves the further assumption that such symbol-interactions resemble the way our brains do categorization. An explicit expectation of the top-down approach is that it will eventually join with the bottom-up approach, which (...)
    29 citations
  9. Minds, machines and Searle. Stevan Harnad - 1989 - Journal of Experimental and Theoretical Artificial Intelligence 1:5-25.
    Searle's celebrated Chinese Room Argument has shaken the foundations of Artificial Intelligence. Many refutations have been attempted, but none seem convincing. This paper is an attempt to sort out explicitly the assumptions and the logical, methodological and empirical points of disagreement. Searle is shown to have underestimated some features of computer modeling, but the heart of the issue turns out to be an empirical question about the scope and limits of the purely symbolic (computational) model of the mind. Nonsymbolic modeling (...)
    34 citations
  10. Minds, machines and Searle. Stevan Harnad - 1989 - Journal of Experimental and Theoretical Artificial Intelligence 1 (4):5-25.
    Searle's celebrated Chinese Room Argument has shaken the foundations of Artificial Intelligence. Many refutations have been attempted, but none seem convincing. This paper is an attempt to sort out explicitly the assumptions and the logical, methodological and empirical points of disagreement. Searle is shown to have underestimated some features of computer modeling, but the heart of the issue turns out to be an empirical question about the scope and limits of the purely symbolic model of the mind. Nonsymbolic modeling turns (...)
    29 citations
  11. [Book Chapter]. Stevan Harnad - 1987
     
    16 citations
  12. To Cognize is to Categorize: Cognition is Categorization. Stevan Harnad - 2005 - In C. Lefebvre & H. Cohen (eds.), Handbook of Categorization. Elsevier.
    2. Invariant Sensorimotor Features ("Affordances"). To say this is not to declare oneself a Gibsonian, whatever that means. It is merely to point out that what a sensorimotor system can do is determined by what can be extracted from its motor interactions with its sensory input. If you lack sonar sensors, then your sensorimotor system cannot do what a bat's can do, at least not without the help of instruments. Light stimulation affords color vision for those of us with the (...)
     
    14 citations
  13. Why and how we are not zombies. Stevan Harnad - 1994 - Journal of Consciousness Studies 1 (2):164-67.
    A robot that is functionally indistinguishable from us may or may not be a mindless Zombie. There will never be any way to know, yet its functional principles will be as close as we can ever get to explaining the mind.
    16 citations
  14. Minds, machines and Turing: The indistinguishability of indistinguishables. Stevan Harnad - 2000 - Journal of Logic, Language and Information 9 (4):425-445.
    Turing's celebrated 1950 paper proposes a very general methodological criterion for modelling mental function: total functional equivalence and indistinguishability. His criterion gives rise to a hierarchy of Turing Tests, from subtotal ("toy") fragments of our functions (t1), to total symbolic (pen-pal) function (T2 -- the standard Turing Test), to total external sensorimotor (robotic) function (T3), to total internal microfunction (T4), to total indistinguishability in every empirically discernible respect (T5). This is a "reverse-engineering" hierarchy of (decreasing) empirical underdetermination of the theory (...)
    16 citations
  15. 4 Years of Animal Sentience. Walter Veit & Stevan Harnad - forthcoming - Psychology Today.
    2 citations
  16. Virtual symposium on virtual mind. Patrick Hayes, Stevan Harnad, Donald Perlis & Ned Block - 1992 - Minds and Machines 2 (3):217-238.
    When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called "virtual" systems. If such a virtual system is interpretable as if it had a mind, is such a "virtual (...)
    13 citations
  17. The Turing test is not a trick: Turing indistinguishability is a scientific criterion. Stevan Harnad - 1992 - SIGART Bulletin 3 (4):9-10.
    It is important to understand that the Turing Test is not, nor was it intended to be, a trick; how well one can fool someone is not a measure of scientific progress. The TT is an empirical criterion: It sets AI's empirical goal to be to generate human-scale performance capacity. This goal will be met when the candidate's performance is totally indistinguishable from a human's. Until then, the TT simply represents what it is that AI must endeavor eventually to accomplish (...)
    17 citations
  18. Computation is just interpretable symbol manipulation; cognition isn't. Stevan Harnad - 1994 - Minds and Machines 4 (4):379-90.
    Computation is interpretable symbol manipulation. Symbols are objects that are manipulated on the basis of rules operating only on their shapes, which are arbitrary in relation to what they can be interpreted as meaning. Even if one accepts the Church/Turing Thesis that computation is unique, universal and very near omnipotent, not everything is a computer, because not everything can be given a systematic interpretation; and certainly everything can't be given every systematic interpretation. But even after computers and computation have been successfully distinguished (...)
    14 citations
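    As a concrete illustration of what the abstract above means by rules that operate only on symbol shapes, here is a minimal, hypothetical sketch (my own example, not from the paper): a toy rewrite system whose tokens and rules are arbitrary, and whose behavior is equally compatible with more than one systematic interpretation.

# Illustrative sketch only (hypothetical tokens and rules, not from the paper):
# a formal symbol system whose rules look only at symbol "shapes".
RULES = {("A", "B"): "C",   # purely shape-based rewrite rules
         ("C", "A"): "B"}

def manipulate(tokens):
    """Apply the first matching shape-based rule to the leftmost adjacent pair."""
    for i in range(len(tokens) - 1):
        if (tokens[i], tokens[i + 1]) in RULES:
            return tokens[:i] + [RULES[(tokens[i], tokens[i + 1])]] + tokens[i + 2:]
    return tokens

# The trace below can be read as arithmetic, as logic, or as moves in a game;
# the manipulation itself is indifferent to which interpretation we project onto it.
print(manipulate(["A", "B", "A"]))   # -> ['C', 'A']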
  19. Can a machine be conscious? How? Stevan Harnad - 2003 - Journal of Consciousness Studies 10 (4-5):67-75.
    A "machine" is any causal physical system, hence we are machines, hence machines can be conscious. The question is: which kinds of machines can be conscious? Chances are that robots that can pass the Turing Test -- completely indistinguishable from us in their behavioral capacities -- can be conscious (i.e. feel), but we can never be sure (because of the "other-minds" problem). And we can never know HOW they have minds, because of the "mind/body" problem. We can only know how (...)
    11 citations
  20. Distributed processes, distributed cognizers and collaborative cognition. Stevan Harnad - 2005 - [Journal (Paginated)] (in Press) 13 (3):501-514.
    Cognition is thinking; it feels like something to think, and only those who can feel can think. There are also things that thinkers can do. We know neither how thinkers can think nor how they are able to do what they can do. We are waiting for cognitive science to discover how. Cognitive science does this by testing hypotheses about what processes can generate what doing (“know-how”). This is called the Turing Test. It cannot test whether a process can generate feeling, (...)
    8 citations
  21. Symbol grounding and the symbolic theft hypothesis. Angelo Cangelosi, Alberto Greco & Stevan Harnad - 2002 - In A. Cangelosi & D. Parisi (eds.), Simulating the Evolution of Language. Springer Verlag. pp. 191-210.
    Scholars studying the origins and evolution of language are also interested in the general issue of the evolution of cognition. Language is not an isolated capability of the individual, but has intrinsic relationships with many other behavioral, cognitive, and social abilities. By understanding the mechanisms underlying the evolution of linguistic abilities, it is possible to understand the evolution of cognitive abilities. Cognitivism, one of the current approaches in psychology and cognitive science, proposes that symbol systems capture mental phenomena, and attributes (...)
    11 citations
  22. Grounding symbols in the analog world with neural nets. Stevan Harnad - 1993 - Think (misc) 2 (1):12-78.
    Harnad's main argument can be roughly summarised as follows: due to Searle's Chinese Room argument, symbol systems by themselves are insufficient to exhibit cognition, because the symbols are not grounded in the real world, hence without meaning. However, a symbol system that is connected to the real world through transducers receiving sensory data, with neural nets translating these data into sensory categories, would not be subject to the Chinese Room argument. Harnad's article is not only the starting point for the (...)
    12 citations
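    A minimal sketch of the kind of hybrid pipeline the abstract above describes, under assumptions of my own (the prototype vectors, category names and nearest-prototype rule are illustrative stand-ins, not Harnad's model): analog sensory input is mapped by a learned categorizer onto a category name, and only such grounded names reach the symbolic level.

# Illustrative sketch only: a toy "transducer -> neural categorizer -> symbol" pipeline.
# The prototypes stand in for whatever a trained neural net would extract from sensory data.
import numpy as np

prototypes = {               # hypothetical learned sensory prototypes
    "horse":   np.array([0.9, 0.1]),
    "stripes": np.array([0.1, 0.9]),
}

def ground(analog_input):
    """Nearest-prototype categorization: analog vector -> grounded category name."""
    return min(prototypes, key=lambda name: np.linalg.norm(analog_input - prototypes[name]))

def describe(analog_inputs):
    """The symbolic level manipulates only names that came through grounding."""
    return " & ".join(ground(x) for x in analog_inputs)

print(describe([np.array([0.85, 0.2]), np.array([0.15, 0.8])]))   # -> horse & stripes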
  23. Turing indistinguishability and the blind watchmaker. Stevan Harnad - 2002 - In James H. Fetzer (ed.), Consciousness Evolving. John Benjamins. pp. 3-18.
    Many special problems crop up when evolutionary theory turns, quite naturally, to the question of the adaptive value and causal role of consciousness in human and nonhuman organisms. One problem is that -- unless we are to be dualists, treating it as an independent nonphysical force -- consciousness could not have had an independent adaptive function of its own, over and above whatever behavioral and physiological functions it "supervenes" on, because evolution is completely blind to the difference between a conscious (...)
    11 citations
  24. Distributed processes, distributed cognizers, and collaborative cognition. Stevan Harnad - 2005 - Pragmatics and Cognition 13 (3):501-514.
    Cognition is thinking; it feels like something to think, and only those who can feel can think. There are also things that thinkers can do. We know neither how thinkers can think nor how they are able to do what they can do. We are waiting for cognitive science to discover how. Cognitive science does this by testing hypotheses about what processes can generate what doing. This is called the Turing Test. It cannot test whether a process can generate feeling, hence (...)
    13 citations
  25. Symbol‐grounding Problem. Stevan Harnad - 2003 - In L. Nadel (ed.), Encyclopedia of Cognitive Science. Nature Publishing Group.
     
    11 citations
  26. Symbol grounding and the origin of language. Stevan Harnad - 2002 - In Matthias Scheutz (ed.), Computationalism: New Directions. MIT Press.
    What language allows us to do is to "steal" categories quickly and effortlessly through hearsay instead of having to earn them the hard way, through risky and time-consuming sensorimotor "toil" (trial-and-error learning, guided by corrective feedback from the consequences of miscategorisation). To make such linguistic "theft" possible, however, some, at least, of the denoting symbols of language must first be grounded in categories that have been earned through sensorimotor toil (or else in categories that have already been "prepared" for us (...)
    8 citations
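    To make the "theft" vs. "toil" contrast in the abstract above concrete, here is a tiny hypothetical sketch (my own example in the spirit of the abstract, not code from the chapter): two categories grounded the hard way are composed, by a purely verbal definition, into a new category acquired without any direct sensorimotor exposure.

# Illustrative sketch: "symbolic theft" as composition of already grounded categories.
# The feature dictionaries and detectors are hypothetical.

def is_horse(x):    # assumed to have been grounded through sensorimotor "toil"
    return x.get("shape") == "equine"

def is_striped(x):  # likewise grounded the hard way
    return x.get("pattern") == "striped"

def is_zebra(x):
    # acquired purely by hearsay: "a zebra is a striped horse"
    return is_horse(x) and is_striped(x)

print(is_zebra({"shape": "equine", "pattern": "striped"}))   # -> True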
  27. The Latent Structure of Dictionaries. Philippe Vincent-Lamarre, Alexandre Blondin Massé, Marcos Lopes, Mélanie Lord, Odile Marcotte & Stevan Harnad - 2016 - Topics in Cognitive Science 8 (3):625-659.
    How many words—and which ones—are sufficient to define all other words? When dictionaries are analyzed as directed graphs with links from defining words to defined words, they reveal a latent structure. Recursively removing all words that are reachable by definition but that do not define any further words reduces the dictionary to a Kernel of about 10% of its size. This is still not the smallest number of words that can define all the rest. About 75% of the Kernel turns (...)
    3 citations
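    One reading of the reduction procedure described in the abstract above can be sketched as follows (a toy dictionary and a simplified rule of my own; the paper's actual graph-theoretic analysis is richer): repeatedly discard any word that no longer helps define a word still in the dictionary, and what survives is the mutually defining core.

# Illustrative sketch of a dictionary-as-graph reduction (toy data, simplified rule).
# defines[w] lists the words in whose definitions w is used (edges: definer -> defined).
toy_defines = {
    "a": ["b", "c"],   # "a" occurs in the definitions of "b" and "c"
    "b": ["a"],
    "c": ["d"],
    "d": [],           # "d" defines nothing further
}

def kernel(defines):
    remaining = set(defines)
    changed = True
    while changed:
        changed = False
        for w in list(remaining):
            # discard w if it defines no word that is still in the dictionary
            if not any(v in remaining for v in defines.get(w, [])):
                remaining.discard(w)
                changed = True
    return remaining

print(kernel(toy_defines))   # -> {'a', 'b'}: the mutually defining core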
  28. Minds, machines and Turing: The indistinguishability of indistinguishables. Stevan Harnad - unknown
    Turing's celebrated 1950 paper proposes a very general methodological criterion for modelling mental function: total functional equivalence and indistinguishability. His criterion gives rise to a hierarchy of Turing Tests, from subtotal ("toy") fragments of our functions (t1), to total symbolic (pen-pal) function (T2 -- the standard Turing Test), to total external sensorimotor (robotic) function (T3), to total internal microfunction (T4), to total indistinguishability in every empirically discernible respect (T5). This is a "reverse-engineering" hierarchy of (decreasing) empirical underdetermination of the theory (...)
    12 citations
  29. Levels of functional equivalence in reverse bioengineering: The Darwinian Turing test for artificial life. Stevan Harnad - 1994 - Artificial Life 1 (3):293-301.
    Both Artificial Life and Artificial Mind are branches of what Dennett has called "reverse engineering": Ordinary engineering attempts to build systems to meet certain functional specifications, reverse bioengineering attempts to understand how systems that have already been built by the Blind Watchmaker work. Computational modelling (virtual life) can capture the formal principles of life, perhaps predict and explain it completely, but it can no more be alive than a virtual forest fire can be hot. In itself, a computational model is (...)
    8 citations
  30. Distributed processes, distributed cognizers, and collaborative cognition. Stevan Harnad - 2005 - Pragmatics and Cognition 13 (2):501-514.
    Cognition is thinking; it feels like something to think, and only those who can feel can think. There are also things that thinkers can do. We know neither how thinkers can think nor how they are able to do what they can do. We are waiting for cognitive science to discover how. Cognitive science does this by testing hypotheses about what processes can generate what doing. This is called the Turing Test. It cannot test whether a process can generate feeling, hence (...)
    9 citations
  31. Learned Inquiry and the Net: The Role of Peer Review, Peer Commentary and Copyright. Stevan Harnad - unknown
    Peer Review and Copyright each have a double role: Formal refereeing protects (R1) the author from publishing and (R2) the reader from reading papers that are not of sufficient quality. Copyright protects the author from (C1) theft of text and (C2) theft of authorship. It has been suggested that in the electronic medium we can dispense with peer review, "publish" everything, and let browsing and commentary do the quality control. It has also been suggested that special safeguards and laws may (...)
    5 citations
  32. Categorical Perception and the Evolution of Supervised Learning in Neural Nets. Stevan Harnad & SJ Hanson - unknown
    Some of the features of animal and human categorical perception (CP) for color, pitch and speech are exhibited by neural net simulations of CP with one-dimensional inputs: When a backprop net is trained to discriminate and then categorize a set of stimuli, the second task is accomplished by "warping" the similarity space (compressing within-category distances and expanding between-category distances). This natural side-effect also occurs in humans and animals. Such CP categories, consisting of named, bounded regions of similarity space, may be (...)
     
    5 citations
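    The "warping" effect the abstract above describes (within-category compression, between-category separation in similarity space) can be quantified with a simple distance comparison. The sketch below uses hypothetical hidden-unit representations and is not the authors' simulation code.

# Illustrative sketch: measuring categorical-perception "warping" in a representation space.
import numpy as np

def warping(reps, labels):
    """Return mean within-category and mean between-category pairwise distances."""
    reps = np.asarray(reps, dtype=float)
    within, between = [], []
    for i in range(len(reps)):
        for j in range(i + 1, len(reps)):
            d = np.linalg.norm(reps[i] - reps[j])
            (within if labels[i] == labels[j] else between).append(d)
    return np.mean(within), np.mean(between)

# Hypothetical hidden-layer representations after category training
reps   = [[0.10, 0.10], [0.15, 0.12], [0.90, 0.88], [0.95, 0.90]]
labels = [0, 0, 1, 1]
w, b = warping(reps, labels)
print(f"within = {w:.3f}, between = {b:.3f}")   # CP predicts within << between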
  33. Lost in the hermeneutic hall of mirrors. Stevan Harnad - 1990 - Journal of Experimental and Theoretical Artificial Intelligence 2:321-27.
    Critique of Computationalism as merely projecting hermeneutics (i.e., meaning originating from the mind of an external interpreter) onto otherwise intrinsically meaningless symbols. Projecting an interpretation onto a symbol system results in its being reflected back, in a spuriously self-confirming way.
    7 citations
  34. Correlation vs. causality: How/why the mind-body problem is hard. Stevan Harnad - 2000 - Journal of Consciousness Studies 7 (4):54-61.
    The Mind/Body Problem is about causation, not correlation. And its solution will require a mechanism in which the mental component somehow manages to play a causal role of its own, rather than just supervening superfluously on other, nonmental components that look, for all the world, as if they can do the full causal job perfectly well without it. Correlations confirm that M does indeed "supervene" on B, but causality is needed to show how/why M is not supererogatory; and that's the (...)
    5 citations
  35. Creative disagreement. Stevan Harnad - unknown
    Do scientists agree? It is not only unrealistic to suppose that they do, but probably just as unrealistic to think that they ought to. Agreement is for what is already established scientific history. The current and vital ongoing aspect of science consists of an active and often heated interaction of data, ideas and minds, in a process one might call "creative disagreement." The "scientific method" is largely derived from a reconstruction based on selective hindsight. What actually goes on has much (...)
    5 citations
  36. Validating research performance metrics against peer rankings. Stevan Harnad - 2008 - Ethics in Science and Environmental Politics 8 (1):103-107.
  37. Grounding symbols in the analog world with neural nets: A hybrid model. Stevan Harnad - unknown
    1.1 The predominant approach to cognitive modeling is still what has come to be called "computationalism" (Dietrich 1990, Harnad 1990b), the hypothesis that cognition is computation. The more recent rival approach is "connectionism" (Hanson & Burr 1990, McClelland & Rumelhart 1986), the hypothesis that cognition is a dynamic pattern of connections and activations in a "neural net." Are computationalism and connectionism really deeply different from one another, and if so, should they compete for cognitive hegemony, or should they collaborate? These (...)
    4 citations
  38. Sleep and Dreaming: Scientific Advances and Reconsiderations. Edward F. Pace-Schott, Mark Solms, Mark Blagrove & Stevan Harnad (eds.) - 2003 - Cambridge University Press.
    Print restrictions: up to 10 pages can be printed at a time, and a maximum of 40 pages per session.
    4 citations
  39. Scholarly skywriting and the prepublication continuum of scientific inquiry. Stevan Harnad - unknown
    William Gardner's proposal to establish a searchable, retrievable electronic archive is fine, as far as it goes. The potential role of electronic networks in scientific publication, however, goes far beyond providing searchable electronic archives for electronic journals. The whole process of scholarly communication is currently undergoing a revolution comparable to the one occasioned by the invention of printing. On the brink of intellectual perestroika is that vast PREPUBLICATION phase of scientific inquiry in which ideas and findings are discussed informally with (...)
    4 citations
  40. Artificial life: Synthetic versus virtual. Stevan Harnad - 1993 - In Chris Langton (ed.), Santa Fe Institute Studies in the Sciences of Complexity. Volume XVI: 539ff. Reading, USA: Addison-Wesley.
    Artificial life can take two forms: synthetic and virtual. In principle, the materials and properties of synthetic living systems could differ radically from those of natural living systems yet still resemble them enough to be really alive if they are grounded in the relevant causal interactions with the real world. Virtual (purely computational) "living" systems, in contrast, are just ungrounded symbol systems that are systematically interpretable as if they were alive; in reality they are no more alive than a virtual (...)
    5 citations
  41. Distributed cognition: Cognizing, autonomy and the Turing test. Stevan Harnad & Itiel E. Dror - 2006 - Pragmatics and Cognition 14 (2):14.
    Some of the papers in this special issue distribute cognition between what is going on inside individual cognizers' heads and their outside worlds; others distribute cognition among different individual cognizers. Turing's criterion for cognition was individual, autonomous input/output capacity. It is not clear that distributed cognition could pass the Turing Test.
    3 citations
  42. Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding. Stevan Harnad & Stephen J. Hanson - unknown
    After people learn to sort objects into categories they see them differently. Members of the same category look more alike and members of different categories look more different. This phenomenon of within-category compression and between-category separation in similarity space is called categorical perception (CP). It is exhibited by human subjects, animals and neural net models. In backpropagation nets trained first to auto-associate 12 stimuli varying along a one-dimensional continuum and then to sort them into 3 categories, CP arises as a (...)
    3 citations
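    A minimal sketch of the two-phase regime the abstract above describes, with layer sizes, coarse coding and learning rate chosen for illustration (this is not the authors' simulation): a small backprop net first auto-associates 12 stimuli from a one-dimensional continuum, then learns to sort them into 3 categories through the same hidden layer.

# Illustrative sketch only: auto-association followed by categorization in a tiny backprop net.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

positions = np.linspace(0, 1, 12)                    # 12 stimuli on a 1-D continuum
centers = np.linspace(0, 1, 8)                       # 8 coarse-coded input units
X = np.exp(-((positions[:, None] - centers[None, :]) ** 2) / (2 * 0.15 ** 2))
labels = np.repeat([0, 1, 2], 4)                     # 3 categories of 4 stimuli each
Y_cat = np.eye(3)[labels]

W1 = rng.normal(0.0, 0.5, (8, 4))                    # shared input-to-hidden weights

def train(W1, targets, epochs=2000, lr=0.5):
    """Plain gradient descent on squared error; W1 is updated in place and reused across tasks."""
    W2 = rng.normal(0.0, 0.5, (4, targets.shape[1])) # task-specific output weights
    for _ in range(epochs):
        H = sigmoid(X @ W1)
        O = sigmoid(H @ W2)
        dO = (O - targets) * O * (1 - O)
        dH = (dO @ W2.T) * H * (1 - H)
        W2 -= lr * (H.T @ dO)
        W1 -= lr * (X.T @ dH)
    return W1

W1 = train(W1, X)        # phase 1: auto-association (targets are the inputs themselves)
W1 = train(W1, Y_cat)    # phase 2: categorization (targets are the 3 category codes)

H = sigmoid(X @ W1)      # hidden representations after category training
print(np.round(H, 2))    # CP prediction: rows from the same category cluster together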
  43. Symbol grounding is an empirical problem: Neural nets are just a candidate component. Stevan Harnad - 1993
    "Symbol Grounding" is beginning to mean too many things to too many people. My own construal has always been simple: Cognition cannot be just computation, because computation is just the systematically interpretable manipulation of meaningless symbols, whereas the meanings of my thoughts don't depend on their interpretability or interpretation by someone else. On pain of infinite regress, then, symbol meanings must be grounded in something other than just their interpretability if they are to be candidates for what is going on (...)
    4 citations
  44. Against computational hermeneutics. Stevan Harnad - 1990 - Social Epistemology 4:167-172.
    Critique of Computationalism as merely projecting hermeneutics (i.e., meaning originating from the mind of an external interpreter) onto otherwise intrinsically meaningless symbols.
    5 citations
  45. The annotation game: On Turing (1950) on computing, machinery, and intelligence. Stevan Harnad - 2006 - In Robert Epstein & Grace Peters (eds.), [Book Chapter] (in Press). Kluwer Academic Publishers.
    This quote/commented critique of Turing's classical paper suggests that Turing meant -- or should have meant -- the robotic version of the Turing Test (and not just the email version). Moreover, any dynamic system (that we design and understand) can be a candidate, not just a computational one. Turing also dismisses the other-minds problem and the mind/body problem too quickly. They are at the heart of both the problem he is addressing and the solution he is proposing.
    3 citations
  46. Grounding Symbolic Capacity in Robotic Capacity. Stevan Harnad - unknown
    According to "computationalism" (Newell, 1980; Pylyshyn 1984; Dietrich 1990), mental states are computational states, so if one wishes to build a mind, one is actually looking for the right program to run on a digital computer. A computer program is a semantically interpretable formal symbol system consisting of rules for manipulating symbols on the basis of their shapes, which are arbitrary in relation to what they can be systematically interpreted as meaning. According to computationalism, every physical implementation of the right (...)
    3 citations
  47. Experimental Analysis of Naming Behavior Cannot Explain Naming Capacity. Stevan Harnad - unknown
    The experimental analysis of naming behavior can tell us exactly the kinds of things Horne & Lowe (H & L) report here: (1) the conditions under which people and animals succeed or fail in naming things and (2) the conditions under which bidirectional associations are formed between inputs (objects, pictures of objects, seen or heard names of objects) and outputs (spoken names of objects, multimodal operations on objects). The "stimulus equivalence" that H & L single out is really just the (...)
    3 citations
  48. The origin of words: A psychophysical hypothesis. Stevan Harnad - 1996 - In [Book Chapter].
    It is hypothesized that words originated as the names of perceptual categories and that two forms of representation underlying perceptual categorization -- iconic and categorical representations -- served to ground a third, symbolic, form of representation. The third form of representation made it possible to name and describe our environment, chiefly in terms of categories, their memberships, and their invariant features. Symbolic representations can be shared because they are intertranslatable. Both categorization and translation are approximate rather than exact, but the (...)
    4 citations
  49. The Timing of a Conscious Decision: From Ear to Mouth. Stevan Harnad - unknown
    Libet, Gleason, Wright, & Pearl (1983) asked participants to report the moment at which they freely decided to initiate a pre-specified movement, based on the position of a red marker on a clock. Using event-related potentials (ERPs), Libet found that the subjective feeling of deciding to perform a voluntary action came after the onset of the motor “readiness potential” (RP). This counterintuitive conclusion poses a challenge for the philosophical notion of free will. Faced with these findings, Libet (1985) proposed that (...)
     
  50. Metaphor and Mental Duality. Stevan Harnad - 1982 - In T. Simon & R. Scholes (eds.), Language, Mind, and Brain. Hillsdale, NJ: Erlbaum. pp. 189-211.
    I am going to attempt to argue that, given certain premises, there are reasons, not only empirical, but also logical, for expecting a certain division of labor in the processing of information by the human brain. This division of labor consists specifically of a functional bifurcation into what may be called, to a first approximation, "verbal" and "nonverbal" modes of information-processing. That this dichotomy is not quite satisfactory, however, will be one of the principal conclusions of this chapter, for I (...)
    3 citations
1–50 of 133