133 found
Disambiguations: Stevan Harnad [133]; Stevan Robert Harnad [1]
Profile: Stevan Harnad (Université du Québec à Montréal, University of Southampton)
  1. Stevan Harnad, Interactive Cognition: Exploring the Potential of Electronic Quote/Commenting.
    Human cognition is not an island unto itself. As a species, we are not Leibnizian Monads independently engaging in clear, Cartesian thinking. Our minds interact. That's surely why our species has language. And that interactivity probably constrains both what and how we think.
  2. Stevan Harnad, The Timing of a Conscious Decision: From Ear to Mouth.
    Libet, Gleason, Wright, & Pearl (1983) asked participants to report the moment at which they freely decided to initiate a pre-specified movement, based on the position of a red marker on a clock. Using event-related potentials (ERPs), Libet found that the subjective feeling of deciding to perform a voluntary action came after the onset of the motor "readiness potential" (RP). This counterintuitive conclusion poses a challenge for the philosophical notion of free will. Faced with these findings, Libet (1985) proposed that (...)
  3. Stevan Harnad (1982). Consciousness: An Afterthought. Cognition and Brain Theory 5:29-47.
    There are many possible approaches to the mind/brain problem. One of the most prominent, and perhaps the most practical, is to ignore it.
    133 citations
  4. Stevan Harnad (1990). The Symbol Grounding Problem. Physica D 42:335-346.
    There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the symbol grounding problem : How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their shapes, be grounded (...)
    76 citations
  5. Stevan Harnad (2003). Categorical Perception. In L. Nadel (ed.), Encyclopedia of Cognitive Science. Nature Publishing Group 67--4.
    37 citations
  6. Stevan Harnad (2005). Distributed Processes, Distributed Cognizers and Collaborative Cognition. [Journal (Paginated)] (in Press) 13 (3):01-514.
    Cognition is thinking; it feels like something to think, and only those who can feel can think. There are also things that thinkers can do. We know neither how thinkers can think nor how they are able to do what they can do. We are waiting for cognitive science to discover how. Cognitive science does this by testing hypotheses about what processes can generate what doing ("know-how"). This is called the Turing Test. It cannot test whether a process can generate feeling, (...)
    3 citations
  7. Stevan Harnad (1991). Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem. [REVIEW] Minds and Machines 1 (1):43-54.
    Explaining the mind by building machines with minds runs into the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what is (...)
    52 citations
  8. Stevan Harnad, The Causal Topography of Cognition.
    The causal structure of cognition can be simulated but not implemented computationally, just as the causal structure of a furnace can be simulated but not implemented computationally. Heating is a dynamical property, not a computational one. A computational simulation of a furnace cannot heat a real house (only a simulated house). It lacks the essential causal property of a furnace. This is obvious with computational furnaces. The only thing that allows us even to imagine that it is otherwise in the (...)
  9. Stevan Harnad (1992). Connecting Object to Symbol in Modeling Cognition. In A. Clark & Ronald Lutz (eds.), Connectionism in Context. Springer-Verlag 75--90.
    Connectionism and computationalism are currently vying for hegemony in cognitive modeling. At first glance the opposition seems incoherent, because connectionism is itself computational, but the form of computationalism that has been the prime candidate for encoding the "language of thought" has been symbolic computationalism (Dietrich 1990; Fodor 1975; Harnad 1990c; Newell 1980; Pylyshyn 1984), whereas connectionism is nonsymbolic (Fodor & Pylyshyn 1988) or, as some have hopefully dubbed it, "subsymbolic" (Smolensky 1988). This paper will examine what is and is not (...)
    34 citations
  10. Stevan Harnad (2011). Lunch Uncertain [Review of: Floridi, Luciano (2011) The Philosophy of Information (Oxford)]. Times Literary Supplement 5664 (22-23).
    The usual way to try to ground knowing according to contemporary theory of knowledge is: We know something if (1) it’s true, (2) we believe it, and (3) we believe it for the “right” reasons. Floridi proposes a better way. His grounding is based partly on probability theory, and partly on a question/answer network of verbal and behavioural interactions evolving in time. This is rather like modeling the data-exchange between a data-seeker who needs to know which button to press on (...)
  11. Stevan Harnad, Waking OA's “Slumbering Giant”: The University's Mandate To Mandate Open Access.
    SUMMARY: Universities (the universal research-providers) as well as research funders (public and private) are beginning to make it part of their mandates to ensure not only that researchers conduct and publish peer-reviewed research (“publish or perish”), but that they also make it available online, free for all. This is called Open Access (OA), and it maximizes the uptake, impact and progress of research by making it accessible to all potential users worldwide, not just those whose universities can afford to subscribe (...)
  12. Stevan Harnad (2003). Can a Machine Be Conscious? How? Journal of Consciousness Studies 10 (4):67-75.
    A "machine" is any causal physical system, hence we are machines, hence machines can be conscious. The question is: which kinds of machines can be conscious? Chances are that robots that can pass the Turing Test -- completely indistinguishable from us in their behavioral capacities -- can be conscious (i.e. feel), but we can never be sure (because of the "other-minds" problem). And we can never know HOW they have minds, because of the "mind/body" problem. We can only know how (...)
    2 citations
  13. Stevan Harnad (2000). Minds, Machines and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language and Information 9 (4):425-445.
    Turing's celebrated 1950 paper proposes a very general methodological criterion for modelling mental function: total functional equivalence and indistinguishability. His criterion gives rise to a hierarchy of Turing Tests, from subtotal ("toy") fragments of our functions (t1), to total symbolic (pen-pal) function (T2 -- the standard Turing Test), to total external sensorimotor (robotic) function (T3), to total internal microfunction (T4), to total indistinguishability in every empirically discernible respect (T5). This is a "reverse-engineering" hierarchy of (decreasing) empirical underdetermination of the theory (...)
    3 citations
  14. Stevan Harnad (2001). What's Wrong and Right About Searle's Chinese Room Argument? In Michael A. Bishop & John M. Preston (eds.), [Book Chapter] (in Press). Oxford University Press
    Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind).
  15. Stevan Harnad (1989). Minds, Machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence 1 (4):5-25.
    Searle's celebrated Chinese Room Argument has shaken the foundations of Artificial Intelligence. Many refutations have been attempted, but none seem convincing. This paper is an attempt to sort out explicitly the assumptions and the logical, methodological and empirical points of disagreement. Searle is shown to have underestimated some features of computer modeling, but the heart of the issue turns out to be an empirical question about the scope and limits of the purely symbolic model of the mind. Nonsymbolic modeling turns (...)
    1 citation
  16. Stevan Harnad & Itiel Dror (2006). Distributed Cognition: Cognizing, Autonomy and the Turing Test. Pragmatics and Cognition 14 (2):14.
    Some of the papers in this special issue distribute cognition between what is going on inside individual cognizers' heads and their outside worlds; others distribute cognition among different individual cognizers. Turing's criterion for cognition was individual, autonomous input/output capacity. It is not clear that distributed cognition could pass the Turing Test.
    1 citation
  17. Stevan Harnad, Psychophysical and Cognitive Aspects of Categorical Perception: A Critical Overview.
    There are many entry points into the problem of categorization. Two particularly important ones are the so-called top-down and bottom-up approaches. Top-down approaches such as artificial intelligence begin with the symbolic names and descriptions for some categories already given; computer programs are written to manipulate the symbols. Cognitive modeling involves the further assumption that such symbol-interactions resemble the way our brains do categorization. An explicit expectation of the top-down approach is that it will eventually join with the bottom-up approach, which (...)
    5 citations
  18. Stevan Harnad (2003). Minds, Machines, and Searle 2: What's Right and Wrong About the Chinese Room Argument. In John M. Preston & John Mark Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press
    When in 1979 Zenon Pylyshyn, associate editor of Behavioral and Brain Sciences (BBS, a peer commentary journal which I edit) informed me that he had secured a paper by John Searle with the unprepossessing title of [XXXX], I cannot say that I was especially impressed; nor did a quick reading of the brief manuscript -- which seemed to be yet another tedious "Granny Objection"[1] about why/how we are not computers -- do anything to upgrade that impression.
  19. Patrick Hayes, Stevan Harnad, Donald R. Perlis & Ned Block (1992). Virtual Symposium on Virtual Mind. Minds and Machines 2 (3):217-238.
    When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called "virtual" systems. If such a virtual system is interpretable as if it had a mind, is such a "virtual (...)
    5 citations
  20. Stevan Harnad (2000). Correlation Vs. Causality: How/Why the Mind-Body Problem is Hard. Journal of Consciousness Studies 7 (4):54-61.
    The Mind/Body Problem is about causation not correlation. And its solution will require a mechanism in which the mental component somehow manages to play a causal role of its own, rather than just supervening superfluously on other, nonmental components that look, for all the world, as if they can do the full causal job perfectly well without it. Correlations confirm that M does indeed "supervene" on B, but causality is needed to show how/why M is not supererogatory; and that's the (...)
    1 citation
  21. Stevan Harnad (2001). Explaining the Mind: Problems, Problems. The Sciences 41:36-42.
    The mind/body problem is the feeling/function problem: How and why do feeling systems feel? The problem is not just "hard" but insoluble. Fortunately, the "easy" problems of cognitive science are not insoluble. Five books are reviewed in this context.
  22. Stevan Harnad (1994). Why and How We Are Not Zombies. Journal of Consciousness Studies 1 (2):164-67.
    A robot that is functionally indistinguishable from us may or may not be a mindless Zombie. There will never be any way to know, yet its functional principles will be as close as we can ever get to explaining the mind.
    1 citation
  23. Stevan Harnad, There is No Concrete.
    We are accustomed to thinking that a primrose is "concrete" and a prime number is "abstract," that "roundness" is more abstract than "round," and that "property" is more abstract than "roundness." In reality, the relation between "abstract" and "concrete" is more like the (non)relation between "abstract" and "concave," "concrete" being a sensory term [about what something feels like] and "abstract" being a functional term (about what the sensorimotor system is doing with its input in order to produce its output): Feelings (...)
  24. Stevan Harnad (1989). Minds, Machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence 1 (4):5-25.
    Searle's celebrated Chinese Room Argument has shaken the foundations of Artificial Intelligence. Many refutations have been attempted, but none seem convincing. This paper is an attempt to sort out explicitly the assumptions and the logical, methodological and empirical points of disagreement. Searle is shown to have underestimated some features of computer modeling, but the heart of the issue turns out to be an empirical question about the scope and limits of the purely symbolic (computational) model of the mind. Nonsymbolic modeling (...)
    8 citations
  25. Stevan Harnad, Learned Inquiry and the Net: The Role of Peer Review, Peer Commentary and Copyright.
    Peer Review and Copyright each have a double role: Formal refereeing protects (R1) the author from publishing and (R2) the reader from reading papers that are not of sufficient quality. Copyright protects the author from (C1) theft of text and (C2) theft of authorship. It has been suggested that in the electronic medium we can dispense with peer review, "publish" everything, and let browsing and commentary do the quality control. It has also been suggested that special safeguards and laws may (...)
    2 citations
  26. Stevan Harnad (2001). Rights and Wrongs of Searle's Chinese Room Argument. In M. Bishop & J. Preston (eds.), Essays on Searle's Chinese Room Argument. Oxford University Press
    "In an academic generation a little overaddicted to 'politesse,' it may be worth saying that violent destruction is not necessarily worthless and futile. Even though it leaves doubt about the right road for London, it helps if someone rips up, however violently, a.
  27. Stevan Harnad (1992). Virtual Symposium on Virtual Mind. Minds and Machines 2 (3):217-238.
    When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called ‘virtual’ systems. If such a virtual system is interpretable as if it had a mind, is such a ‘virtual (...)
  28. Stevan Harnad, There is Only One Mind/Body Problem.
    In our century a Frege/Brentano wedge has gradually been driven into the mind/body problem so deeply that it appears to have split it into two: The problem of "qualia" and the problem of "intentionality." Both problems use similar intuition pumps: For qualia, we imagine a robot that is indistinguishable from us in every objective respect, but it lacks subjective experiences; it is mindless. For intentionality, we again imagine a robot that is indistinguishable from us in every objective respect but its (...)
    1 citation
  29. Stevan Harnad (2002). Symbol Grounding and the Origin of Language. In Matthias Scheutz (ed.), Computationalism: New Directions. MIT Press
    What language allows us to do is to "steal" categories quickly and effortlessly through hearsay instead of having to earn them the hard way, through risky and time-consuming sensorimotor "toil" (trial-and-error learning, guided by corrective feedback from the consequences of miscategorisation). To make such linguistic "theft" possible, however, some, at least, of the denoting symbols of language must first be grounded in categories that have been earned through sensorimotor toil (or else in categories that have already been "prepared" for us (...)
    1 citation
  30. Stevan Harnad, Symbol Grounding is an Empirical Problem: Neural Nets Are Just a Candidate Component.
    "Symbol Grounding" is beginning to mean too many things to too many people. My own construal has always been simple: Cognition cannot be just computation, because computation is just the systematically interpretable manipulation of meaningless symbols, whereas the meanings of my thoughts don't depend on their interpretability or interpretation by someone else. On pain of infinite regress, then, symbol meanings must be grounded in something other than just their interpretability if they are to be candidates for what is going on (...)
    1 citation
  31. Stevan Harnad, On Fodor on Darwin on Evolution.
    Jerry Fodor argues that Darwin was wrong about "natural selection" because (1) it is only a tautology rather than a scientific law that can support counterfactuals ("If X had happened, Y would have happened") and because (2) only minds can select. Hence Darwin's analogy with "artificial selection" by animal breeders was misleading and evolutionary explanation is nothing but post-hoc historical narrative. I argue that Darwin was right on all counts. Until Darwin's "tautology," it had been believed that either (a) God (...)
  32. Philippe Vincent‐Lamarre, Alexandre Blondin Massé, Marcos Lopes, Mélanie Lord, Odile Marcotte & Stevan Harnad (2016). The Latent Structure of Dictionaries. Topics in Cognitive Science 8 (2):n/a-n/a.
    How many words—and which ones—are sufficient to define all other words? When dictionaries are analyzed as directed graphs with links from defining words to defined words, they reveal a latent structure. Recursively removing all words that are reachable by definition but that do not define any further words reduces the dictionary to a Kernel of about 10% of its size. This is still not the smallest number of words that can define all the rest. About 75% of the Kernel turns (...)
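    The recursive pruning described in this abstract can be sketched as a small graph computation. The sketch below is my own toy illustration with an invented six-word mini-dictionary, not the authors' algorithm or data: treat each entry as edges from defining words to the defined word, then repeatedly delete any word that is no longer used in the definition of a remaining word. What survives is a small residue from which every deleted word can be reached by definition.

    ```python
    # Toy sketch of the "Kernel" pruning idea (invented mini-dictionary,
    # not the paper's code or data). defs maps each word to the set of
    # words used in its definition.
    def kernel(defs):
        """Repeatedly remove words not used to define any remaining word."""
        words = set(defs)
        while True:
            used = set().union(*(defs[w] for w in words)) if words else set()
            removable = {w for w in words if w not in used}
            if not removable:
                return words
            words -= removable

    toy = {
        "animal": {"living", "thing"},
        "dog":    {"animal"},
        "puppy":  {"dog", "young"},
        "living": {"thing"},
        "thing":  {"thing"},   # circular: defined in terms of itself
        "young":  {"living"},
    }
    # Pruning removes puppy first, then dog and young, then animal, then
    # living, leaving only the self-defining residue.
    ```

    On this toy input the surviving set is {"thing"}: every other word is reachable from it by following definitions, which is the sense in which a small Kernel can define all the rest.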
  33. Stevan Harnad (2003). Symbol‐Grounding Problem. In L. Nadel (ed.), Encyclopedia of Cognitive Science. Nature Publishing Group
  34. Stevan Harnad (2005). To Cognize is to Categorize: Cognition is Categorization. In C. Lefebvre & H. Cohen (eds.), Handbook of Categorization. Elsevier
    2. Invariant Sensorimotor Features ("Affordances"). To say this is not to declare oneself a Gibsonian, whatever that means. It is merely to point out that what a sensorimotor system can do is determined by what can be extracted from its motor interactions with its sensory input. If you lack sonar sensors, then your sensorimotor system cannot do what a bat's can do, at least not without the help of instruments. Light stimulation affords color vision for those of us with the (...)
    3 citations
  35. Stevan Harnad (1994). Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life. Artificial Life 1 (3):93-301.
    Both Artificial Life and Artificial Mind are branches of what Dennett has called "reverse engineering": Ordinary engineering attempts to build systems to meet certain functional specifications, reverse bioengineering attempts to understand how systems that have already been built by the Blind Watchmaker work. Computational modelling (virtual life) can capture the formal principles of life, perhaps predict and explain it completely, but it can no more be alive than a virtual forest fire can be hot. In itself, a computational model is (...)
    3 citations
  36. Stevan Harnad (2007). Philosophy, Ethics, and Humanities in Medicine. Philosophy, Ethics, and Humanities in Medicine 2:31.
  37. Stevan Harnad, Searle's Chinese Room Argument.
    Computationalism. According to computationalism, to explain how the mind works, cognitive science needs to find out what the right computations are -- the same ones that the brain performs in order to generate the mind and its capacities. Once we know that, then every system that performs those computations will have those mental states: Every computer that runs the mind's program will have a mind, because computation is hardware independent : Any hardware that is running the right program has the (...)
  38. Stevan Harnad (2006). The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence. In Robert Epstein & G. Peters (eds.), [Book Chapter] (in Press). Kluwer
    This quote/commented critique of Turing's classical paper suggests that Turing meant -- or should have meant -- the robotic version of the Turing Test (and not just the email version). Moreover, any dynamic system (that we design and understand) can be a candidate, not just a computational one. Turing also dismisses the other-minds problem and the mind/body problem too quickly. They are at the heart of both the problem he is addressing and the solution he is proposing.
  39. Stevan Harnad, Minds, Brains and Turing.
    Turing set the agenda for (what would eventually be called) the cognitive sciences. He said, essentially, that cognition is as cognition does (or, more accurately, as cognition is capable of doing): Explain the causal basis of cognitive capacity and you’ve explained cognition. Test your explanation by designing a machine that can do everything a normal human cognizer can do – and do it so veridically that human cognizers cannot tell its performance apart from a real human cognizer’s – and you (...)
  40. Stevan Harnad (1993). Grounding Symbols in the Analog World with Neural Nets. Think 2 (1):12-78.
    Harnad's main argument can be roughly summarised as follows: due to Searle's Chinese Room argument, symbol systems by themselves are insufficient to exhibit cognition, because the symbols are not grounded in the real world, hence without meaning. However, a symbol system that is connected to the real world through transducers receiving sensory data, with neural nets translating these data into sensory categories, would not be subject to the Chinese Room argument. Harnad's article is not only the starting point for the (...)
    6 citations
  41. Stevan Harnad (1992). The Turing Test is Not a Trick: Turing Indistinguishability is a Scientific Criterion. SIGART Bulletin 3 (4):9-10.
    It is important to understand that the Turing Test is not, nor was it intended to be, a trick; how well one can fool someone is not a measure of scientific progress. The TT is an empirical criterion: It sets AI's empirical goal to be to generate human-scale performance capacity. This goal will be met when the candidate's performance is totally indistinguishable from a human's. Until then, the TT simply represents what it is that AI must endeavor eventually to accomplish (...)
    1 citation
  42. Edward F. Pace-Schott, Mark Solms, Mark Blagrove & Stevan Harnad (eds.) (2003). Sleep and Dreaming: Scientific Advances and Reconsiderations. Cambridge University Press.
    3 citations
  43. Stevan Harnad (1994). Computation is Just Interpretable Symbol Manipulation; Cognition Isn't. Minds and Machines 4 (4):379-90.
    Computation is interpretable symbol manipulation. Symbols are objects that are manipulated on the basis of rules operating only on their shapes, which are arbitrary in relation to what they can be interpreted as meaning. Even if one accepts the Church/Turing Thesis that computation is unique, universal and very near omnipotent, not everything is a computer, because not everything can be given a systematic interpretation; and certainly everything can't be given every systematic interpretation. But even after computers and computation have been successfully distinguished (...)
    1 citation
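    The point that computation operates only on the shapes of symbol tokens can be made concrete with a toy rewrite rule (my example, not Harnad's): the rule below knows nothing about numbers. It merely splices stroke strings around a '+' token; reading the result as addition is an interpretation supplied from outside the system.

    ```python
    # Toy shape-based symbol manipulation (illustrative only). The rule
    # sees only token shapes: it deletes the '+' token and concatenates
    # the stroke strings on either side of it.
    def apply_rule(expr: str) -> str:
        left, right = expr.split("+", 1)
        return left + right

    # "||+|||" rewrites to "|||||"; reading this as 2 + 3 = 5 is our
    # external, systematic interpretation, not a property the rule uses.
    ```

    That the strokes can be systematically read as numerals is what makes the system interpretable; that the rule never consults that reading is the sense in which the manipulation itself is meaningless.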
  44. Stevan Harnad, Creative Disagreement.
    Do scientists agree? It is not only unrealistic to suppose that they do, but probably just as unrealistic to think that they ought to. Agreement is for what is already established scientific history. The current and vital ongoing aspect of science consists of an active and often heated interaction of data, ideas and minds, in a process one might call "creative disagreement." The "scientific method" is largely derived from a reconstruction based on selective hindsight. What actually goes on has much (...)
    2 citations
  45. Stevan Harnad (1995). Grounding Symbols in Sensorimotor Categories with Neural Networks. Institution of Electrical Engineers Colloquium on "Grounding Representations".
    It is unlikely that the systematic, compositional properties of formal symbol systems -- i.e., of computation -- play no role at all in cognition. However, it is equally unlikely that cognition is just computation, because of the symbol grounding problem (Harnad 1990): The symbols in a symbol system are systematically interpretable, by external interpreters, as meaning something, and that is a remarkable and powerful property of symbol systems. Cognition (i.e., thinking), has this property too: Our thoughts are systematically interpretable by (...)
  46. Stevan Harnad, Doing, Feeling, Meaning And Explaining.
    It is “easy” to explain doing, “hard” to explain feeling. Turing has set the agenda for the easy explanation (though it will be a long time coming). I will try to explain why and how explaining feeling will not only be hard, but impossible. Explaining meaning will prove almost as hard because meaning is a hybrid of know-how and what it feels like to know how.
  47. Stevan Harnad (2007). Creativity: Method or Magic? In Henri Cohen & Brigitte Stemmer (eds.), Consciousness and Cognition: Fragments of Mind and Brain. Elsevier Academic Press
    Creativity may be a trait, a state or just a process defined by its products. It can be contrasted with certain cognitive activities that are not ordinarily creative, such as problem solving, deduction, induction, learning, imitation, trial and error, heuristics and "abduction"; however, all of these can be done creatively too. There are four kinds of theories, attributing creativity respectively to (1) method, (2) "memory" (innate structure), (3) magic or (4) mutation. These theories variously emphasize the role of an unconscious (...)
  48. Stevan Harnad & Stephen J. Hanson, Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding.
    After people learn to sort objects into categories they see them differently. Members of the same category look more alike and members of different categories look more different. This phenomenon of within-category compression and between-category separation in similarity space is called categorical perception (CP). It is exhibited by human subjects, animals and neural net models. In backpropagation nets trained first to auto-associate 12 stimuli varying along a one-dimensional continuum and then to sort them into 3 categories, CP arises as a (...)
    1 citation
  49. Stevan Harnad (1995). Why and How We Are Not Zombies. Journal of Consciousness Studies 1 (2):164-167.
    A robot that is functionally indistinguishable from us may or may not be a mindless Zombie. There will never be any way to know, yet its functional principles will be as close as we can ever get to explaining the mind.
    1 citation
  50. Angelo Cangelosi, Alberto Greco & Stevan Harnad (2002). Symbol Grounding and the Symbolic Theft Hypothesis. In A. Cangelosi & D. Parisi (eds.), Simulating the Evolution of Language. Springer-Verlag 191--210.
    Scholars studying the origins and evolution of language are also interested in the general issue of the evolution of cognition. Language is not an isolated capability of the individual, but has intrinsic relationships with many other behavioral, cognitive, and social abilities. By understanding the mechanisms underlying the evolution of linguistic abilities, it is possible to understand the evolution of cognitive abilities. Cognitivism, one of the current approaches in psychology and cognitive science, proposes that symbol systems capture mental phenomena, and attributes (...)
1 — 50 / 133