  • Against computational hermeneutics. Stevan Harnad - 1990 - Social Epistemology 4:167-172.
    Critique of computationalism as merely projecting hermeneutics (i.e., meaning originating from the mind of an external interpreter) onto otherwise intrinsically meaningless symbols.
  • Minds, brains, and programs. John Searle - 1980 - Behavioral and Brain Sciences 3 (3):417-57.
    What psychological and philosophical significance should we attach to recent efforts at computer simulations of human cognitive capacities? In answering this question, I find it useful to distinguish what I will call "strong" AI from "weak" or "cautious" AI. According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. (...)
  • Artificial life: Synthetic versus virtual. Stevan Harnad - 1993 - In Chris Langton (ed.), Santa Fe Institute Studies in the Sciences of Complexity. Volume XVI: 539ff. Addison-Wesley.
    Artificial life can take two forms: synthetic and virtual. In principle, the materials and properties of synthetic living systems could differ radically from those of natural living systems yet still resemble them enough to be really alive if they are grounded in the relevant causal interactions with the real world. Virtual (purely computational) "living" systems, in contrast, are just ungrounded symbol systems that are systematically interpretable as if they were alive; in reality they are no more alive than a virtual (...)
  • Minds, Brains and Science. John R. Searle - 1984 - Cambridge: Harvard University Press.
  • Consciousness, explanatory inversion and cognitive science. John R. Searle - 1993 - Behavioral and Brain Sciences 16 (1):189-189.
  • The truly total Turing test. Paul Schweizer - 1998 - Minds and Machines 8 (2):263-272.
    The paper examines the nature of the behavioral evidence underlying attributions of intelligence in the case of human beings, and how this might be extended to other kinds of cognitive system, in the spirit of the original Turing Test. I consider Harnad's Total Turing Test, which involves successful performance of both linguistic and robotic behavior, and which is often thought to incorporate the very same range of empirical data that is available in the human case. However, I argue that the (...)
  • Computation and cognition: Issues in the foundation of cognitive science. Zenon W. Pylyshyn - 1980 - Behavioral and Brain Sciences 3 (1):111-32.
    The computational view of mind rests on certain intuitions regarding the fundamental similarity between computation and cognition. We examine some of these intuitions and suggest that they derive from the fact that computers and human organisms are both physical systems whose behavior is correctly described as being governed by rules acting on symbolic representations. Some of the implications of this view are discussed. It is suggested that a fundamental hypothesis of this approach is that there is a natural domain of (...)
  • Physical symbol systems. Allen Newell - 1980 - Cognitive Science 4 (2):135-83.
    On the occasion of a first conference on Cognitive Science, it seems appropriate to review the basis of common understanding between the various disciplines. In my estimate, the most fundamental contribution so far of artificial intelligence and computer science to the joint enterprise of cognitive science has been the notion of a physical symbol system, i.e., the concept of a broad class of systems capable of having and manipulating symbols, yet realizable in the physical universe. The notion of symbol so (...)
  • An Essay on the Psychology of Invention in the Mathematical Field. [REVIEW] E. N. & Jacques Hadamard - 1945 - Journal of Philosophy 42 (12):333.
  • Primate theory of mind is a Turing test. Robert W. Mitchell & James R. Anderson - 1998 - Behavioral and Brain Sciences 21 (1):127-128.
    Heyes's literature review of deception, imitation, and self-recognition is inadequate, misleading, and erroneous. The anaesthetic artifact hypothesis of self-recognition is unsupported by the data she herself examines. Her proposed experiment is tantalizing, indicating that theory of mind is simply a Turing test.
  • Theory of mind in nonhuman primates. C. M. Heyes - 1998 - Behavioral and Brain Sciences 21 (1):101-114.
    Since the BBS article in which Premack and Woodruff (1978) asked “Does the chimpanzee have a theory of mind?,” it has been repeatedly claimed that there is observational and experimental evidence that apes have mental state concepts, such as “want” and “know.” Unlike research on the development of theory of mind in childhood, however, no substantial progress has been made through this work with nonhuman primates. A survey of empirical studies of imitation, self-recognition, social relationships, deception, role-taking, and perspective-taking suggests (...)
  • Virtual symposium on virtual mind. Patrick Hayes, Stevan Harnad, Donald Perlis & Ned Block - 1992 - Minds and Machines 2 (3):217-238.
    When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called "virtual" systems. If such a virtual system is interpretable as if it had a mind, is such a "virtual (...)
  • Reaping the whirlwind: Reply to Harnad's "Other Bodies, Other Minds". [REVIEW] Larry Hauser - 1993 - Minds and Machines 3 (2):219-37.
    Harnad's proposed robotic upgrade of Turing's Test (TT), from a test of linguistic capacity alone to a Total Turing Test (TTT) of linguistic and sensorimotor capacity, conflicts with his claim that no behavioral test provides even probable warrant for attributions of thought because there is no evidence of consciousness besides private experience. Intuitive, scientific, and philosophical considerations Harnad offers in favor of his proposed upgrade are unconvincing. I agree with Harnad that distinguishing real from as-if thought on the basis of (...)
  • Computation is just interpretable symbol manipulation; cognition isn't. Stevan Harnad - 1994 - Minds and Machines 4 (4):379-90.
    Computation is interpretable symbol manipulation. Symbols are objects that are manipulated on the basis of rules operating only on their shapes, which are arbitrary in relation to what they can be interpreted as meaning. Even if one accepts the Church/Turing Thesis that computation is unique, universal and very near omnipotent, not everything is a computer, because not everything can be given a systematic interpretation; and certainly everything can't be given every systematic interpretation. But even after computers and computation have been successfully distinguished (...)
  • Subcognition and the limits of the Turing test. Robert M. French - 1990 - Mind 99 (393):53-66.
  • Time and the observer: The where and when of consciousness in the brain. Daniel C. Dennett & Marcel Kinsbourne - 1992 - Behavioral and Brain Sciences 15 (2):183-201.
    Behavioral and Brain Sciences, 15, 183-247, 1992. Reprinted in The Philosopher's Annual, Grim, Mar and Williams, eds., vol. XV-1992, 1994, pp. 23-68; Noel Sheehy and Tony Chapman, eds., Cognitive Science, Vol. I, Elgar, 1995, pp. 210-274.
  • The Turing Test and the Frame Problem: AI's Mistaken Understanding of Intelligence. Larry Crockett - 1994 - Ablex.
    I have discussed the frame problem and the Turing test at length, but I have not attempted to spell out what I think the implications of the frame problem ...
  • What's wrong and right about Searle's Chinese Room Argument? Stevan Harnad - 2001 - In Michael A. Bishop & John M. Preston (eds.), [Book Chapter] (in press). Oxford University Press.
    Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind).
  • The Turing test is not a trick: Turing indistinguishability is a scientific criterion. Stevan Harnad - 1992 - SIGART Bulletin 3 (4):9-10.
    It is important to understand that the Turing Test is not, nor was it intended to be, a trick; how well one can fool someone is not a measure of scientific progress. The TT is an empirical criterion: It sets AI's empirical goal to be to generate human-scale performance capacity. This goal will be met when the candidate's performance is totally indistinguishable from a human's. Until then, the TT simply represents what it is that AI must endeavor eventually to accomplish (...)
  • Neoconstructivism: A unifying constraint for the cognitive sciences. Stevan Harnad - 1982 - In Thomas W. Simon & Robert J. Scholes (eds.), [Book Chapter]. Lawrence Erlbaum. pp. 1-11.
    Behavioral scientists studied behavior; cognitive scientists study what generates behavior. Cognitive science is hence theoretical behaviorism (or behaviorism is experimental cognitivism). Behavior is data for a cognitive theorist. What counts as a theory of behavior? In this paper, a methodological constraint on theory construction -- "neoconstructivism" -- will be proposed (by analogy with constructivism in mathematics): Cognitive theory must be computable; given an encoding of the input to a behaving system, a theory must be able to compute (an encoding of) (...)
  • Lost in the hermeneutic hall of mirrors. Stevan Harnad - 1990 - Journal of Experimental and Theoretical Artificial Intelligence 2:321-27.
    Critique of Computationalism as merely projecting hermeneutics (i.e., meaning originating from the mind of an external interpreter) onto otherwise intrinsically meaningless symbols. Projecting an interpretation onto a symbol system results in its being reflected back, in a spuriously self-confirming way.
  • Levels of functional equivalence in reverse bioengineering: The Darwinian Turing test for artificial life. Stevan Harnad - 1994 - Artificial Life 1 (3):293-301.
    Both Artificial Life and Artificial Mind are branches of what Dennett has called "reverse engineering": Ordinary engineering attempts to build systems to meet certain functional specifications, reverse bioengineering attempts to understand how systems that have already been built by the Blind Watchmaker work. Computational modelling (virtual life) can capture the formal principles of life, perhaps predict and explain it completely, but it can no more be alive than a virtual forest fire can be hot. In itself, a computational model is (...)
  • Grounding symbols in the analog world with neural nets. Stevan Harnad - 1993 - Think 2 (1):12-78.
    Harnad's main argument can be roughly summarised as follows: due to Searle's Chinese Room argument, symbol systems by themselves are insufficient to exhibit cognition, because the symbols are not grounded in the real world, hence without meaning. However, a symbol system that is connected to the real world through transducers receiving sensory data, with neural nets translating these data into sensory categories, would not be subject to the Chinese Room argument. Harnad's article is not only the starting point for the (...)
  • Discussion (passim). Stevan Harnad - 1993 - In G. R. Bock & James L. Marsh (eds.), [Book Chapter]. (Ciba Foundation Symposium 174).
  • Verifying machines' minds. [REVIEW] Stevan Harnad - 1984 - Contemporary Psychology 29:389-391.
    The question of the possibility of artificial consciousness is both very new and very old. It is new in the context of contemporary cognitive science and its concern with whether a machine can be conscious; it is old in the form of the mind/body problem and the "other minds" problem of philosophy. Contemporary enthusiasts proceed at their peril if they ignore or are ignorant of the false starts and blind alleys that the older thinkers have painfully worked through.
  • Lessons from a restricted Turing test. Stuart M. Shieber - 1994 - Communications of the Association for Computing Machinery 37:70-82.
  • Turing indistinguishability and the blind watchmaker. Stevan Harnad - 2002 - In James H. Fetzer (ed.), Consciousness Evolving. John Benjamins. pp. 3-18.
    Many special problems crop up when evolutionary theory turns, quite naturally, to the question of the adaptive value and causal role of consciousness in human and nonhuman organisms. One problem is that -- unless we are to be dualists, treating it as an independent nonphysical force -- consciousness could not have had an independent adaptive function of its own, over and above whatever behavioral and physiological functions it "supervenes" on, because evolution is completely blind to the difference between a conscious (...)
  • Connecting object to symbol in modeling cognition. Stevan Harnad - 1992 - In A. Clark & Ronald Lutz (eds.), Connectionism in Context. Springer Verlag. pp. 75-90.
    Connectionism and computationalism are currently vying for hegemony in cognitive modeling. At first glance the opposition seems incoherent, because connectionism is itself computational, but the form of computationalism that has been the prime candidate for encoding the "language of thought" has been symbolic computationalism (Dietrich 1990, Fodor 1975, Harnad 1990c; Newell 1980; Pylyshyn 1984), whereas connectionism is nonsymbolic (Fodor & Pylyshyn 1988, or, as some have hopefully dubbed it, "subsymbolic" Smolensky 1988). This paper will examine what is and is not (...)
  • Correlation vs. causality: How/why the mind-body problem is hard. Stevan Harnad - 2000 - Journal of Consciousness Studies 7 (4):54-61.
    The Mind/Body Problem is about causation, not correlation. And its solution will require a mechanism in which the mental component somehow manages to play a causal role of its own, rather than just supervening superfluously on other, nonmental components that look, for all the world, as if they can do the full causal job perfectly well without it. Correlations confirm that M does indeed "supervene" on B, but causality is needed to show how/why M is not supererogatory; and that's the (...)
  • Consciousness: An afterthought. Stevan Harnad - 1982 - Cognition and Brain Theory 5:29-47.
    There are many possible approaches to the mind/brain problem. One of the most prominent, and perhaps the most practical, is to ignore it.
  • Why and how we are not zombies. Stevan Harnad - 1994 - Journal of Consciousness Studies 1 (2):164-67.
    A robot that is functionally indistinguishable from us may or may not be a mindless Zombie. There will never be any way to know, yet its functional principles will be as close as we can ever get to explaining the mind.
  • The Failures of Computationalism. John R. Searle - 2001.
    Harnad and I agree that the Chinese Room Argument deals a knockout blow to Strong AI, but beyond that point we do not agree on much at all. So let's begin by pondering the implications of the Chinese Room. The Chinese Room shows that a system, me for example, could pass the Turing Test for understanding Chinese, for example, and could implement any program you like and still not understand a word of Chinese. Now, why? What does the genuine Chinese (...)
  • Cognitive Science as Reverse Engineering. Daniel C. Dennett - unknown.
    The vivid terms, "Top-down" and "Bottom-up" have become popular in several different contexts in cognitive science. My task today is to sort out some different meanings and comment on the relations between them, and their implications for cognitive science.
  • Can machines think? Daniel C. Dennett - 1984 - In M. G. Shafto (ed.), How We Know. Harper & Row.