Some philosophers have conflated functionalism and computationalism. I reconstruct how this came about and uncover two assumptions that made the conflation possible. They are the assumptions that (i) psychological functional analyses are computational descriptions and (ii) everything may be described as performing computations. I argue that, if we want to improve our understanding of both the metaphysics of mental states and the functional relations between them, we should reject these assumptions.
In this paper, I deal with the triviality threat to computationalism. On the one hand, the claim that cognition involves computation, often dismissed as controversial and vague, is still denied by many. On the other, contemporary physicists and philosophers alike claim that all physical processes are computational or algorithmic, a claim that would vindicate computationalism only by making it utterly trivial. I will show that even if both of these claims were true, computationalism would not have to be trivial.
Both the Cybersemiotics and Info-computationalist research programmes represent attempts to unify the understanding of information, knowledge and communication. The first takes into account phenomenological aspects of signification, insisting on the human experience "from within". The second adopts solely the view "from the outside" based on scientific practice, with an observing agent generating inter-subjective knowledge in a research community. The process of knowledge production, embodied in networks of cognizing agents interacting with the environment and developing through evolution, is studied on different levels of abstraction in both frames of reference. In order to develop scientifically tractable models of the evolution of intelligence in informational structures, from pre-biotic/chemical to living networked intelligent organisms, including the implementation of those models in artificial agents, the basic-level language of Info-Computationalism has been shown to be suitable. There are, however, contexts in which we deal with complex informational structures essentially dependent on human first-person knowledge, where a high-level language such as Cybersemiotics is the appropriate tool for conceptualization and communication. Two research projects are presented in order to exemplify the interplay of info-computational and higher-order approaches: the Blue Brain Project, in which the brain is modeled as an info-computational system, a simulation in silico of biological brain function; and Biosemiotics research on genes, information, and semiosis, in which the process of semiosis is understood in info-computational terms. The article analyzes differences and convergences of the Cybersemiotics and Info-computationalist approaches, which, by placing focus on distinct levels of organization, help elucidate processes of knowledge production in intelligent agents.
Wittgenstein’s views invite a modest, functionalist account of mental states and regularities, or more specifically a causal/computational, representational theory of the mind (CRTT). It is only by understanding Wittgenstein’s remarks in the context of a theory like CRTT that his insights have any real force; and it is only by recognizing those insights that CRTT can begin to account for sensations and our thoughts about them. For instance, Wittgenstein’s (in)famous remark that “an inner process stands in need of outward criteria” (PI: §580), so implausible read behaviorally, is entirely plausible if the “outward” is allowed to include computational facts about our brains. But what is especially penetrating about Wittgenstein’s discussion is his unique diagnosis of our puzzlement in this area, in particular, his suggestion that it is due to our captivation by “pictures” whose application to reality is left crucially under-specified. What sustains the naive picture is not a captivation by language, but, at least in part, our largely involuntary reactions to things that look and act like our conspecifics. We project a property into them correlative to that reaction in ourselves, and are, indeed, unwilling to project it into things that do not induce that reaction.
This paper challenges two orthodox theses: (a) that computational processes must be algorithmic; and (b) that all computed functions must be Turing-computable. Section 2 advances the claim that the work in computability theory, including Turing's analysis of the effectively computable functions, does not substantiate the two theses. It is then shown (Section 3) that we can describe a system that computes a number-theoretic function which is not Turing-computable. The argument against the first thesis proceeds in two stages. It is first shown (Section 4) that whether a process is algorithmic depends on the way we describe the process. It is then argued (Section 5) that systems compute even if their processes are not described as algorithmic. The paper concludes with a suggestion for a semantic approach to computation.
In this paper I discuss Searle's claim that the computational properties of a system could never cause a system to be conscious. In the first section of the paper I argue that Searle is correct that, even if a system both behaves in a way that is characteristic of conscious agents (like ourselves) and has a computational structure similar to those agents, one cannot be certain that that system is conscious. On the other hand, I suggest that Searle's intuition that it is “empirically absurd” that such a system could be conscious is unfounded. In the second section I show that Searle's attempt to show that a system's computational states could not possibly cause it to be conscious is based upon an erroneous distinction between computational and physical properties. On the basis of these two arguments, I conclude that, supposing that the behavior of conscious agents can be explained in terms of their computational properties, we have good reason to suppose that a system having computational properties similar to such agents is also conscious.
Computationalism has been the mainstream view of cognition for decades. There are periodic reports of its demise, but they are greatly exaggerated. This essay surveys some recent literature on computationalism. It concludes that computationalism is a family of theories about the mechanisms of cognition. The main relevant evidence for testing it comes from neuroscience, though psychology and AI are relevant too. Computationalism comes in many versions, which continue to guide competing research programs in philosophy of mind as well as psychology and neuroscience. Although our understanding of computationalism has deepened in recent years, much work in this area remains to be done.
Defending or attacking either functionalism or computationalism requires clarity on what they amount to and what evidence counts for or against them. My goal here is not to evaluate their plausibility. My goal is to formulate them and their relationship clearly enough that we can determine which type of evidence is relevant to them. I aim to dispel some sources of confusion that surround functionalism and computationalism, recruit recent philosophical work on mechanisms and computation to shed light on them, and clarify how functionalism and computationalism may or may not legitimately come together.
Roughly speaking, computationalism says that cognition is computation, or that cognitive phenomena are explained by the agent's computations. The cognitive processes and behavior of agents are the explanandum. The computations performed by the agents' cognitive systems are the proposed explanans. Since the cognitive systems of biological organisms are their nervous systems (plus or minus a bit), we may say that according to computationalism, the cognitive processes and behavior of organisms are explained by neural computations. Some people might prefer to say that cognitive systems are “realized” by nervous systems, and thus that—according to computationalism—cognitive computations are “realized” by neural processes. In this paper, nothing hinges on the nature of the relation between cognitive systems and nervous systems, or between computations and neural processes. For present purposes, if a neural process realizes a computation, then that neural process is a computation. Thus, I will couch much of my discussion in terms of nervous systems and neural computation. Before proceeding, we should dispense with a possible red herring. Contrary to a common assumption, computationalism does not stand in opposition to connectionism. Connectionism, in the most general and common sense of the term, is the claim that cognitive phenomena are explained (at some level and at least in part) by the processes of neural networks. This is a truism, supported by most neuroscientific evidence. Everybody ought to be a connectionist in this general sense. The relevant question is, are neural processes computations? More precisely, are the neural processes to be found in the nervous systems of organisms computations? Computationalists say “yes”, anti-computationalists say “no”.
This paper investigates whether any of the arguments on offer against computationalism have a chance at knocking it off. Ever since Warren McCulloch and Walter Pitts (1943) first proposed it, computationalism has been subjected to a wide range of objections.
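Since McCulloch–Pitts nets are where computationalism began, a minimal sketch may help fix ideas. A McCulloch–Pitts unit fires just in case the weighted sum of its binary inputs reaches a threshold, and such units suffice for Boolean logic. The function names, weights, and thresholds below are illustrative choices, not anything from the papers discussed here:

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: output 1 iff the weighted sum of
    binary inputs meets or exceeds the threshold, else 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Two classic threshold-logic gates built from a single unit:
AND = lambda x1, x2: mp_neuron([x1, x2], [1, 1], 2)  # fires only if both inputs fire
OR = lambda x1, x2: mp_neuron([x1, x2], [1, 1], 1)   # fires if at least one input fires
```

The point of the 1943 proposal, informally, was that networks of such units can realize any finite Boolean function, which is what licensed treating neural activity as computation in the first place.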
Mental representations, Swiatczak (Minds Mach 21:19–32, 2011) argues, are fundamentally biochemical and their operations depend on consciousness; hence the computational theory of mind, based as it is on multiple realisability and purely syntactic operations, must be wrong. Swiatczak, however, is mistaken. Computation, properly understood, can afford descriptions/explanations of any physical process, and since Swiatczak accepts that consciousness has a physical basis, his argument against computationalism must fail. Of course, we may not have much idea how consciousness (itself a rather unclear plurality of notions) might be implemented, but we do have a hypothesis—that all of our mental life, including consciousness, is the result of computational processes and so not tied to a biochemical substrate. Like it or not, the computational theory of mind remains the only game in town. (David Davenport, Minds and Machines, DOI 10.1007/s11023-012-9271-5.)
The emergence of cognitive science as a multi-disciplinary investigation into the nature of mind has historically revolved around the core assumption that the central ‘cognitive’ aspects of mind are computational in character. Although there is some disagreement and philosophical speculation concerning the precise formulation of this ‘core assumption’, it is generally agreed that computationalism in some form lies at the heart of cognitive science as it is currently conceived. Von Eckardt’s recent work on this topic is useful in enabling us to get a sense of the scope of the computational assumption. She makes clear that there are two rather different ways in which we could understand cognitive science’s commitment to computationalism, and hence two ways to understand the claim that the ‘mind is a computer’: by appeal to either (1) a mathematical theory of computability or (2) a theory of data-processing or information-processing. Importantly, she also argues that although there are many aspects of the claim that the ‘mind is a computer’ that can be nicely captured by Boyd’s account of the way scientific metaphors are employed (not to direct attention to the hitherto unnoticed, but to encourage investigation of the unknown), cognitive scientists are nonetheless not making the claim that the ‘mind is a computer’ in a metaphorical sense. If Von Eckardt is correct, when cognitive scientists assume the ‘mind is a computer’ and give a sense to the notion of the computer in the sense of (2) above, they are making a literal claim about the nature of mind (Von Eckardt, 1993, p. 116). And as she points out, if one reads (2) in a theoretically committed way, there is no a priori reason to exclude the organic brain from the list of entities that might fall under the description of being a ‘computer’. Importantly, we can truly describe it as a data-processing (or information-processing) device.
What is useful about Von Eckardt’s general analysis of computationalism’s core assumption is that it provides a clear angle from which to view the flaws of computationalism. This paper defends the claim that if there is an account of information adequate to capture those aspects of mind that we regard as essential to mentality, it is one that requires us to surrender the idea that the mind is a computer.
The Church–Turing Thesis (CTT) is often employed in arguments for computationalism. I scrutinize the most prominent of such arguments in light of recent work on CTT and argue that they are unsound. Although CTT does nothing to support computationalism, it is not irrelevant to it. By eliminating misunderstandings about the relationship between CTT and computationalism, we deepen our appreciation of computationalism as an empirical hypothesis.
Computationalist theories of mind require brain symbols, that is, neural events that represent kinds or instances of kinds. Standard models of computation require multiple inscriptions of symbols with the same representational content. The satisfaction of two conditions makes it easy to see how this requirement is met in computers, but we have no reason to think that these conditions are satisfied in the brain. Thus, if we wish to give computationalist explanations of human cognition, without committing ourselves a priori to a strong and unsupported claim in neuroscience, we must first either explain how we can provide multiple brain symbols with the same content, or explain how we can abandon standard models of computation. It is argued that both of these alternatives require us to explain the execution of complex tasks that have a cognition-like structure. Circularity or regress are thus threatened, unless noncomputationalist principles can provide the required explanations. But in the latter case, we do not know that noncomputationalist principles might not bear most of the weight of explaining cognition. Four possible types of computationalist theory are discussed; none appears to provide a promising solution to the problem. Thus, despite known difficulties in noncomputationalist investigations, we have every reason to pursue the search for noncomputationalist principles in cognitive theory.
Computationalism, the notion that cognition is computation, is a working hypothesis of many AI researchers and Cognitive Scientists. Although it has not been proved, neither has it been disproved. In this paper, I give some refutations to some well-known alleged refutations of computationalism. My arguments have two themes: people are more limited than is often recognized in these debates; computer systems are more complicated than is often recognized in these debates. To underline the latter point, I sketch the design and abilities of a possible embodied computer system.
The following paper presents a characterization of three distinctions fundamental to computationalism: the distinctions between analog and digital machines, between representation-using and nonrepresentation-using systems, and between direct and indirect perceptual processes. Each distinction is shown to rest on nothing more than the methodological principles which justify the explanatory framework of the special sciences.
Computationalism, a species of functionalism, posits that a mental state like pain is realized by a ‘core’ computational state within a particular causal network of such states. This entails that what is realized by the core state is contingent on events remote in space and time, which puts computationalism at odds with the locality principle of physics. If computationalism is amended to respect locality, then it posits that a type of phenomenal experience is determined by a single type of computational state. But a computational state, considered by itself, is of no determinate type—it has no particular symbolic content, since it could be embedded in any of an infinite number of algorithms. Hence, if locality is respected, then the type of experience that is realized by a computational state, or whether any experience at all is realized, is under-determined by the computational nature of the state. Accordingly, Block’s absent and inverted qualia arguments against functionalism find support in the locality principle of physics. If computationalism denies locality to avoid this problem, then it cannot be considered a physicalist theory since it would entail a commitment to phenomena, like teleological causation and action-at-a-distance, that have long been rejected by modern science. The remaining theoretical alternative is to accept the locality principle for macro events and deny that formal, computational operations are sufficient to realize a phenomenal mental state.
A working hypothesis of computationalism is that Mind arises, not from the intrinsic nature of the causal properties of particular forms of matter, but from the organization of matter. If this hypothesis is correct, then a wide range of physical systems (e.g. optical, chemical, various hybrids, etc.) should support Mind, especially computers, since they have the capability to create/manipulate organizations of bits of arbitrary complexity and dynamics. In any particular computer, these bit patterns are quite physical, but their particular physicality is considered irrelevant (since they could be replaced by other physical substrata).
Summary. A distinction is made between two senses of the claim “cognition is computation”. One sense, the opaque reading, takes computation to be whatever is described by our current computational theory and claims that cognition is best understood in terms of that theory. The transparent reading, which has its primary allegiance to the phenomenon of computation, rather than to any particular theory of it, is the claim that the best account of cognition will be given by whatever theory turns out to be the best account of the phenomenon of computation. The distinction is clarified and defended against charges of circularity and changing the subject. Several well-known objections to computationalism are then reviewed, and for each the question of whether the transparent reading of the computationalist claim can provide a response is considered.
Computationalism is the claim that all possible thoughts are computations, i.e. executions of algorithms. The aim of the paper is to show that if intentionality is semantically clear, in a way defined in the paper, then computationalism must be false. Using a convenient version of the phenomenological relation of intentionality and a diagonalization device inspired by Thomson's theorem of 1962, we show there exists a thought that cannot be a computation.
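The diagonalization device appealed to here is the familiar Cantor-style construction: given any enumeration of total functions, one can define a function guaranteed to differ from each member at its own index, and hence to lie outside the enumeration. A minimal sketch of that construction (the toy enumeration `fs` is an illustrative assumption, not the paper's own device):

```python
def diagonalize(enumeration):
    """Given an enumeration i -> f_i of total functions on the naturals,
    return g with g(n) != f_n(n) for every n, so g appears nowhere in the list."""
    return lambda n: enumeration(n)(n) + 1

# Toy enumeration: f_i(n) = i * n
fs = lambda i: (lambda n: i * n)
g = diagonalize(fs)
# g differs from every f_i at its own index: g(i) = i*i + 1, while f_i(i) = i*i
```

The anti-computationalist move is then to argue that some analogous diagonal construction yields a thought outside any enumeration of algorithms; the code above only illustrates the bare diagonal step, not that further philosophical claim.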
Harnad and I agree that the Chinese Room Argument deals a knockout blow to Strong AI, but beyond that point we do not agree on much at all. So let's begin by pondering the implications of the Chinese Room. The Chinese Room shows that a system, me for example, could pass the Turing Test for understanding Chinese, for example, and could implement any program you like and still not understand a word of Chinese. Now, why? What does the genuine Chinese speaker have that I in the Chinese Room do not have? The answer is obvious. I, in the Chinese room, am manipulating a bunch of formal symbols; but the Chinese speaker has more than symbols, he knows what they mean. That is, in addition to the syntax of Chinese, the genuine Chinese speaker has a semantics in the form of meaning, understanding, and mental contents generally.
Since the cognitive revolution, it’s become commonplace that cognition involves both computation and information processing. Is this one claim or two? Is computation the same as information processing? The two terms are often used interchangeably, but this usage masks important differences. In this paper, we distinguish information processing from computation and examine some of their mutual relations, shedding light on the role each can play in a theory of cognition. We recommend that theorists of cognition be explicit and careful in choosing notions of computation and information and connecting them together. Much confusion can be avoided by doing so. Keywords: computation, information processing, computationalism, computational theory of mind, cognitivism.
Computers today are not only calculation tools - they are directly (inter)acting in the physical world, which itself may be conceived of as the universal computer (Zuse, Fredkin, Wolfram, Chaitin, Lloyd). In expanding its domains from abstract logical symbol manipulation to physical embedded and networked devices, computing goes beyond the Church-Turing limit (Copeland, Siegelman, Burgin, Schachter). Computational processes are distributed, reactive, interactive, agent-based and concurrent. The main criterion of success of a computation is not its termination, but the adequacy of its response, its speed, generality and flexibility, adaptability, and tolerance to noise, error, faults, and damage. Interactive computing is a generalization of Turing computing, and it calls for new conceptualizations (Goldin, Wegner). In the info-computationalist framework, with computation seen as information processing, natural computation appears as the most suitable paradigm of computation, and information semantics requires logical pluralism.
In this paper I place Jim Fetzer's esemplastic burial of the computational conception of mind within the context of both my own burial and the theory of mind I would put in place of this dead doctrine. My view…
In this reply to James H. Fetzer’s “Minds and Machines: Limits to Simulations of Thought and Action”, I argue that computationalism should not be the view that (human) cognition is computation, but that it should be the view that cognition (simpliciter) is computable. It follows that computationalism can be true even if (human) cognition is not the result of computations in the brain. I also argue that, if semiotic systems are systems that interpret signs, then both humans and computers are semiotic systems. Finally, I suggest that minds can be considered as virtual machines implemented in certain semiotic systems, primarily the brain, but also AI computers. In doing so, I take issue with Fetzer’s arguments to the contrary.
The principal temptation toward substance dualisms, or otherwise incorporating a question begging homunculus into our psychologies, arises not from the problem of consciousness in general, nor from the problem of intentionality, but from the question of our awareness and understanding of our own mental contents, and the control of the deliberate, conscious thinking in which we employ them. Dennett has called this "Hume's problem". Cognitivist philosophers have generally either denied the experiential reality of thought, as did the Behaviorists, or have taken an implicitly epiphenomenalist stance, a form of dualism. Some sort of mental duality may indeed be required to meet this problem, but not one that is metaphysical or question begging. I argue that it can be solved in the light of Paivio's "Dual Coding" theory of mental representation. This theory, which is strikingly simple and intuitive (perhaps too much so to have caught the imagination of philosophers) has demonstrated impressive empirical power and scope. It posits two distinct systems of potentially conscious representations in the human mind: mental imagery and verbal representation (which is not to be confused with 'propositional' or "mentalese" representation). I defend, on conceptual grounds, Paivio's assertion of precisely two codes against interpretations which would either multiply image codes to match sense modes, or collapse the two, admittedly interacting, systems into one. On this basis I argue that the inference that a conscious agent would be needed to read such mental representations and to manipulate them in the light of their contents can be pre-empted by an account of how the two systems interact, each registering, affecting and being affected by developing associative processes within the other.
It is shown that Fodor's interpretation of the frame problem is the central indication that his version of the Modularity Thesis is incompatible with computationalism. Since computationalism is far more plausible than this thesis, the latter should be rejected.
The paper presents a paradoxical feature of computational systems that suggests that computationalism cannot explain symbol grounding. If the mind is a digital computer, as computationalism claims, then it can be computing either over meaningful symbols or over meaningless symbols. If it is computing over meaningful symbols its functioning presupposes the existence of meaningful symbols in the system, i.e. it implies semantic nativism. If the mind is computing over meaningless symbols, no intentional cognitive processes are available prior to symbol grounding. In this case, no symbol grounding could take place since any grounding presupposes intentional cognitive processes. So, whether computing in the mind is over meaningless or over meaningful symbols, computationalism implies semantic nativism.
My purpose in this brief paper is to consider the implications of a radically different computer architecture for some fundamental problems in the foundations of Cognitive Science. More exactly, I wish to consider the ramifications of the 'Gödel-Minds-Machines' controversy of the late 1960s on a dynamically changing computer architecture which, I venture to suggest, is going to revolutionize which 'functions' of the human mind can and cannot be modelled by (non-human) computational automata. I will proceed on the presupposition that the reader is familiar with some of the fundamentals of computational theory and mathematical logic.
ABSTRACT. Thought experiments about de se attitudes and Jackson’s original Knowledge Argument are compared with each other and discussed from the perspective of a computational theory of mind. It is argued that internal knowledge, i.e. knowledge formed on the basis of signals that encode aspects of their own processing rather than being intentionally directed towards external objects, suffices for explaining the seminal puzzles without resorting to acquaintance or phenomenal character as primitive notions. Since computationalism is ontologically neutral, the account also explains why neither Lewis’s two gods nor Mary’s surprise in the Knowledge Argument violate physicalism.
What counts as a computation and how it relates to cognitive function are important questions for scientists interested in understanding how the mind thinks. This paper argues that pragmatic aspects of explanation ultimately determine how we answer those questions by examining what is needed to make rigorous the notion of computation used in the (cognitive) sciences. It (1) outlines the connection between the Church-Turing Thesis and computational theories of physical systems, (2) differentiates merely satisfying a computational function from true computation, and finally (3) relates how we determine a true computation to the functional methodology in cognitive science. All of the discussion will be directed toward showing that the only way to connect formal notions of computation to empirical theory will be in virtue of the pragmatic aspects of explanation.
What Robots Can and Can't Be (hereinafter Robots) is, as Selmer Bringsjord says "intended to be a collection of formal-arguments-that-border-on-proofs for the proposition that in all worlds, at all times, machines can't be minds" (Bringsjord, forthcoming). In his (1994) "Précis of What Robots Can and Can't Be" Bringsjord styles certain of these arguments as proceeding "repeatedly . . . through instantiations of" the "simple schema".
The book presents investigations into the world of info-computational nature, in which information constitutes the structure, while computational process amounts to its change. Information and computation are inextricably bound: there is no computation without informational structure, and there is no information without computational process. Those two complementary ideas are used to build a conceptual net, which according to Novalis is a theoretical way of capturing reality. We apprehend the reality within a framework known as natural computationalism, the view that the whole universe can be understood as a computational system at many different levels - from the quantum-mechanical world to biological organisms, including intelligent minds and their societies. Questions about the nature of information and computation and their unified view are addressed, along with the application of the info-computational approach to knowledge generation.
It is usual when writing on research methodology in dissertations and thesis work within Software Engineering to refer to Empirical Methods, Grounded Theory and Action Research. Analysis of Constructive Research Methods, which are fundamental for all knowledge production and especially for concept formation, modeling and the use of artifacts, is seldom given, so the relevant first-hand knowledge is missing. This article argues for introducing the analysis of Constructive Research Methods as crucial for understanding the research process and knowledge production. The paper provides a characterization of the Constructive Research Method and its relations to Action Research and Grounded Theory. An illustrative example, the Blue Brain Project, is presented. Finally, the foundations of Constructive Research are analyzed within the framework of Info-Computationalism, which provides models of knowledge construction by information processing in a cognizing agent.
Computation and information processing are among the most fundamental notions in cognitive science. They are also among the most imprecisely discussed. Many cognitive scientists take it for granted that cognition involves computation, information processing, or both – although others disagree vehemently. Yet different cognitive scientists use ‘computation’ and ‘information processing’ to mean different things, sometimes without realizing that they do. In addition, computation and information processing are surrounded by several myths; first and foremost, that they are the same thing. In this paper, we address this unsatisfactory state of affairs by presenting a general and theory-neutral account of computation and information processing. We also apply our framework by analyzing the relations between computation and information processing on one hand and classicism and connectionism on the other. We defend the relevance to cognitive science of both computation, in a generic sense that we fully articulate for the first time, and information processing, in three important senses of the term. Our account advances some foundational debates in cognitive science by untangling some of their conceptual knots in a theory-neutral way. By leveling the playing field, we pave the way for the future resolution of the debates’ empirical aspects.
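One strand of the 'information' side of this debate descends from Shannon, on which information is quantified independently of what, if anything, is computed over it. As a hedged illustration of that sense of the term (not the authors' own framework), the entropy of a discrete distribution measures its average information content in bits:

```python
import math

def shannon_entropy(probs):
    """Average information content, in bits, of a discrete probability
    distribution given as a list of probabilities summing to 1."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly one bit per toss; a biased coin carries less,
# and a certain outcome carries none.
fair = shannon_entropy([0.5, 0.5])     # -> 1.0
biased = shannon_entropy([0.9, 0.1])   # strictly less than 1.0
certain = shannon_entropy([1.0])       # -> 0.0
```

The contrast the paper presses can then be put crisply: a channel can carry Shannon information without performing any computation over it, and a machine can compute over states whether or not we choose to describe them informationally.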
Knowledge generation can be naturalized by adopting a computational model of cognition and an evolutionary approach. In this framework, knowledge is seen as a result of the structuring of input data (data → information → knowledge) by an interactive computational process going on in the agent during the adaptive interplay with the environment, which clearly presents a developmental advantage by increasing the agent’s ability to cope with the dynamics of the situation. This paper addresses the mechanism of knowledge generation, a process that may be modeled as natural computation in order to be better understood and improved.
According to some philosophers, computational explanation is proprietary to psychology—it does not belong in neuroscience. But neuroscientists routinely offer computational explanations of cognitive phenomena. In fact, computational explanation was initially imported from computability theory into the science of mind by neuroscientists, who justified this move on neurophysiological grounds. Establishing the legitimacy and importance of computational explanation in neuroscience is one thing; shedding light on it is another. I raise some philosophical questions pertaining to computational explanation and outline some promising answers that are being developed by a number of authors.
The thesis develops solutions to two main problems for mental realism. Mental realism is the theory that mental properties, events, and objects exist, with their own set of characters and causal powers. The first problem comes from the philosophy of science, where Psillos proposes a notion of scientific realism that contradicts mental realism; consequently, if one is to be a scientific realist in the way Psillos recommends, one must reject mental realism. I propose adaptations to the conception of scientific realism to make it compatible with mental realism. In the process, the thesis defends computational cognitive science from a compelling argument Searle can be seen to endorse but has not put forth in an organized logical manner. A new conception of scientific realism emerges out of this inquiry, integrating the mental into the rest of nature. The second problem for mental realism arises out of non-reductive physicalism: the view that higher-level properties, and in particular mental properties, are irreducible, physically realized, and that physical properties are sufficient non-overdetermining causes of any effect. Kim’s Problem of Causal Exclusion aims to show that the mental, if unreduced, does no causal work. Consequently, given that we should not believe in the existence of properties that do not participate in causation, we would be forced to drop mental realism. A solution is needed. The thesis examines various positions relevant to the debate. Several doctrines of physicalism are explored, rejected, and one is proposed; the thesis shows the way in which Kim’s reductionist position has been inconsistent throughout the years of debate; it argues that trope theory does not compete with a universalist conception of properties to provide a solution; and it shows weaknesses in Macdonald’s non-reductive monist position and Pereboom’s constitutional coincidence account of mental causation.
The thesis suggests that either the premises of Kim’s argument are consistent, and consequently his reductio is logically invalid, or at least one of the premises is false, and therefore the argument is not sound. Consequently, the Problem of Causal Exclusion that Kim claims emerges out of non-reductive physicalism does not force us to reject mental realism. Mental realism lives on.
In addition to his famous Chinese Room argument, John Searle has posed a more radical problem for views on which minds can be understood as programs. Even his wall, he claims, implements the WordStar program according to the standard definition of implementation, because there is some "pattern of molecule movements" that is isomorphic to the formal structure of WordStar. Program implementation, Searle charges, is merely observer-relative and thus not an intrinsic feature of the world. I argue, first, that analogous charges involving other concepts (motion and meaning) lead to consequences no one accepts. Second, I show that Searle’s treatment of computation is incoherent, yielding the consequence that nothing computes anything: even our standard personal computers fail to run any programs on this account. I propose an alternative account, one that accords with the way engineers, programmers, and cognitive scientists use the concept of computation in their empirical work. This alternative interpretation provides the basis of a philosophical analysis of program implementation, one that may yet be suitable for a computational theory of the mind.
Wittgenstein's arguments about rule-following and private language turn both on interpretation and what he called our 'pictures' of the mind. His remarks about these can be understood in terms of the conceptual metaphor of the mind as a container, and enable us to give a better account of physicalism.
This book is about the relation among the concepts of mind, science, and computation. From the standpoint of cognitive science—the interdisciplinary scientific study of the mind—the working hypothesis for this relation is that the key to a scientific understanding of the mind is the concept of computation, which is just another way of putting the view that the way to naturalize the mind is through the computational framework. In particular, this book assesses the validity of this hypothesis. The book is divided into two major parts. The first part makes a general survey of the fundamental issues and competing views in the discipline of philosophy of mind. This is intended to provide a proper orientation and background for the second part, which examines the plausibility of the computational framework and the feasibility of the project to naturalize the mind. These two parts can also be seen in another way: the first gives a general introduction to the discipline of the philosophy of mind, while the second provides one possible route into some of the current debates in the discipline. In this light, this book is good reading material for both beginners and advanced students in the philosophy of mind.
After more than 60 years, Shannon’s research continues to raise fundamental questions, such as the one formulated by R. Luce, which is still unanswered: “Why is information theory not very applicable to psychological problems, despite apparent similarities of concepts?” On this topic, S. Pinker, one of the foremost defenders of the widespread computational theory of mind, has argued that thought is simply a type of computation, and that the gap between human cognition and computational models may be illusory. In this context, in his latest book, Thinking, Fast and Slow, D. Kahneman provides further theoretical interpretation by differentiating the two assumed systems of the cognitive functioning of the human mind. He calls them intuition (system 1), characterized as an associative (automatic, fast, and perceptual) machine, and reasoning (system 2), which is voluntary and operates logico-deductively. In this paper, we propose a mathematical approach inspired by Ausubel’s meaningful learning theory for investigating, from the constructivist perspective, information processing in the working memory of cognizers. Specifically, a thought experiment is performed utilizing the mind of a dual-natured creature known as Maxwell’s demon: a tiny “man–machine” solely equipped with the characteristics of system 1, which prevents it from reasoning. The calculation presented here shows that the Ausubelian learning schema, when inserted into the creature’s memory, leads to a Shannon–Hartley-like model that, in turn, converges exactly to the fundamental thermodynamic principle of computation known as the Landauer limit. This result indicates that when system 2 is shut down, an intelligent being and a binary machine incur the same minimum energy cost per unit of information (knowledge) processed (acquired), which mathematically shows the computational attribute of system 1, as Kahneman theorized.
This finding links information theory to human psychological features and opens the possibility of experimentally testing the computational theory of mind by means of Landauer’s energy cost, which can pave the way toward the conception of a multi-bit reasoning machine.
In the book, I argue that the mind can be explained computationally because it is itself computational—whether it engages in mental arithmetic, parses natural language, or processes the auditory signals that allow us to experience music. All these capacities arise from complex information-processing operations of the mind. By analyzing the state of the art in cognitive science, I develop an account of computational explanation used to explain the capacities in question.
There is much in The Sensory Order that recommends the oft-made claim that Hayek anticipated connectionist theories of mind. To the extent that this is so, contemporary arguments against and for connectionism, as advanced by Jerry Fodor, Zenon Pylyshyn, and John Searle, are shown as applicable to theoretical psychology. However, the final section of this chapter highlights an important disanalogy between theoretical psychology and connectionist theories of mind.
John Searle's Chinese room argument is perhaps the most influential and widely cited argument against artificial intelligence (AI). Understood as targeting AI proper (claims that computers can think or do think), Searle's argument, despite its rhetorical flash, is logically and scientifically a dud. Advertised as effective against AI proper, the argument, in its main outlines, is an ignoratio elenchi. It musters persuasive force fallaciously by indirection, fostered by equivocal deployment of the phrase "strong AI" and reinforced by equivocation on the phrase "causal powers (at least) equal to those of brains." On a more carefully crafted understanding, one that targets just the metaphysical identification of thought with computation ("Functionalism" or "Computationalism") and not AI proper, the argument is still unsound, though more interestingly so. It is unsound in ways difficult for high-church believers in AI ("someday my prince of an AI program will come") to acknowledge without undermining their high-church beliefs. The ad hominem bite of Searle's argument against the high-church persuasions of so many cognitive scientists, I suggest, largely explains the undeserved repute this really quite disreputable argument enjoys among them.
Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind).
Computation is central to the foundations of modern cognitive science, but its role is controversial. Questions about computation abound: What is it for a physical system to implement a computation? Is computation sufficient for thought? What is the role of computation in a theory of cognition? What is the relation between different sorts of computational theory, such as connectionism and symbolic computation? In this paper I develop a systematic framework that addresses all of these questions. Justifying the role of computation requires analysis of implementation, the nexus between abstract computations and concrete physical systems. I give such an analysis, based on the idea that a system implements a computation if the causal structure of the system mirrors the formal structure of the computation. This account can be used to justify the central commitments of artificial intelligence and computational cognitive science: the thesis of computational sufficiency, which holds that the right kind of computational structure suffices for the possession of a mind, and the thesis of computational explanation, which holds that computation provides a general framework for the explanation of cognitive processes. The theses are consequences of the facts that (a) computation can specify general patterns of causal organization, and (b) mentality is an organizational invariant, rooted in such patterns. Along the way I answer various challenges to the computationalist position, such as those put forward by Searle. I close by advocating a kind of minimal computationalism, compatible with a very wide variety of empirical approaches to the mind. This allows computation to serve as a true foundation for cognitive science.
More than a decade ago, philosopher John Searle started a long-running controversy with his paper “Minds, Brains, and Programs” (Searle, 1980a), an attack on the ambitious claims of artificial intelligence (AI). With his now famous Chinese Room argument, Searle claimed to show that despite the best efforts of AI researchers, a computer could never recreate such vital properties of human mentality as intentionality, subjectivity, and understanding. The AI research program is based on the underlying assumption that all important aspects of human cognition may in principle be captured in a computational model. This assumption stems from the belief that beyond a certain level, implementational details are irrelevant to cognition. According to this belief, neurons, and biological wetware in general, have no preferred status as the substrate for a mind. As it happens, the best examples of minds we have at present have arisen from a carbon-based substrate, but this is due to constraints of evolution and possibly historical accidents, rather than to an absolute metaphysical necessity. As a result of this belief, many cognitive scientists have chosen to focus not on the biological substrate of the mind, but instead on the abstract causal structure that the mind embodies (at an appropriate level of abstraction). The view that it is abstract causal structure that is essential to mentality has been an implicit assumption of the AI research program since Turing (1950), but was first articulated explicitly, in various forms, by Putnam (1960), Armstrong (1970) and Lewis (1970), and has become known as functionalism. From here, it is a very short step to computationalism, the view that computational structure is what is important in capturing the essence of mentality. This step follows from a belief that any abstract causal structure can be captured computationally: a belief made plausible by the Church–Turing Thesis, which articulates the power of computation.
I argue that John Searle's (1980) influential Chinese room argument (CRA) against computationalism and strong AI survives existing objections, including Block's (1998) internalized systems reply, Fodor's (1991b) deviant causal chain reply, and Hauser's (1997) unconscious content reply. However, a new “essentialist” reply I construct shows that the CRA as presented by Searle is an unsound argument that relies on a question-begging appeal to intuition. My diagnosis of the CRA relies on an interpretation of computationalism as a scientific theory about the essential nature of intentional content; such theories often yield non-intuitive results in non-standard cases, and so cannot be judged by such intuitions. However, I further argue that the CRA can be transformed into a potentially valid argument against computationalism simply by reinterpreting it as an indeterminacy argument that shows that computationalism cannot explain the ordinary distinction between semantic content and sheer syntactic manipulation, and thus cannot be an adequate account of content. This conclusion admittedly rests on the arguable but plausible assumption that thought content is interestingly determinate. I conclude that the viability of computationalism and strong AI depends on their addressing the indeterminacy objection, but that it is currently unclear how this objection can be successfully addressed.
This paper reports on the Kuhnian revolution now occurring in neuropsychology that is finally supportive of and friendly to phenomenology – the “enactive” approach to the mind-body relation, grounded in the notion of self-organization, which is consistent with Husserl and Merleau-Ponty on virtually every point. According to the enactive approach, human minds understand the world by virtue of the ways our bodies can act relative to it, or the ways we can imagine acting. This requires that action be distinguished from passivity, that the mental be approached from a first person perspective, and that the cognitive capacities of the brain be grounded in the emotional and motivational processes that guide action and anticipate action affordances. It avoids the old intractable problems inherent in the computationalist approaches of twentieth century atomism and radical empiricism, and again allows phenomenology to bridge to neuropsychology in the way Merleau-Ponty was already doing over half a century ago.
The new kid on the block in cognitive science these days is dynamic systems. This way of thinking about the mind is, as usual, radically opposed to computationalism, the hypothesis that thinking is computing. The use of dynamic systems is just the latest in a series of attempts, from Searle's Chinese Room Argument through the weirdnesses of postmodernism, to overthrow computationalism, which as we all know is a perfectly nice hypothesis about the mind that never hurt anyone.
It has been over thirty years since the publication of Jerry Fodor’s landmark book The Language of Thought (LOT 1). In LOT 2: The Language of Thought Revisited, Fodor provides an update on his thoughts concerning a range of topics that have been the focus of his work in the intervening decades. The Representational Theory of Mind (RTM), the central thesis of LOT 1, remains intact in LOT 2: mental states are relations between organisms and syntactically-structured mental representations, and mental processes are computations defined over such representations. The differences between LOT 1 and LOT 2 are mostly differences of focus. Whereas LOT 1 had a number of targets—e.g. reductionism, behaviorism, empiricism, and operationalism—LOT 2 identifies “pragmatism” as the main enemy of the “Cartesian” kind of mentalism Fodor favors (pp. 11-12). Moreover, unlike LOT 1, a main aim of LOT 2 is to defend a theory of concepts that is atomistic and referentialist: lexical concepts lack structure, and their meaning is determined by their relation to the world and not by their relations to other concepts (pp. 16-20). In addition to new discussions of concepts and content, LOT 2 treats us to Fodor’s latest thoughts on compositionality, computationalism, nativism, nonconceptual content, and the causal theory of reference. Although those familiar with Fodor’s work over the last thirty years will find its main conclusions unsurprising, LOT 2 is nevertheless an exciting, breezily written book that’s full of stimulating arguments and (in standard Fodor style) immensely interesting digressions. In the Introduction, Fodor bundles together a number of distinct doctrines under “pragmatism”—e.g., that “knowing how is the paradigm cognitive state and it is prior to knowing that in the order of intentional explanation” (p. 10), and that “the distinctive function of the mind is guiding action” (p. 13).
But it’s clear by Chapter 2 that his main target is “concept pragmatism,” according to which concepts are individuated by their inferential properties. Fodor’s “Cartesianism,” in contrast, has it that none of the epistemic properties of concepts are constitutive.
The most cursory examination of the history of artificial intelligence highlights numerous egregious claims of its researchers, especially in relation to a populist form of ‘strong’ computationalism which holds that any suitably programmed computer instantiates genuine conscious mental states purely in virtue of carrying out a specific series of computations. The argument presented herein is a simple development of that originally presented in Putnam’s 1988 monograph, Representation and Reality (Bradford Books, Cambridge), which, if correct, has important implications for Turing machine functionalism and the prospect of ‘conscious’ machines. In the paper, instead of seeking to develop Putnam’s claim that “everything implements every finite state automata”, I will try to establish the weaker result that “everything implements the specific machine Q on a particular input set (x)”. Then, equating Q(x) to any putative AI program, I will show that conceding the ‘strong AI’ thesis for Q (crediting it with mental states and consciousness) opens the door to a vicious form of panpsychism whereby all open systems (e.g. grass, rocks, etc.) must instantiate conscious experience and hence that disembodied minds lurk everywhere.
I review a widely accepted argument to the conclusion that the contents of our beliefs, desires and other mental states cannot be causally efficacious in a classical computational model of the mind. I reply that this argument rests essentially on an assumption about the nature of neural structure that we have no good scientific reason to accept. I conclude that computationalism is compatible with wide semantic causal efficacy, and suggest how the computational model might be modified to accommodate this possibility.
Mind, it has recently been argued, is a thoroughly temporal phenomenon: so temporal, indeed, as to defy description and analysis using the traditional computational tools of cognitive scientific understanding. The proper explanatory tools, so the suggestion goes, are instead the geometric constructs and differential equations of Dynamical Systems Theory. I consider various aspects of the putative temporal challenge to computational understanding, and show that the root problem turns on the presence of a certain kind of causal web: a web that involves multiple components (both inner and outer) linked by chains of continuous and reciprocal causal influence. There is, however, no compelling route from such facts about causal and temporal complexity to the radical anti-computationalist conclusion. This is because, interactive complexities notwithstanding, the computational approach provides a kind of explanatory understanding that cannot (I suggest) be recreated using the alternative resources of pure Dynamical Systems Theory. In particular, it provides a means of mapping information flow onto causal structure -- a mapping that is crucial to understanding the distinctive kinds of flexibility and control characteristic of truly mindful engagements with the world. Where we confront especially complex interactive causal webs, however, it does indeed become harder to isolate the syntactic vehicles required by the computational approach. Dynamical Systems Theory, I conclude, may play a vital role in recovering such vehicles from the burgeoning mass of real-time interactive complexity.
What language allows us to do is to "steal" categories quickly and effortlessly through hearsay instead of having to earn them the hard way, through risky and time-consuming sensorimotor "toil" (trial-and-error learning, guided by corrective feedback from the consequences of miscategorisation). To make such linguistic "theft" possible, however, some, at least, of the denoting symbols of language must first be grounded in categories that have been earned through sensorimotor toil (or else in categories that have already been "prepared" for us through Darwinian theft by the genes of our ancestors); it cannot be linguistic theft all the way down. The symbols that denote categories must be grounded in the capacity to sort, label and interact with the proximal sensorimotor projections of their distal category-members in a way that coheres systematically with their semantic interpretations, both for individual symbols, and for symbols strung together to express truth-value-bearing propositions.
Computationalism. According to computationalism, to explain how the mind works, cognitive science needs to find out what the right computations are -- the same ones that the brain performs in order to generate the mind and its capacities. Once we know that, every system that performs those computations will have those mental states: every computer that runs the mind's program will have a mind, because computation is hardware-independent: any hardware that is running the right program has the right computational states.
Advocates of dynamic systems have suggested that higher mental processes are based on continuous representations. In order to evaluate this claim, we first define the concept of representation, and rigorously distinguish between discrete representations and continuous representations. We also explore two important bases of representational content. Then, we present seven arguments that discrete representations are necessary for any system that must discriminate between two or more states. It follows that higher mental processes require discrete representations. We also argue that discrete representations are more influenced by conceptual role than continuous representations. We end by arguing that the presence of discrete representations in cognitive systems entails that computationalism (i.e., the view that the mind is a computational device) is true, and that cognitive science should embrace representational pluralism.
Forthcoming in Cognitive Architecture: from bio-politics to noo-politics, eds. Deborah Hauptmann, Warren Neidich and Abdul-Karim Mustapha INTRODUCTION The cognitive and affective sciences have benefitted in the last twenty years from a rethinking of the long-dominant computer model of the mind espoused by the standard approaches of computationalism and connectionism. The development of this alternative, often named the “embodied mind” approach or the “4EA” approach (embodied, embedded, enactive, extended, affective), has relied on a trio of classical 20th century phenomenologists for its philosophical framework: Husserl, Heidegger, and Merleau-Ponty. In this essay I propose that the French thinker Gilles Deleuze can provide the conceptual framework that will enable us to thematize some unstated presuppositions of the 4EA school, as well as to sharpen, extend, and/or radicalize some of their explicit presuppositions. I highlight three areas here: 1) an ontology of distributed and differential systems, using Deleuze’s notion of the virtual; 2) a thought of multiple subjectification practices rather than a thought of “the” subject, even if it be seen as embodied and embedded; and 3) a rethinking of the notion of affect in order to thematize a notion of “political affect.” I will develop this proposal with reference to Bruce Wexler’s Brain and Culture, a work which resonates superbly with the Deleuzean approach.
We again press the case for computationalism by considering the latest in ill-conceived attacks on this foundational idea. We briefly but clearly define and delimit computationalism and then consider three authors from a new anti-computationalist collection.
Computationalism provides a framework for understanding how a mathematically describable physical world could give rise to conscious observations without the need for dualism. A criterion is proposed for the implementation of computations by physical systems, which has been a problem for computationalism. Together with an independence criterion for implementations this would allow, in principle, prediction of probabilities for various observations based on counting implementations. Applied to quantum mechanics, this results in a Many Computations Interpretation (MCI), which is an explicit form of the Everett style Many Worlds Interpretation (MWI). Derivation of the Born Rule emerges as the central problem for most realist interpretations of quantum mechanics. If the Born Rule is derived based on computationalism and the wavefunction it would provide strong support for the MWI; but if the Born Rule is shown not to follow from these to an experimentally falsified extent, it would indicate the necessity for either new physics or (more radically) new philosophy of mind.
If a brain is duplicated so that there are two brains in identical states, are there then two numerically distinct phenomenal experiences or only one? There are two, I argue, and given computationalism, this has implications for what it is to implement a computation. I then consider what happens when a computation is implemented in a system that either uses unreliable components or possesses varying degrees of parallelism. I show that in some of these cases there can be, in a deep and intriguing sense, a fractional (non-integer) number of qualitatively identical phenomenal experiences. This, in turn, has implications for what lessons one should draw from neural replacement scenarios such as Chalmers’s.
When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called "virtual" systems. If such a virtual system is interpretable as if it had a mind, is such a "virtual mind" real? This is the question addressed in this "virtual" symposium, originally conducted electronically among four cognitive scientists: Donald Perlis, a computer scientist, argues that according to the computationalist thesis, virtual minds are real and hence Searle's Chinese Room Argument fails, because if Searle memorized and executed a program that could pass the Turing Test in Chinese he would have a second, virtual, Chinese-understanding mind of which he was unaware (as in multiple personality). Stevan Harnad, a psychologist, argues that Searle's Argument is valid, virtual minds are just hermeneutic overinterpretations, and symbols must be grounded in the real world of objects, not just the virtual world of interpretations. Computer scientist Patrick Hayes argues that Searle's Argument fails, but because Searle does not really implement the program: A real implementation must not be homuncular but mindless and mechanical, like a computer. Only then can it give rise to a mind at the virtual level. Philosopher Ned Block suggests that there is no reason a mindful implementation would not be a real one.
DEFINING THE LIMITS OF THE FIELD. Because 'consciousness and the body' is central to so many philosophical endeavors, I cannot provide a comprehensive survey of recent work. So we must begin by limiting the scope of our inquiry. First, we will concentrate on work done in English or translated into English, simply to ensure ease of access to the texts under examination. Second, we will concentrate on work done in the last 15 years or so, since the early 1990s. Third, we will concentrate on those philosophers who treat both consciousness and the body together. Thus we will not treat philosophers who look at body representations in culture, nor philosophers who examine socio-political bodily practices with minimal or no reference to consciousness. Finally, even with the philosophers we choose to treat, we cannot be comprehensive and will instead make representative choices among their works. With that being said, we will have a fairly liberal definition of continental philosophy, operationally defined as that which makes (non-exclusive) reference to the classic phenomenology of Husserl, Heidegger, and Merleau-Ponty. Thus we will include the radical phenomenology of Michel Henry and Jacques Derrida, who refer to the phenomenological classics from within a 'purely' philosophical perspective, that is, one with little or no reference to the biological and cognitive sciences. We will also treat other thinkers who seek to use phenomenology in conjunction with the biological and cognitive sciences; in doing so we will examine the use of phenomenology to contest certain claims in analytic philosophy of mind, namely the representationalist interpretation of cognition in terms of computationalism and...
John R. Searle's problem of the Chinese Room poses an important philosophical challenge to the foundations of strong artificial intelligence, and to functionalist, cognitivist, and computationalist theories of mind. Searle has recently responded to three categories of criticisms of the Chinese Room and the consequences he attempts to draw from it, redescribing the essential features of the problem, and offering new arguments about the syntax-semantics gap it is intended to demonstrate. Despite Searle's defense, the Chinese Room remains ineffective as a counterexample, and poses no real threat to artificial intelligence or mechanist philosophy of mind. The thesis that intentionality is a primitive irreducible relation exemplified by biological phenomena is preferred in opposition to Searle's contrary claim that intentionality is a biological phenomenon exhibiting abstract properties.
This is a paper on George Rey's views of conceptual analysis (as presented in two versions of his paper on philosophical analysis, the second bearing the telling title "Philosophical Analysis as Cognitive Psychology: Thinking About Nothing"), and on his views on the a priori. Let me first mention that I am very happy to comment on these views, and to discuss them with George in a conference.[i] I have personally learned a lot from him; in particular, his computationalist view of a priori knowledge has greatly influenced my own thinking on the subject.
The intelligent-seeming deeds of computers are what occasion philosophical debate about artificial intelligence (AI) in the first place. Since evidence of AI is not bad, arguments against seem called for. John Searle's Chinese Room Argument (1980a, 1984, 1990, 1994) is among the most famous and long-running would-be answers to the call. Surprisingly, both the original thought experiment (1980a) and Searle's later would-be formalizations of the embedding argument (1984, 1990) are quite unavailing against AI proper (claims that computers do or someday will think). Searle lately even styles it a "misunderstanding" (1994, p. 547) to think the argument was ever so directed! The Chinese Room is now advertised to target Computationalism (claims that computation is what thought essentially is) exclusively. Despite its renown, the Chinese Room Argument is totally ineffective even against this target.
It is here argued that functionalist constraints on psychology do not preclude the applicability of classic forms of reduction and, therefore, do not support claims to a principled, or de jure, autonomy of psychology. In Part I, after isolating one minimal restriction any functionalist theory must impose on its categories, it is shown that any functionalism imposing an additional constraint of de facto autonomy must also be committed to a pure functionalist--that is, a computationalist--model for psychology. Using an extended parallel to the reduction of Mendelian to molecular genetics, it is shown in Parts II and III that, contrary to the claims of Hilary Putnam and Jerry Fodor, there is no inconsistency between computational models and classical reductionism: neither plurality of physical realization nor plurality of function is inconsistent with reductionism as defended by Ernest Nagel. Employing the results of Part I, the conclusions of Parts II and III are generalized in Part IV to cover any version of functionalism whatsoever; thus, functionalism and reductionism are shown to be consistent. It is urged in conclusion that although a de facto form of autonomy is defensible, there are sound methodological grounds for unconditionally rejecting any principled version of the autonomy of psychology.
In this essay I defend a theory of psychological explanation that is based on the joint commitment to direct reference and computationalism. I offer a new solution to the problem of Frege Cases. Frege Cases involve agents who are unaware that certain expressions corefer (e.g. that 'Cicero' and 'Tully' corefer), where such knowledge is relevant to the success of their behavior, leading to cases in which the agents fail to behave as the intentional laws predict. It is generally agreed that Frege Cases are a major problem, if not the major problem, that this sort of theory faces. In this essay, I hope to show that the theory can surmount the Frege Cases.
Good sciences have good metaphors. Indeed, good sciences are good because they have good metaphors. AI could use more good metaphors. In this editorial, I would like to propose a new metaphor to help us understand intelligence. Of course, whether the metaphor is any good or not depends on whether it actually does help us. (What I am going to propose is not something opposed to computationalism -- the hypothesis that cognition is computation. Noncomputational metaphors are in vogue these days, and to date they have all been equally plausible and equally successful. And, just to be explicit, I do not mean “IQ” by “intelligence.” I am using “intelligence” in the way AI uses it: as a semi-technical term referring to a general property of all intelligent systems, animal (including humans) or machine alike.)
The article presents a critique of John Searle's attack on computationalist theories of mind in his recent book, The Rediscovery of the Mind. Searle is guilty of caricaturing his opponents, and of ignoring their arguments. Moreover, his own positive theory of mind, which he claims "takes account of" subjectivity, turns out to offer no discernible advantages over the views he rejects.
Over the past several decades, the philosophical community has witnessed the emergence of an important new paradigm for understanding the mind. The paradigm is that of machine computation, and its influence has been felt not only in philosophy, but also in all of the empirical disciplines devoted to the study of cognition. Of the several strategies for applying the resources provided by computer and cognitive science to the philosophy of mind, the one that has gained the most attention from philosophers has been the Computational Theory of Mind (CTM). CTM was first articulated by Hilary Putnam (1960, 1961), but finds perhaps its most consistent and enduring advocate in Jerry Fodor (1975, 1980, 1981, 1987, 1990, 1994). It is this theory, and not any broader interpretations of what it would be for the mind to be a computer, that I wish to address in this paper. What I shall argue here is that the notion of symbolic representation employed by CTM is fundamentally unsuited to providing an explanation of the intentionality of mental states (a major goal of CTM), and that this result undercuts a second major goal of CTM, sometimes referred to as the vindication of intentional psychology. This line of argument is related to the discussions of derived intentionality by Searle (1980, 1983, 1984) and Sayre (1986, 1987). But whereas those discussions seem to be concerned with the causal dependence of familiar sorts of symbolic representation upon meaning-bestowing acts, my claim is rather that there is not one but several notions of meaning to be had, and that the notions that are applicable to symbols are conceptually dependent upon the notion that is applicable to mental states in the fashion that Aristotle referred to as paronymy.
That is, an analysis of the notions of meaning applicable to symbols reveals that they contain presuppositions about meaningful mental states, much as Aristotle's analysis of the sense of 'healthy' that is applied to foods reveals that it means 'conducive to having a healthy body', and hence any attempt to explain mental semantics in terms of the semantics of symbols is doomed to circularity and regress. I shall argue, however, that this does not have the consequence that computationalism is bankrupt as a paradigm for cognitive science, as it is possible to reconstruct CTM in a fashion that avoids these difficulties and makes it a viable research framework for psychology, albeit at the cost of losing its claims to explain intentionality and to vindicate intentional psychology. I have argued elsewhere (Horst, 1996) that local special sciences such as psychology do not require vindication in the form of demonstrating their reducibility to more fundamental theories, and hence failure to make good on these philosophical promises need not compromise the broad range of work in empirical cognitive science motivated by the computer paradigm in ways that do not depend on these problematic treatments of symbols.
We again press the case for computationalism by considering the latest in ill-conceived attacks on this foundational idea. We briefly but clearly define and delimit computationalism and then consider three authors from a new anti-computationalist collection.
What is the relation between computation and intentionality? Cognition presupposes intentionality (or semantics). This much is certain. So, if, according to computationalism, cognition is computation, then computation, too, presupposes…
Connectionism and computationalism are currently vying for hegemony in cognitive modeling. At first glance the opposition seems incoherent, because connectionism is itself computational, but the form of computationalism that has been the prime candidate for encoding the "language of thought" has been symbolic computationalism (Dietrich 1990; Fodor 1975; Harnad 1990c; Newell 1980; Pylyshyn 1984), whereas connectionism is nonsymbolic (Fodor & Pylyshyn 1988) or, as some have hopefully dubbed it, "subsymbolic" (Smolensky 1988). This paper will examine what is and is not a symbol system. A hybrid nonsymbolic/symbolic system will be sketched in which the meanings of the symbols are grounded bottom-up in the system's capacity to discriminate and identify the objects they refer to. Neural nets are one possible mechanism for learning the invariants in the analog sensory projection on which successful categorization is based. "Categorical perception" (Harnad 1987a), in which similarity space is "warped" in the service of categorization, turns out to be exhibited by both people and nets, and may mediate the constraints exerted by the analog world of objects on the formal world of symbols.
Lerdahl and Jackendoff's Generative Theory of Tonal Music (GTTM) is an important contribution to cognitive science. Jackendoff claims it is a computationalist theory and that the mental representations it postulates are unconscious. Thus GTTM looks to be a kind of cognitive science remote from the folk-psychological. I argue that this picture of GTTM is mistaken: GTTM is at least as much music analysis as cognitive science. Jackendoff's metatheory fails to explain how a listener can tell that a structural description corresponds to the way she hears, how analytically minded listeners can communicate about their hearing, and how a reader of their book can comprehend it. I suggest an alternative construal, on which GTTM's analytical vocabulary functions as a public language and its mental representations are perceptual beliefs. Interesting philosophical problems ensue about knowledge of musical structure and knowledge about what structures one hears. There is a paradox: one wants an analysis to be true to a hearing, yet to be illuminating. Though analysis and hearing coincide in content at an abstract level, they do not coincide in conceptual content. What sort of knowledge then underlies the inference from perceptual to music-analytical representation? I argue that such knowledge is a priori.
Computationalism says that brains are computing mechanisms, that is, mechanisms that perform computations. At present, there is no consensus on how to formulate computationalism precisely or adjudicate the dispute between computationalism and its foes, or between different versions of computationalism. An important reason for the current impasse is the lack of a satisfactory philosophical account of computing mechanisms. The main goal of this dissertation is to offer such an account. I also believe that the history of computationalism sheds light on the current debate. By tracing different versions of computationalism to their common historical origin, we can see how the current divisions originated and understand their motivation. Reconstructing debates over computationalism in the context of their own intellectual history can contribute to philosophical progress on the relation between brains and computing mechanisms and help determine how brains and computing mechanisms are alike, and how they differ. Accordingly, my dissertation is divided into a historical part, which traces the early history of computationalism up to 1946, and a philosophical part, which offers an account of computing mechanisms. The two main ideas developed in this dissertation are that (1) computational states are to be identified functionally, not semantically, and (2) computing mechanisms are to be studied by functional analysis. The resulting account of computing mechanisms, which I call the functional account of computing mechanisms, can be used to identify computing mechanisms and the functions they compute. I use the functional account of computing mechanisms to taxonomize computing mechanisms based on their different computing power, and I use this taxonomy of computing mechanisms to taxonomize different versions of computationalism based on the functional properties that they ascribe to brains.
By doing so, I begin to tease out empirically testable statements about the functional organization of the brain that different versions of computationalism are committed to. I submit that when computationalism is reformulated in the more explicit and precise way I propose, the disputes about computationalism can be adjudicated on the grounds of empirical evidence from neuroscience.