In this paper I address the triviality threat to computationalism. On the one hand, the claim that cognition involves computation, controversial and vague as it is, continues to be denied. On the other, contemporary physicists and philosophers alike claim that all physical processes are computational or algorithmic. The latter claim would vindicate computationalism only by making it utterly trivial. I will show that even if both of these claims were true, computationalism would not have to be trivial.
Mental representations, Swiatczak (Minds Mach 21:19–32, 2011) argues, are fundamentally biochemical and their operations depend on consciousness; hence the computational theory of mind, based as it is on multiple realisability and purely syntactic operations, must be wrong. Swiatczak, however, is mistaken. Computation, properly understood, can afford descriptions/explanations of any physical process, and since Swiatczak accepts that consciousness has a physical basis, his argument against computationalism must fail. Of course, we may not have much idea how consciousness (itself a rather unclear plurality of notions) might be implemented, but we do have a hypothesis—that all of our mental life, including consciousness, is the result of computational processes and so not tied to a biochemical substrate. Like it or not, the computational theory of mind remains the only game in town. (David Davenport, Computer Engineering Department, Bilkent University, 06800 Ankara, Turkey. Minds and Machines, pp. 1–8. DOI 10.1007/s11023-012-9271-5.)
Since the early eighties, computationalism in the study of the mind has been “under attack” by several critics of the so-called “classic” or “symbolic” approaches in AI and cognitive science. Computationalism was generically identified with such approaches. For example, it was identified with both Allen Newell and Herbert Simon’s Physical Symbol System Hypothesis and Jerry Fodor’s theory of the Language of Thought, usually without taking into account the fact that such approaches are very different as to their methods and aims. Zenon Pylyshyn, in his influential book Computation and Cognition, claimed that both Newell and Fodor deeply influenced his ideas on cognition as computation. This probably added to the confusion, as many people still consider Pylyshyn’s book paradigmatic of the computational approach in the study of the mind. Since then, cognitive scientists, AI researchers and also philosophers of mind have been asked to take sides on different “paradigms” that have from time to time been proposed as opponents of (classic or symbolic) computationalism. Examples of such oppositions are: computationalism vs. connectionism, computationalism vs. dynamical systems, computationalism vs. situated and embodied cognition, and computationalism vs. behavioural and evolutionary robotics. Our preliminary claim in section 1 is that computationalism should not be identified with what we would call the “paradigm (based on the metaphor) of the computer” (in the following, PoC). PoC is the (rather vague) statement that the mind functions “as a digital computer”. Actually, PoC is a restrictive version of computationalism, and nobody ever seriously upheld it, except in some rough versions of the computational approach and in some popular discussions about it. Usually, PoC is used as a straw man in many arguments against computationalism. In section 1 we look in some detail at PoC’s claims and argue that computationalism cannot be identified with PoC.
In section 2 we point out that certain anticomputationalist arguments are based on this misleading identification. In section 3 we suggest that the view of the levels of explanation proposed by David Marr could clarify certain points of the debate on computationalism. In section 4 we touch on a controversial issue, namely the possibility of developing a notion of analog computation, similar to the notion of digital computation. A short conclusion follows in section 5.
Some philosophers have conflated functionalism and computationalism. I reconstruct how this came about and uncover two assumptions that made the conflation possible. They are the assumptions that (i) psychological functional analyses are computational descriptions and (ii) everything may be described as performing computations. I argue that, if we want to improve our understanding of both the metaphysics of mental states and the functional relations between them, we should reject these assumptions.
Computationalism, a species of functionalism, posits that a mental state like pain is realized by a ‘core’ computational state within a particular causal network of such states. This entails that what is realized by the core state is contingent on events remote in space and time, which puts computationalism at odds with the locality principle of physics. If computationalism is amended to respect locality, then it posits that a type of phenomenal experience is determined by a single type of computational state. But a computational state, considered by itself, is of no determinate type—it has no particular symbolic content, since it could be embedded in any of an infinite number of algorithms. Hence, if locality is respected, then the type of experience that is realized by a computational state, or whether any experience at all is realized, is under-determined by the computational nature of the state. Accordingly, Block’s absent and inverted qualia arguments against functionalism find support in the locality principle of physics. If computationalism denies locality to avoid this problem, then it cannot be considered a physicalist theory, since it would entail a commitment to phenomena, like teleological causation and action-at-a-distance, that have long been rejected by modern science. The remaining theoretical alternative is to accept the locality principle for macro events and deny that formal, computational operations are sufficient to realize a phenomenal mental state.
Both the Cybersemiotics and Info-computationalist research programmes represent attempts to unify our understanding of information, knowledge and communication. The first takes into account phenomenological aspects of signification, insisting on the human experience "from within". The second adopts solely the view "from the outside", based on scientific practice, with an observing agent generating inter-subjective knowledge in a research community. The process of knowledge production, embodied in networks of cognizing agents interacting with the environment and developing through evolution, is studied on different levels of abstraction in both frames of reference. In order to develop scientifically tractable models of the evolution of intelligence in informational structures, from pre-biotic/chemical to living networked intelligent organisms, including the implementation of those models in artificial agents, a basic-level language of Info-Computationalism has been shown to be suitable. There are, however, contexts in which we deal with complex informational structures essentially dependent on human first-person knowledge, where a high-level language such as Cybersemiotics is the appropriate tool for conceptualization and communication. Two research projects are presented in order to exemplify the interplay of info-computational and higher-order approaches: the Blue Brain Project, where the brain is modeled as an info-computational system, a simulation in silico of biological brain function, and Biosemiotics research on genes, information, and semiosis, in which the process of semiosis is understood in info-computational terms. The article analyzes differences and convergences of the Cybersemiotics and Info-computationalist approaches, which, by placing focus on distinct levels of organization, help elucidate processes of knowledge production in intelligent agents.
Wittgenstein’s views invite a modest, functionalist account of mental states and regularities, or more specifically a causal/computational, representational theory of the mind (CRTT). It is only by understanding Wittgenstein’s remarks in the context of a theory like CRTT that his insights have any real force; and it is only by recognizing those insights that CRTT can begin to account for sensations and our thoughts about them. For instance, Wittgenstein’s (in)famous remark that “an inner process stands in need of outward criteria” (PI: §580), so implausible read behaviorally, is entirely plausible if the “outward” is allowed to include computational facts about our brains. But what is especially penetrating about Wittgenstein’s discussion is his unique diagnosis of our puzzlement in this area, in particular his suggestion that it is due to our captivation by “pictures” whose application to reality is left crucially under-specified. What sustains the naive picture is not a captivation by language but, at least in part, our largely involuntary reactions to things that look and act like our conspecifics. We project a property into them correlative to that reaction in ourselves, and are, indeed, unwilling to project it into things that do not induce that reaction.
This paper challenges two orthodox theses: (a) that computational processes must be algorithmic; and (b) that all computed functions must be Turing-computable. Section 2 advances the claim that work in computability theory, including Turing's analysis of the effectively computable functions, does not substantiate the two theses. It is then shown (Section 3) that we can describe a system that computes a number-theoretic function which is not Turing-computable. The argument against the first thesis proceeds in two stages. It is first shown (Section 4) that whether a process is algorithmic depends on the way we describe the process. It is then argued (Section 5) that systems compute even if their processes are not described as algorithmic. The paper concludes with a suggestion for a semantic approach to computation.
In this paper I discuss Searle's claim that the computational properties of a system could never cause a system to be conscious. In the first section of the paper I argue that Searle is correct that, even if a system both behaves in a way that is characteristic of conscious agents (like ourselves) and has a computational structure similar to those agents, one cannot be certain that that system is conscious. On the other hand, I suggest that Searle's intuition that it is “empirically absurd” that such a system could be conscious is unfounded. In the second section I show that Searle's attempt to show that a system's computational states could not possibly cause it to be conscious is based upon an erroneous distinction between computational and physical properties. On the basis of these two arguments, I conclude that, supposing that the behavior of conscious agents can be explained in terms of their computational properties, we have good reason to suppose that a system having computational properties similar to such agents is also conscious.
Computationalism has been the mainstream view of cognition for decades. There are periodic reports of its demise, but they are greatly exaggerated. This essay surveys some recent literature on computationalism. It concludes that computationalism is a family of theories about the mechanisms of cognition. The main relevant evidence for testing it comes from neuroscience, though psychology and AI are relevant too. Computationalism comes in many versions, which continue to guide competing research programs in philosophy of mind as well as psychology and neuroscience. Although our understanding of computationalism has deepened in recent years, much work in this area remains to be done.
Defending or attacking either functionalism or computationalism requires clarity on what they amount to and what evidence counts for or against them. My goal here is not to evaluate their plausibility. My goal is to formulate them and their relationship clearly enough that we can determine which type of evidence is relevant to them. I aim to dispel some sources of confusion that surround functionalism and computationalism, recruit recent philosophical work on mechanisms and computation to shed light on them, and clarify how functionalism and computationalism may or may not legitimately come together.
Roughly speaking, computationalism says that cognition is computation, or that cognitive phenomena are explained by the agent’s computations. The cognitive processes and behavior of agents are the explanandum. The computations performed by the agents’ cognitive systems are the proposed explanans. Since the cognitive systems of biological organisms are their nervous systems (plus or minus a bit), we may say that according to computationalism, the cognitive processes and behavior of organisms are explained by neural computations. Some people might prefer to say that cognitive systems are “realized” by nervous systems, and thus that—according to computationalism—cognitive computations are “realized” by neural processes. In this paper, nothing hinges on the nature of the relation between cognitive systems and nervous systems, or between computations and neural processes. For present purposes, if a neural process realizes a computation, then that neural process is a computation. Thus, I will couch much of my discussion in terms of nervous systems and neural computation. Before proceeding, we should dispense with a possible red herring. Contrary to a common assumption, computationalism does not stand in opposition to connectionism. Connectionism, in the most general and common sense of the term, is the claim that cognitive phenomena are explained (at some level and at least in part) by the processes of neural networks. This is a truism, supported by most neuroscientific evidence. Everybody ought to be a connectionist in this general sense. The relevant question is, are neural processes computations? More precisely, are the neural processes to be found in the nervous systems of organisms computations? Computationalists say “yes”, anti-computationalists say “no”.
This paper investigates whether any of the arguments on offer against computationalism have a chance at knocking it off. Ever since Warren McCulloch and Walter Pitts (1943) first proposed it, computationalism has been subjected to a wide range of objections.
The emergence of cognitive science as a multi-disciplinary investigation into the nature of mind has historically revolved around the core assumption that the central ‘cognitive’ aspects of mind are computational in character. Although there is some disagreement and philosophical speculation concerning the precise formulation of this ‘core assumption’, it is generally agreed that computationalism in some form lies at the heart of cognitive science as it is currently conceived. Von Eckardt’s recent work on this topic is useful in enabling us to get a sense of the scope of the computational assumption. She makes clear that there are two rather different ways in which we could understand cognitive science’s commitment to computationalism, and hence two ways to understand the claim that the ‘mind is a computer’: by appeal to either (1) a mathematical theory of computability or (2) a theory of data-processing or information-processing. Importantly, she also argues that although there are many aspects of the claim that the ‘mind is a computer’ that can be nicely captured by Boyd’s account of the way scientific metaphors are employed (not to direct attention to the hitherto unnoticed, but to encourage investigation of the unknown), cognitive scientists are nonetheless not making the claim that the ‘mind is a computer’ in a metaphorical sense. If Von Eckardt is correct, when cognitive scientists assume the ‘mind is a computer’ and give a sense to the notion of the computer in the sense of (2) above, they are making a literal claim about the nature of mind (Von Eckardt, 1993, p. 116). And, as she points out, if one reads (2) in a theoretically committed way, then there is no a priori reason to exclude the organic brain from the list of entities that might fall under the description of being a ‘computer’; importantly, we can truly describe it as a data-processing (or information-processing) device.
What is useful about Von Eckardt’s general analysis of computationalism’s core assumption is that it provides a clear angle from which to view the flaws of computationalism. This paper defends the claim that if there is an account of information adequate to capture those aspects of mind that we regard as essential to mentality, it is one that requires us to surrender the idea that the mind is a computer.
The Church–Turing Thesis (CTT) is often employed in arguments for computationalism. I scrutinize the most prominent of such arguments in light of recent work on CTT and argue that they are unsound. Although CTT does nothing to support computationalism, it is not irrelevant to it. By eliminating misunderstandings about the relationship between CTT and computationalism, we deepen our appreciation of computationalism as an empirical hypothesis.
Computationalism, the notion that cognition is computation, is a working hypothesis of many AI researchers and cognitive scientists. Although it has not been proved, neither has it been disproved. In this paper, I offer refutations of some well-known alleged refutations of computationalism. My arguments have two themes: people are more limited than is often recognized in these debates; computer systems are more complicated than is often recognized in these debates. To underline the latter point, I sketch the design and abilities of a possible embodied computer system.
Computationalist theories of mind require brain symbols, that is, neural events that represent kinds or instances of kinds. Standard models of computation require multiple inscriptions of symbols with the same representational content. The satisfaction of two conditions makes it easy to see how this requirement is met in computers, but we have no reason to think that these conditions are satisfied in the brain. Thus, if we wish to give computationalist explanations of human cognition, without committing ourselves a priori to a strong and unsupported claim in neuroscience, we must first either explain how we can provide multiple brain symbols with the same content, or explain how we can abandon standard models of computation. It is argued that both of these alternatives require us to explain the execution of complex tasks that have a cognition-like structure. Circularity or regress are thus threatened, unless noncomputationalist principles can provide the required explanations. But in the latter case, we do not know that noncomputationalist principles might not bear most of the weight of explaining cognition. Four possible types of computationalist theory are discussed; none appears to provide a promising solution to the problem. Thus, despite known difficulties in noncomputationalist investigations, we have every reason to pursue the search for noncomputationalist principles in cognitive theory.
The following paper presents a characterization of three distinctions fundamental to computationalism, viz., the distinction between analog and digital machines, representation and nonrepresentation-using systems, and direct and indirect perceptual processes. Each distinction is shown to rest on nothing more than the methodological principles which justify the explanatory framework of the special sciences.
A working hypothesis of computationalism is that Mind arises, not from the intrinsic nature of the causal properties of particular forms of matter, but from the organization of matter. If this hypothesis is correct, then a wide range of physical systems (e.g. optical, chemical, various hybrids, etc.) should support Mind, especially computers, since they have the capability to create/manipulate organizations of bits of arbitrary complexity and dynamics. In any particular computer, these bit patterns are quite physical, but their particular physicality is considered irrelevant (since they could be replaced by other physical substrata).
Summary. A distinction is made between two senses of the claim “cognition is computation”. One sense, the opaque reading, takes computation to be whatever is described by our current computational theory and claims that cognition is best understood in terms of that theory. The transparent reading, which has its primary allegiance to the phenomenon of computation, rather than to any particular theory of it, is the claim that the best account of cognition will be given by whatever theory turns out to be the best account of the phenomenon of computation. The distinction is clarified and defended against charges of circularity and changing the subject. Several well-known objections to computationalism are then reviewed, and for each the question of whether the transparent reading of the computationalist claim can provide a response is considered.
Computationalism is the claim that all possible thoughts are computations, i.e. executions of algorithms. The aim of the paper is to show that if intentionality is semantically clear, in a way defined in the paper, then computationalism must be false. Using a convenient version of the phenomenological relation of intentionality and a diagonalization device inspired by Thomson's theorem of 1962, we show there exists a thought that cannot be a computation.
Harnad and I agree that the Chinese Room Argument deals a knockout blow to Strong AI, but beyond that point we do not agree on much at all. So let's begin by pondering the implications of the Chinese Room. The Chinese Room shows that a system, me for example, could pass the Turing Test for understanding Chinese, for example, and could implement any program you like and still not understand a word of Chinese. Now, why? What does the genuine Chinese speaker have that I in the Chinese Room do not have? The answer is obvious. I, in the Chinese room, am manipulating a bunch of formal symbols; but the Chinese speaker has more than symbols, he knows what they mean. That is, in addition to the syntax of Chinese, the genuine Chinese speaker has a semantics in the form of meaning, understanding, and mental contents generally.
The most cursory examination of the history of artificial intelligence highlights numerous egregious claims of its researchers, especially in relation to a populist form of ‘strong’ computationalism which holds that any suitably programmed computer instantiates genuine conscious mental states purely in virtue of carrying out a specific series of computations. The argument presented herein is a simple development of that originally presented in Putnam’s monograph Representation & Reality (Bradford Books, Cambridge, 1988), which, if correct, has important implications for Turing machine functionalism and the prospect of ‘conscious’ machines. In the paper, instead of seeking to develop Putnam’s claim that “everything implements every finite state automata”, I will try to establish the weaker result that “everything implements the specific machine Q on a particular input set (x)”. Then, equating Q(x) to any putative AI program, I will show that conceding the ‘strong AI’ thesis for Q (crediting it with mental states and consciousness) opens the door to a vicious form of panpsychism whereby all open systems (e.g. grass, rocks, etc.) must instantiate conscious experience, and hence that disembodied minds lurk everywhere.
In this reply to James H. Fetzer’s “Minds and Machines: Limits to Simulations of Thought and Action”, I argue that computationalism should not be the view that (human) cognition is computation, but that it should be the view that cognition (simpliciter) is computable. It follows that computationalism can be true even if (human) cognition is not the result of computations in the brain. I also argue that, if semiotic systems are systems that interpret signs, then both humans and computers are semiotic systems. Finally, I suggest that minds can be considered as virtual machines implemented in certain semiotic systems, primarily the brain, but also AI computers. In doing so, I take issue with Fetzer’s arguments to the contrary.
I review a widely accepted argument to the conclusion that the contents of our beliefs, desires and other mental states cannot be causally efficacious in a classical computational model of the mind. I reply that this argument rests essentially on an assumption about the nature of neural structure that we have no good scientific reason to accept. I conclude that computationalism is compatible with wide semantic causal efficacy, and suggest how the computational model might be modified to accommodate this possibility.
Since the cognitive revolution, it’s become commonplace that cognition involves both computation and information processing. Is this one claim or two? Is computation the same as information processing? The two terms are often used interchangeably, but this usage masks important differences. In this paper, we distinguish information processing from computation and examine some of their mutual relations, shedding light on the role each can play in a theory of cognition. We recommend that theorists of cognition be explicit and careful in choosing notions of computation and information and connecting them together. Much confusion can be avoided by doing so. Keywords: computation, information processing, computationalism, computational theory of mind, cognitivism.
It is shown that Fodor's interpretation of the frame problem is the central indication that his version of the Modularity Thesis is incompatible with computationalism. Since computationalism is far more plausible than this thesis, the latter should be rejected.
The principal temptation toward substance dualisms, or otherwise incorporating a question begging homunculus into our psychologies, arises not from the problem of consciousness in general, nor from the problem of intentionality, but from the question of our awareness and understanding of our own mental contents, and the control of the deliberate, conscious thinking in which we employ them. Dennett has called this "Hume's problem". Cognitivist philosophers have generally either denied the experiential reality of thought, as did the Behaviorists, or have taken an implicitly epiphenomenalist stance, a form of dualism. Some sort of mental duality may indeed be required to meet this problem, but not one that is metaphysical or question begging. I argue that it can be solved in the light of Paivio's "Dual Coding" theory of mental representation. This theory, which is strikingly simple and intuitive (perhaps too much so to have caught the imagination of philosophers) has demonstrated impressive empirical power and scope. It posits two distinct systems of potentially conscious representations in the human mind: mental imagery and verbal representation (which is not to be confused with 'propositional' or "mentalese" representation). I defend, on conceptual grounds, Paivio's assertion of precisely two codes against interpretations which would either multiply image codes to match sense modes, or collapse the two, admittedly interacting, systems into one. On this basis I argue that the inference that a conscious agent would be needed to read such mental representations and to manipulate them in the light of their contents can be pre-empted by an account of how the two systems interact, each registering, affecting and being affected by developing associative processes within the other.
ABSTRACT. Thought experiments about de se attitudes and Jackson’s original Knowledge Argument are compared with each other and discussed from the perspective of a computational theory of mind. It is argued that internal knowledge, i.e. knowledge formed on the basis of signals that encode aspects of their own processing rather than being intentionally directed towards external objects, suffices for explaining the seminal puzzles without resorting to acquaintance or phenomenal character as primitive notions. Since computationalism is ontologically neutral, the account also explains why neither Lewis’s two gods nor Mary’s surprise in the Knowledge Argument violate physicalism.
Computers today are not only calculation tools; they are directly (inter)acting in the physical world, which itself may be conceived of as a universal computer (Zuse, Fredkin, Wolfram, Chaitin, Lloyd). In expanding its domain from abstract logical symbol manipulation to physical, embedded and networked devices, computing goes beyond the Church-Turing limit (Copeland, Siegelman, Burgin, Schachter). Computational processes are distributed, reactive, interactive, agent-based and concurrent. The main criterion of success of a computation is not its termination, but the adequacy of its response: its speed, generality and flexibility; adaptability; and tolerance to noise, error, faults, and damage. Interactive computing is a generalization of Turing computing, and it calls for new conceptualizations (Goldin, Wegner). In the info-computationalist framework, with computation seen as information processing, natural computation appears as the most suitable paradigm of computation, and information semantics requires logical pluralism.
Which notion of computation (if any) is essential for explaining cognition? Five answers to this question are discussed in the paper. (1) The classicist answer: symbolic (digital) computation is required for explaining cognition; (2) The broad digital computationalist answer: digital computation broadly construed is required for explaining cognition; (3) The connectionist answer: sub-symbolic computation is required for explaining cognition; (4) The computational neuroscientist answer: neural computation (that, strictly, is neither digital nor analogue) is required for explaining cognition; (5) The extreme dynamicist answer: computation is not required for explaining cognition. The first four answers are only accurate to a first approximation. But the “devil” is in the details. The last answer cashes in on the parenthetical “if any” in the question above. The classicist argues that cognition is symbolic computation. But digital computationalism need not be equated with classicism. Indeed, computationalism can, in principle, range from digital (and analogue) computationalism through (the weaker thesis of) generic computationalism to (the even weaker thesis of) digital (or analogue) pancomputationalism. Connectionism, which has traditionally been criticised by classicists for being non-computational, can be plausibly construed as being either analogue or digital computationalism (depending on the type of connectionist networks used). Computational neuroscience invokes the notion of neural computation that may (possibly) be interpreted as a sui generis type of computation. The extreme dynamicist argues that the time has come for a post-computational cognitive science. This paper is an attempt to shed some light on this debate by examining various conceptions and misconceptions of (particularly digital) computation.
In this paper I place Jim Fetzer's esemplastic burial of the computational conception of mind within the context of both my own burial and the theory of mind I would put in place of this dead doctrine. My view..
The paper presents a paradoxical feature of computational systems that suggests that computationalism cannot explain symbol grounding. If the mind is a digital computer, as computationalism claims, then it can be computing either over meaningful symbols or over meaningless symbols. If it is computing over meaningful symbols its functioning presupposes the existence of meaningful symbols in the system, i.e. it implies semantic nativism. If the mind is computing over meaningless symbols, no intentional cognitive processes are available prior to symbol grounding. In this case, no symbol grounding could take place since any grounding presupposes intentional cognitive processes. So, whether computing in the mind is over meaningless or over meaningful symbols, computationalism implies semantic nativism.
My purpose in this brief paper is to consider the implications of a radically different computer architecture for some fundamental problems in the foundations of Cognitive Science. More exactly, I wish to consider the ramifications of the 'Gödel-Minds-Machines' controversy of the late 1960s for a dynamically changing computer architecture which, I venture to suggest, is going to revolutionize which 'functions' of the human mind can and cannot be modelled by (non-human) computational automata. I will proceed on the presupposition that the reader is familiar with some of the fundamentals of computational theory and mathematical logic.
A central challenge for any theory of concept learning comes from Fodor’s argument against the learning of concepts, which lies at the basis of contemporary computationalist accounts of the mind. Robert Goldstone and his colleagues propose a theory of perceptual learning that attempts to overcome Fodor’s challenge. Its main component is the addition of a cognitive device at the interface of perception and conception, which slowly builds “cognitive symbols” out of perceptual stimuli. Two main mechanisms of concept creation are unitization and differentiation. In this paper, I will present and examine their theory, and will show that two problems prevent this reply from being a successful answer to Fodor's challenge. To amend the theory, I will argue that one would need to say more about the input systems to unitization and differentiation, and be clearer on the representational format that they are able to operate upon. Until these issues have been addressed, the proposal does not deploy its full potential to threaten a Fodorian position.
What counts as a computation and how it relates to cognitive function are important questions for scientists interested in understanding how the mind thinks. This paper argues that pragmatic aspects of explanation ultimately determine how we answer those questions by examining what is needed to make rigorous the notion of computation used in the (cognitive) sciences. It (1) outlines the connection between the Church-Turing Thesis and computational theories of physical systems, (2) differentiates merely satisfying a computational function from true computation, and finally (3) relates how we determine a true computation to the functional methodology in cognitive science. All of the discussion will be directed toward showing that the only way to connect formal notions of computation to empirical theory will be in virtue of the pragmatic aspects of explanation.
When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called ‘virtual’ systems. If such a virtual system is interpretable as if it had a mind, is such a ‘virtual mind’ real? This is the question addressed in this ‘virtual’ symposium, originally conducted electronically among four cognitive scientists. Donald Perlis, a computer scientist, argues that according to the computationalist thesis, virtual minds are real and hence Searle's Chinese Room Argument fails, because if Searle memorized and executed a program that could pass the Turing Test in Chinese he would have a second, virtual, Chinese-understanding mind of which he was unaware (as in multiple personality). Stevan Harnad, a psychologist, argues that Searle's Argument is valid, virtual minds are just hermeneutic overinterpretations, and symbols must be grounded in the real world of objects, not just the virtual world of interpretations. Computer scientist Patrick Hayes argues that Searle's Argument fails, but because Searle does not really implement the program: a real implementation must not be homuncular but mindless and mechanical, like a computer. Only then can it give rise to a mind at the virtual level. Philosopher Ned Block suggests that there is no reason a mindful implementation would not be a real one.
What Robots Can and Can't Be (hereinafter Robots) is, as Selmer Bringsjord says, "intended to be a collection of formal-arguments-that-border-on-proofs for the proposition that in all worlds, at all times, machines can't be minds" (Bringsjord, forthcoming). In his (1994) "Précis of What Robots Can and Can't Be," Bringsjord styles certain of these arguments as proceeding "repeatedly . . . through instantiations of" the "simple schema".