The computational argument for individualism, which moves from computationalism to individualism about the mind, is problematic, not because computationalism is false, but because computational psychology is, at least sometimes, wide. The paper provides an early version, or perhaps a predecessor, of the thesis of extended cognition.
Some philosophers have conflated functionalism and computationalism. I reconstruct how this came about and uncover two assumptions that made the conflation possible. They are the assumptions that (i) psychological functional analyses are computational descriptions and (ii) everything may be described as performing computations. I argue that, if we want to improve our understanding of both the metaphysics of mental states and the functional relations between them, we should reject these assumptions.
Computationalism has been the mainstream view of cognition for decades. There are periodic reports of its demise, but they are greatly exaggerated. This essay surveys some recent literature on computationalism. It concludes that computationalism is a family of theories about the mechanisms of cognition. The main relevant evidence for testing it comes from neuroscience, though psychology and AI are relevant too. Computationalism comes in many versions, which continue to guide competing research programs in philosophy of mind as well as psychology and neuroscience. Although our understanding of computationalism has deepened in recent years, much work in this area remains to be done.
This paper argues for a noncognitivist computationalism in the philosophy of mind. It further argues that both humans and computers have intentionality; that is, their mental states are semantical: they are about things in their worlds.
The Church–Turing Thesis (CTT) is often employed in arguments for computationalism. I scrutinize the most prominent of such arguments in light of recent work on CTT and argue that they are unsound. Although CTT does nothing to support computationalism, it is not irrelevant to it. By eliminating misunderstandings about the relationship between CTT and computationalism, we deepen our appreciation of computationalism as an empirical hypothesis.
In this paper, I want to deal with the triviality threat to computationalism. On the one hand, the claim that cognition involves computation, controversial and vague as it is, is still denied. On the other, contemporary physicists and philosophers alike claim that all physical processes are indeed computational or algorithmic. The latter claim would vindicate computationalism by making it utterly trivial. I will show that even if both claims were true, computationalism would not have to be trivial.
In this paper I defend the classical computational account of reasoning against a range of highly influential objections, sometimes called relevance problems. Such problems are closely associated with the frame problem in artificial intelligence and, to a first approximation, concern the issue of how humans are able to determine which of a range of representations are relevant to the performance of a given cognitive task. Though many critics maintain that the nature and existence of such problems provide grounds for rejecting classical computationalism, I show that this is not so. Some of these putative problems are a cause for concern only on highly implausible assumptions about the extent of our cognitive capacities, whilst others are a cause for concern only on similarly implausible views about the commitments of classical computationalism. Finally, some versions of the relevance problem are not really objections but hard research issues that any satisfactory account of cognition needs to address. I conclude by considering the diagnostic issue of why accounts of cognition in general, and classical computational accounts in particular, have fared so poorly in addressing such research issues. Keywords: Computationalism; Frame problem; Relevance.
Mental representations, Swiatczak (Minds Mach 21:19–32, 2011) argues, are fundamentally biochemical and their operations depend on consciousness; hence the computational theory of mind, based as it is on multiple realisability and purely syntactic operations, must be wrong. Swiatczak, however, is mistaken. Computation, properly understood, can afford descriptions/explanations of any physical process, and since Swiatczak accepts that consciousness has a physical basis, his argument against computationalism must fail. Of course, we may not have much idea how consciousness (itself a rather unclear plurality of notions) might be implemented, but we do have a hypothesis: that all of our mental life, including consciousness, is the result of computational processes and so not tied to a biochemical substrate. Like it or not, the computational theory of mind remains the only game in town. (David Davenport, Minds and Machines, DOI 10.1007/s11023-012-9271-5.)
Computationalism, the notion that cognition is computation, is a working hypothesis of many AI researchers and Cognitive Scientists. Although it has not been proved, neither has it been disproved. In this paper, I give some refutations to some well-known alleged refutations of computationalism. My arguments have two themes: people are more limited than is often recognized in these debates; computer systems are more complicated than is often recognized in these debates. To underline the latter point, I sketch the design and abilities of a possible embodied computer system.
Summary. A distinction is made between two senses of the claim “cognition is computation”. One sense, the opaque reading, takes computation to be whatever is described by our current computational theory and claims that cognition is best understood in terms of that theory. The transparent reading, which has its primary allegiance to the phenomenon of computation, rather than to any particular theory of it, is the claim that the best account of cognition will be given by whatever theory turns out to be the best account of the phenomenon of computation. The distinction is clarified and defended against charges of circularity and changing the subject. Several well-known objections to computationalism are then reviewed, and for each the question of whether the transparent reading of the computationalist claim can provide a response is considered.
Since the early eighties, computationalism in the study of the mind has been “under attack” by several critics of the so-called “classic” or “symbolic” approaches in AI and cognitive science. Computationalism was generically identified with such approaches. For example, it was identified with both Allen Newell and Herbert Simon’s Physical Symbol System Hypothesis and Jerry Fodor’s theory of Language of Thought, usually without taking into account the fact that such approaches are very different as to their methods and aims. Zenon Pylyshyn, in his influential book Computation and Cognition, claimed that both Newell and Fodor deeply influenced his ideas on cognition as computation. This probably added to the confusion, as many people still consider Pylyshyn’s book paradigmatic of the computational approach in the study of the mind. Since then, cognitive scientists, AI researchers and also philosophers of the mind have been asked to take sides on different “paradigms” that have from time to time been proposed as opponents of (classic or symbolic) computationalism. Examples of such oppositions are: computationalism vs. connectionism, computationalism vs. dynamical systems, computationalism vs. situated and embodied cognition, and computationalism vs. behavioural and evolutionary robotics. Our preliminary claim in section 1 is that computationalism should not be identified with what we would call the “paradigm (based on the metaphor) of the computer” (in the following, PoC). PoC is the (rather vague) statement that the mind functions “as a digital computer”. Actually, PoC is a restrictive version of computationalism, and nobody ever seriously upheld it, except in some rough versions of the computational approach and in some popular discussions about it. Usually, PoC is used as a straw man in many arguments against computationalism. In section 1 we look in some detail at PoC’s claims and argue that computationalism cannot be identified with PoC.
In section 2 we point out that certain anticomputationalist arguments are based on this misleading identification. In section 3 we suggest that the view of the levels of explanation proposed by David Marr could clarify certain points of the debate on computationalism. In section 4 we touch on a controversial issue, namely the possibility of developing a notion of analog computation, similar to the notion of digital computation. A short conclusion follows in section 5.
In this paper, the author reviews the typical objections against the claim that brains are computers or, to be more precise, information-processing mechanisms. By showing that practically all the popular objections are based on uncharitable interpretations of the claim, he argues that the claim is likely to be true, relevant to contemporary cognitive science, and non-trivial.
In this paper, I argue that computationalism is a progressive research tradition. Its metaphysical assumptions are that nervous systems are computational, and that information processing is necessary for cognition to occur. First, the primary reasons why information processing should explain cognition are reviewed. Then I argue that early formulations of these reasons are outdated. However, by relying on the mechanistic account of physical computation, they can be recast in a compelling way. Next, I contrast two computational models of working memory to show how modeling has progressed over the years. The methodological assumptions of new modeling work are best understood in the mechanistic framework, which is evidenced by the way in which models are empirically validated. Moreover, the methodological and theoretical progress in computational neuroscience vindicates the new mechanistic approach to explanation, which, at the same time, justifies the best practices of computational modeling. Overall, computational modeling is deservedly successful in cognitive science. Its successes are related to deep conceptual connections between cognition and computation. Computationalism is not only here to stay, it becomes stronger every year.
This paper challenges two orthodox theses: (a) that computational processes must be algorithmic; and (b) that all computed functions must be Turing-computable. Section 2 advances the claim that the works in computability theory, including Turing's analysis of the effective computable functions, do not substantiate the two theses. It is then shown (Section 3) that we can describe a system that computes a number-theoretic function which is not Turing-computable. The argument against the first thesis proceeds in two stages. It is first shown (Section 4) that whether a process is algorithmic depends on the way we describe the process. It is then argued (Section 5) that systems compute even if their processes are not described as algorithmic. The paper concludes with a suggestion for a semantic approach to computation.
Roughly speaking, computationalism says that cognition is computation, or that cognitive phenomena are explained by the agent's computations. The cognitive processes and behavior of agents are the explanandum. The computations performed by the agents' cognitive systems are the proposed explanans. Since the cognitive systems of biological organisms are their nervous systems (plus or minus a bit), we may say that according to computationalism, the cognitive processes and behavior of organisms are explained by neural computations. Some people might prefer to say that cognitive systems are "realized" by nervous systems, and thus that, according to computationalism, cognitive computations are "realized" by neural processes. In this paper, nothing hinges on the nature of the relation between cognitive systems and nervous systems, or between computations and neural processes. For present purposes, if a neural process realizes a computation, then that neural process is a computation. Thus, I will couch much of my discussion in terms of nervous systems and neural computation. Before proceeding, we should dispense with a possible red herring. Contrary to a common assumption, computationalism does not stand in opposition to connectionism. Connectionism, in the most general and common sense of the term, is the claim that cognitive phenomena are explained (at some level and at least in part) by the processes of neural networks. This is a truism, supported by most neuroscientific evidence. Everybody ought to be a connectionist in this general sense. The relevant question is, are neural processes computations? More precisely, are the neural processes to be found in the nervous systems of organisms computations? Computationalists say "yes", anti-computationalists say "no".
This paper investigates whether any of the arguments on offer against computationalism have a chance at knocking it off. Ever since Warren McCulloch and Walter Pitts (1943) first proposed it, computationalism has been subjected to a wide range of objections.
In this paper I place Jim Fetzer's esemplastic burial of the computational conception of mind within the context of both my own burial and the theory of mind I would put in place of this dead doctrine. My view...
What counts as a computation and how it relates to cognitive function are important questions for scientists interested in understanding how the mind thinks. This paper argues that pragmatic aspects of explanation ultimately determine how we answer those questions by examining what is needed to make rigorous the notion of computation used in the (cognitive) sciences. It (1) outlines the connection between the Church-Turing Thesis and computational theories of physical systems, (2) differentiates merely satisfying a computational function from true computation, and finally (3) relates how we determine a true computation to the functional methodology in cognitive science. All of the discussion will be directed toward showing that the only way to connect formal notions of computation to empirical theory will be in virtue of the pragmatic aspects of explanation.
Computationalism is the claim that all possible thoughts are computations, i.e. executions of algorithms. The aim of the paper is to show that if intentionality is semantically clear, in a way defined in the paper, then computationalism must be false. Using a convenient version of the phenomenological relation of intentionality and a diagonalization device inspired by Thomson's theorem of 1962, we show there exists a thought that cannot be a computation.
Defending or attacking either functionalism or computationalism requires clarity on what they amount to and what evidence counts for or against them. My goal here is not to evaluate their plausibility. My goal is to formulate them and their relationship clearly enough that we can determine which type of evidence is relevant to them. I aim to dispel some sources of confusion that surround functionalism and computationalism, recruit recent philosophical work on mechanisms and computation to shed light on them, and clarify how functionalism and computationalism may or may not legitimately come together.
Harnad and I agree that the Chinese Room Argument deals a knockout blow to Strong AI, but beyond that point we do not agree on much at all. So let's begin by pondering the implications of the Chinese Room. The Chinese Room shows that a system, me for example, could pass the Turing Test for understanding Chinese, for example, and could implement any program you like and still not understand a word of Chinese. Now, why? What does the genuine Chinese speaker have that I in the Chinese Room do not have? The answer is obvious. I, in the Chinese room, am manipulating a bunch of formal symbols; but the Chinese speaker has more than symbols, he knows what they mean. That is, in addition to the syntax of Chinese, the genuine Chinese speaker has a semantics in the form of meaning, understanding, and mental contents generally.
Computationalism, a species of functionalism, posits that a mental state like pain is realized by a ‘core’ computational state within a particular causal network of such states. This entails that what is realized by the core state is contingent on events remote in space and time, which puts computationalism at odds with the locality principle of physics. If computationalism is amended to respect locality, then it posits that a type of phenomenal experience is determined by a single type of computational state. But a computational state, considered by itself, is of no determinate type: it has no particular symbolic content, since it could be embedded in any of an infinite number of algorithms. Hence, if locality is respected, then the type of experience that is realized by a computational state, or whether any experience at all is realized, is under-determined by the computational nature of the state. Accordingly, Block’s absent and inverted qualia arguments against functionalism find support in the locality principle of physics. If computationalism denies locality to avoid this problem, then it cannot be considered a physicalist theory since it would entail a commitment to phenomena, like teleological causation and action-at-a-distance, that have long been rejected by modern science. The remaining theoretical alternative is to accept the locality principle for macro events and deny that formal, computational operations are sufficient to realize a phenomenal mental state.
The assumption that psychological states and processes are computational in character pervades much of cognitive science, what many call the computational theory of mind. In addition to occupying a central place in cognitive science, the computational theory of mind has also had a second life supporting “individualism”, the view that psychological states should be taxonomized so as to supervene only on the intrinsic, physical properties of individuals. One response to individualism has been to raise the prospect of “wide computational systems”, in which some computational units are instantiated outside the individual. “Wide computationalism” attempts to sever the link between individualism and computational psychology by enlarging the concept of computation. However, in spite of its potential interest to cognitive science, wide computationalism has received little attention in philosophy of mind and cognitive science. This paper aims to revisit the prospect of wide computationalism. It is argued that by appropriating a mechanistic conception of computation wide computationalism can overcome several issues that plague initial formulations. The aim is to show that cognitive science has overlooked an important and viable option in computational psychology. The paper marshals empirical support and responds to possible objections.
Computationalist theories of mind require brain symbols, that is, neural events that represent kinds or instances of kinds. Standard models of computation require multiple inscriptions of symbols with the same representational content. The satisfaction of two conditions makes it easy to see how this requirement is met in computers, but we have no reason to think that these conditions are satisfied in the brain. Thus, if we wish to give computationalist explanations of human cognition, without committing ourselves a priori to a strong and unsupported claim in neuroscience, we must first either explain how we can provide multiple brain symbols with the same content, or explain how we can abandon standard models of computation. It is argued that both of these alternatives require us to explain the execution of complex tasks that have a cognition-like structure. Circularity or regress are thus threatened, unless noncomputationalist principles can provide the required explanations. But in the latter case, we do not know that noncomputationalist principles might not bear most of the weight of explaining cognition. Four possible types of computationalist theory are discussed; none appears to provide a promising solution to the problem. Thus, despite known difficulties in noncomputationalist investigations, we have every reason to pursue the search for noncomputationalist principles in cognitive theory.
This article focuses on issues related to improving an argument about minds and machines given by Kurt Gödel in 1951, in a prominent lecture. Roughly, Gödel’s argument supported the conjecture that either the human mind is not algorithmic, or there is a particular arithmetical truth impossible for the human mind to master, or both. A well-known weakness in his argument is crucial reliance on the assumption that, if the deductive capability of the human mind is equivalent to that of a formal system, then that system must be consistent. Such a consistency assumption is a strong infallibility assumption about human reasoning, since a formal system having even the slightest inconsistency allows deduction of all statements expressible within the formal system, including all falsehoods expressible within the system. We investigate how that weakness and some of the other problematic aspects of Gödel’s argument can be eliminated or reduced.
A working hypothesis of computationalism is that Mind arises, not from the intrinsic nature of the causal properties of particular forms of matter, but from the organization of matter. If this hypothesis is correct, then a wide range of physical systems (e.g. optical, chemical, various hybrids, etc.) should support Mind, especially computers, since they have the capability to create/manipulate organizations of bits of arbitrary complexity and dynamics. In any particular computer, these bit patterns are quite physical, but their particular physicality is considered irrelevant (since they could be replaced by other physical substrata).
We analyse Hutto & Myin's three arguments against computationalism [Hutto, D., E. Myin, A. Peeters, and F. Zahnoun. Forthcoming. “The Cognitive Basis of Computation: Putting Computation In Its Place.” In The Routledge Handbook of the Computational Mind, edited by M. Sprevak, and M. Colombo. London: Routledge; Hutto, D., and E. Myin. 2012. Radicalizing Enactivism: Basic Minds Without Content. Cambridge, MA: MIT Press; Hutto, D., and E. Myin. 2017. Evolving Enactivism: Basic Minds Meet Content. Cambridge, MA: MIT Press]. The Hard Problem of Content targets computationalism that relies on a semantic notion of computation, claiming that it cannot account for the natural origins of content. The Intentionality Problem is targeted against computationalism using non-semantic accounts of computation, arguing that it fails in explaining intentionality. A third problem claims that causal interaction between concrete physical processes and abstract computational properties is problematic. We argue that these a...
In this paper I discuss Searle's claim that the computational properties of a system could never cause a system to be conscious. In the first section of the paper I argue that Searle is correct that, even if a system both behaves in a way that is characteristic of conscious agents (like ourselves) and has a computational structure similar to those agents, one cannot be certain that that system is conscious. On the other hand, I suggest that Searle's intuition that it is “empirically absurd” that such a system could be conscious is unfounded. In the second section I show that Searle's attempt to show that a system's computational states could not possibly cause it to be conscious is based upon an erroneous distinction between computational and physical properties. On the basis of these two arguments, I conclude that, supposing that the behavior of conscious agents can be explained in terms of their computational properties, we have good reason to suppose that a system having computational properties similar to such agents is also conscious.
Open peer commentary on the article “Info-computational Constructivism and Cognition” by Gordana Dodig-Crnkovic. Upshot: The limitations of materialism for studying cognition have motivated alternative epistemologies based on information and computation. I argue that these alternatives are also inherently limited and that these limits can only be overcome by considering materialism, info-computationalism, and cognition at the same time.
Wittgenstein’s views invite a modest, functionalist account of mental states and regularities, or more specifically a causal/computational, representational theory of the mind (CRTT). It is only by understanding Wittgenstein’s remarks in the context of a theory like CRTT that his insights have any real force; and it is only by recognizing those insights that CRTT can begin to account for sensations and our thoughts about them. For instance, Wittgenstein’s (in)famous remark that “an inner process stands in need of outward criteria” (PI: §580), so implausible read behaviorally, is entirely plausible if the “outward” is allowed to include computational facts about our brains. But what is especially penetrating about Wittgenstein’s discussion is his unique diagnosis of our puzzlement in this area, in particular, his suggestion that it is due to our captivation by “pictures” whose application to reality is left crucially under-specified. What sustains the naive picture is not a captivation by language but, at least in part, our largely involuntary reactions to things that look and act like our conspecifics. We project a property into them correlative to that reaction in ourselves, and are, indeed, unwilling to project it into things that do not induce that reaction.
This article focuses on the methodological basis for criticism of computationalism and the “computer metaphor” in the philosophy of the cognitive sciences. We suggest that the computational paradigm is a direct consequence of the theoretical confusion of phenomenal and cognitive kinds of experience. Cognitive processes, considered as forms open to computational description, are available for computer modelling, which underwrites the strong position of the computer metaphor in neuroscience. In our opinion the key problem is the vague ontological nature of the symbols that form the computational operations in cognitive procedures. Despite the successful development of neuroscience, it is still impossible to explain the meaning of the content of mental states. The article provides a detailed analysis of the critical approaches to computational models of consciousness. Special attention is given to comparing data integration in artificial intelligent systems with semantic aspects of phenomenal consciousness. In the first case, the foundations of output are hierarchies of classes, rule protocols, and applied heuristics and strategies; in the second, knowledge is formed by qualia, metaphorical conceptualization and the pragmatic level of communication. Natural principles of knowledge formation are unachievable for machine intellectual procedures.
Due to his significant role in the development of computer technology and the discipline of artificial intelligence, Alan Turing supposedly subscribed to the theory of mind that was greatly inspired by the power of the said technology and has eventually become the dominant framework for current research in artificial intelligence and cognitive science, namely, computationalism or the computational theory of mind. In this essay, I challenge this supposition. In particular, I try to show that there is no evidence in Turing’s two seminal works that supports such a supposition. His 1936 paper is all about the notion of computation or computability as it applies to mathematical functions, not about the nature or workings of intelligence. His 1950 work, on the other hand, while about intelligence, is particularly concerned with the problem of whether intelligence can be attributed to computing machines, not with whether computationality can be attributed to human intelligence or to intelligence in general.
The following paper presents a characterization of three distinctions fundamental to computationalism, viz., the distinction between analog and digital machines, representation and nonrepresentation-using systems, and direct and indirect perceptual processes. Each distinction is shown to rest on nothing more than the methodological principles which justify the explanatory framework of the special sciences.
Both the Cybersemiotics and Info-computationalist research programmes represent attempts to unify the understanding of information, knowledge and communication. The first takes into account phenomenological aspects of signification, insisting on human experience “from within”. The second adopts solely the view “from the outside”, based on scientific practice, with an observing agent generating inter-subjective knowledge in a research community. The process of knowledge production, embodied in networks of cognizing agents interacting with the environment and developing through evolution, is studied at different levels of abstraction in both frames of reference. In order to develop scientifically tractable models of the evolution of intelligence in informational structures, from pre-biotic/chemical to living networked intelligent organisms, including the implementation of those models in artificial agents, a basic-level language of Info-Computationalism has been shown to be suitable. There are, however, contexts in which we deal with complex informational structures essentially dependent on human first-person knowledge, where a high-level language such as Cybersemiotics is the appropriate tool for conceptualization and communication. Two research projects are presented in order to exemplify the interplay of info-computational and higher-order approaches: the Blue Brain Project, where the brain is modeled as an info-computational system, a simulation in silico of biological brain function, and Biosemiotics research on genes, information, and semiosis, in which the process of semiosis is understood in info-computational terms. The article analyzes differences and convergences of the Cybersemiotics and Info-computationalist approaches which, by placing focus on distinct levels of organization, help elucidate processes of knowledge production in intelligent agents.