According to some philosophers, computational explanation is proprietary to psychology—it does not belong in neuroscience. But neuroscientists routinely offer computational explanations of cognitive phenomena. In fact, computational explanation was initially imported from computability theory into the science of mind by neuroscientists, who justified this move on neurophysiological grounds. Establishing the legitimacy and importance of computational explanation in neuroscience is one thing; shedding light on it is another. I raise some philosophical questions pertaining to computational explanation and outline some promising answers that are being developed by a number of authors.
In this chapter, I argue that some aspects of cognitive phenomena cannot be explained computationally. In the first part, I sketch a mechanistic account of computational explanation that spans multiple levels of organization of cognitive systems. In the second part, I turn my attention to what cannot be explained about cognitive systems in this way. I argue that information-processing mechanisms are indispensable in explanations of cognitive phenomena, and this vindicates the computational explanation of cognition. At the same time, it has to be supplemented with other explanations to make the mechanistic explanation complete, and that naturally leads to explanatory pluralism in cognitive science. The price to pay for pluralism, however, is the abandonment of the traditional autonomy thesis, which asserts that cognition is independent of implementation details.
The central aim of this paper is to shed light on the nature of explanation in computational neuroscience. I argue that computational models in this domain possess explanatory force to the extent that they describe the mechanisms responsible for producing a given phenomenon—paralleling how other mechanistic models explain. Conceiving computational explanation as a species of mechanistic explanation affords an important distinction between computational models that play genuine explanatory roles and those that merely provide accurate descriptions or predictions of phenomena. It also serves to clarify the pattern of model refinement and elaboration undertaken by computational neuroscientists.
In a recent paper, Kaplan (Synthese 183:339–373, 2011) takes up the task of extending Craver’s (Explaining the brain, 2007) mechanistic account of explanation in neuroscience to the new territory of computational neuroscience. He presents the model-to-mechanism mapping (3M) criterion as a condition for a model’s explanatory adequacy. This mechanistic approach is intended to replace earlier accounts which posited a level of computational analysis conceived as distinct and autonomous from underlying mechanistic details. In this paper I discuss work in computational neuroscience that creates difficulties for the mechanist project. Carandini and Heeger (Nat Rev Neurosci 13:51–62, 2012) propose that many neural response properties can be understood in terms of canonical neural computations. These are “standard computational modules that apply the same fundamental operations in a variety of contexts.” Importantly, these computations can have numerous biophysical realisations, and so straightforward examination of the mechanisms underlying these computations carries little explanatory weight. Through a comparison between this modelling approach and minimal models in other branches of science, I argue that computational neuroscience frequently employs a distinct explanatory style, namely, efficient coding explanation. Such explanations cannot be assimilated into the mechanistic framework but do bear interesting similarities with evolutionary and optimality explanations elsewhere in biology.
According to the computational theory of mind (CTM), mental capacities are explained by inner computations, which in biological organisms are realized in the brain. Computational explanation is so popular and entrenched that it’s common for scientists and philosophers to assume CTM without argument.
According to pancomputationalism, everything is a computing system. In this paper, I distinguish between different varieties of pancomputationalism. I find that although some varieties are more plausible than others, only the strongest variety is relevant to the philosophy of mind, but only the most trivial varieties are true. As a side effect of this exercise, I offer a clarified distinction between computational modelling and computational explanation.
According to Marr's theory of vision, computational processes of early vision rely for their success on certain "natural constraints" in the physical environment. I examine the implications of this feature of Marr's theory for the question whether psychological states supervene on neural states. It is reasonable to hold that Marr's theory is nonindividualistic in that, given the role of natural constraints, distinct computational theories of the same neural processes may be justified in different environments. But to avoid trivializing computational explanations, theories must respect methodological solipsism in the sense that within a theory there cannot be differences in content without a corresponding difference in neural states.
I explore a type of computational social simulation known as artificial societies. Artificial society simulations are dynamic models of real-world social phenomena. I explore the role that these simulations play in social explanation, by situating these simulations within contemporary philosophical work on explanation and on models. Many contemporary philosophers have argued that models provide causal explanations in science, and that models are necessary mediators between theory and data. I argue that artificial society simulations provide causal mechanistic explanations. I conclude that in their current form, these simulations are based on methodologically individualist assumptions that could limit their potential scope of social explanation.
We compared the processing of natural language quantifiers in a group of patients with schizophrenia and a healthy control group. In both groups, the difficulty of the quantifiers was consistent with computational predictions, and patients with schizophrenia took more time to solve the problems. However, they were significantly less accurate only with proportional quantifiers, like 'more than half'. This can be explained by noting that, according to the complexity perspective, only proportional quantifiers require working memory engagement.
Two widely accepted assumptions within cognitive science are that (1) the goal is to understand the mechanisms responsible for cognitive performances and (2) computational modeling is a major tool for understanding these mechanisms. The particular approaches to computational modeling adopted in cognitive science, moreover, have significantly affected the way in which cognitive mechanisms are understood. Unable to employ some of the more common methods for conducting research on mechanisms, cognitive scientists’ guiding ideas about mechanism have developed in conjunction with their styles of modeling. In particular, mental operations often are conceptualized as comparable to the processes employed in classical symbolic AI or neural network models. These models, in turn, have been interpreted by some as themselves intelligent systems since they employ the same type of operations as does the mind. For this paper, what is significant about these approaches to modeling is that they are constructed specifically to account for behavior and are evaluated by how well they do so—not by independent evidence that they describe actual operations in mental mechanisms.
My purpose in this essay is to clarify the notion of explanation by computer simulation in artificial intelligence and cognitive science. My contention is that computer simulation may be understood as providing two different kinds of explanation, which makes the notion of explanation by computer simulation ambiguous. In order to show this, I shall draw a distinction between two possible ways of understanding the notion of simulation, depending on how one views the relation in which a computing system that performs a cognitive task stands to the program that the system runs while performing that task. Next, I shall suggest that the kind of explanation that results from simulation is radically different in each case. In order to illustrate the difference, I will point out some prima facie methodological difficulties that need to be addressed in order to ensure that simulation plays a legitimate explanatory role in cognitive science, and I shall emphasize how those difficulties are very different depending on the notion of explanation involved.
In this article, after presenting the basic idea of causal accounts of implementation and the problems they are supposed to solve, I sketch the model of computation preferred by Chalmers and argue that it is too limited to do full justice to computational theories in cognitive science. I also argue that it does not suffice to replace Chalmers’ favorite model with a better abstract model of computation; it is necessary to acknowledge the causal structure of physical computers that is not accommodated by the models used in computability theory. Additionally, an alternative mechanistic proposal is outlined.
Computational modeling plays an increasingly important explanatory role in cases where we investigate systems or problems that exceed our native epistemic capacities. One clear case where technological enhancement is indispensable involves the study of complex systems. However, even in contexts where the number of parameters and interactions that define a problem is small, simple systems sometimes exhibit non-linear features which computational models can illustrate and track. In recent decades, computational models have been proposed as a way to assist us in understanding emergent phenomena.
David Marr's theory of vision has been a rich source of inspiration, fascination and confusion. I will suggest that some of this confusion can be traced to discrepancies between the way Marr developed his theory in practice and the way he suggested such a theory ought to be developed in his explicit metatheoretical remarks. I will address claims that Marr's theory may be seen as an optimizing theory, along with the attendant suggestion that optimizing assumptions may be inappropriate for cognitive mechanisms just as anti-adaptationists have argued they are inappropriate for other physiological mechanisms. I will discuss the nature of optimizing assumptions and theories. Considering various difficulties in identifying and assessing optimizing assumptions, I will suggest that Marr's theory is not purely an optimizing theory and that reaction to Marr on this issue prompts interesting considerations for the development of inter-disciplinary constraints in the cognitive and brain sciences.
Although noting the importance of organization in mechanisms, the new mechanistic philosophers of science have followed most biologists in focusing primarily on only the simplest mode of organization, in which operations are envisaged as occurring sequentially. Increasingly, though, biologists are recognizing that the mechanisms they confront are non-sequential and the operations nonlinear. To understand how such mechanisms function through time, they are turning to computational models and tools of dynamical systems theory. Recent research on circadian rhythms addressing both intracellular mechanisms and the intercellular networks in which these mechanisms are synchronized illuminates this point. This and other recent research in biology shows that the new mechanistic philosophers of science must expand their account of mechanistic explanation to incorporate computational modeling, yielding dynamical mechanistic explanations. Developing such explanations, however, is a challenge for both the scientists and the philosophers as there are serious tensions between mechanistic and dynamical approaches to science, and there are important opportunities for philosophers of science to contribute to surmounting these tensions. (William Bechtel, European Journal for Philosophy of Science, DOI 10.1007/s13194-012-0046-x.)
In the book, I argue that the mind can be explained computationally because it is itself computational—whether it engages in mental arithmetic, parses natural language, or processes the auditory signals that allow us to experience music. All these capacities arise from complex information-processing operations of the mind. By analyzing the state of the art in cognitive science, I develop an account of computational explanation used to explain the capacities in question.
In this paper, I explore the implications of Fodor’s attacks on the Computational Theory of Mind (CTM), which get their most recent airing in The Mind Doesn’t Work That Way. I argue that if Fodor is right that the CTM founders on the global nature of abductive inference, then several of the philosophical views about the mind that he has championed over the years founder as well. I focus on Fodor’s accounts of mental causation, psychological explanation, and intentionality.
Many cognitive scientists, having discovered that some computational-level characterization f of a cognitive capacity φ is intractable, invoke heuristics as algorithmic-level explanations of how cognizers compute f. We argue that such explanations are actually dysfunctional, and rebut five possible objections. We then propose computational-level theory revision as a principled and workable alternative.
Explaining the complex dynamics exhibited in many biological mechanisms requires extending the recent philosophical treatment of mechanisms that emphasizes sequences of operations. To understand how nonsequentially organized mechanisms will behave, scientists often advance what we call dynamic mechanistic explanations. These begin with a decomposition of the mechanism into component parts and operations, using a variety of laboratory-based strategies. Crucially, the mechanism is then recomposed by means of computational models in which variables or terms in differential equations correspond to properties of its parts and operations. We provide two illustrations drawn from research on circadian rhythms. Once biologists identified some of the components of the molecular mechanism thought to be responsible for circadian rhythms, computational models were used to determine whether the proposed mechanisms could generate sustained oscillations. Modeling has become even more important as researchers have recognized that the oscillations generated in individual neurons are synchronized within networks; we describe models being employed to assess how different possible network architectures could produce the observed synchronized activity.
Which notion of computation (if any) is essential for explaining cognition? Five answers to this question are discussed in the paper. (1) The classicist answer: symbolic (digital) computation is required for explaining cognition; (2) The broad digital computationalist answer: digital computation broadly construed is required for explaining cognition; (3) The connectionist answer: sub-symbolic computation is required for explaining cognition; (4) The computational neuroscientist answer: neural computation (that, strictly, is neither digital nor analogue) is required for explaining cognition; (5) The extreme dynamicist answer: computation is not required for explaining cognition. The first four answers are only accurate to a first approximation. But the “devil” is in the details. The last answer cashes in on the parenthetical “if any” in the question above. The classicist argues that cognition is symbolic computation. But digital computationalism need not be equated with classicism. Indeed, computationalism can, in principle, range from digital (and analogue) computationalism through (the weaker thesis of) generic computationalism to (the even weaker thesis of) digital (or analogue) pancomputationalism. Connectionism, which has traditionally been criticised by classicists for being non-computational, can be plausibly construed as being either analogue or digital computationalism (depending on the type of connectionist networks used). Computational neuroscience invokes the notion of neural computation that may (possibly) be interpreted as a sui generis type of computation. The extreme dynamicist argues that the time has come for a post-computational cognitive science. This paper is an attempt to shed some light on this debate by examining various conceptions and misconceptions of (particularly digital) computation.
A common kind of explanation in cognitive neuroscience might be called function-theoretic: with some target cognitive capacity in view, the theorist hypothesizes that the system computes a well-defined function (in the mathematical sense) and explains how computing this function contributes to the exercise of the cognitive capacity. Recently, proponents of the so-called ‘new mechanist’ approach in philosophy of science have argued that a model of a cognitive capacity is explanatory only to the extent that it reveals the causal structure of the mechanism underlying the capacity. If they are right, then a cognitive model that resists a transparent mapping to known neural mechanisms fails to be explanatory. I argue that a function-theoretic characterization of a cognitive capacity can be genuinely explanatory even absent an account of how the capacity is realized in neural hardware.
The received view of dynamical explanation is that dynamical cognitive science seeks to provide covering law explanations of cognitive phenomena. By analyzing three prominent examples of dynamicist research, I show that the received view is misleading: some dynamical explanations are mechanistic explanations, and in this way resemble computational and connectionist explanations. Interestingly, these dynamical explanations invoke the mathematical framework of dynamical systems theory to describe mechanisms far more complex and distributed than the ones typically considered by philosophers. Therefore, contemporary dynamicist research reveals the need for a more sophisticated account of mechanistic explanation.
There is general agreement that from the first few months of life, our apprehension of physical objects accords, in some sense, with certain principles. In one philosopher's locution, we are 'perceptually sensitive' to physical principles describing the behavior of objects. But in what does this accordance or sensitivity consist? Are these principles explicitly represented or merely 'implemented'? And what sort of explanation do we accomplish in claiming that our object perception accords with these principles? My main goal here is to suggest answers to these questions. I argue that the object principles are not explicitly represented, first addressing some confusion in the debate about what that means. On the positive side, I conclude that the principles supply a competence account, at Marr's computational level, and that they function like natural constraints in vision. These are among their considerable explanatory benefits - benefits endowed by rules and principles in other cognitive domains as well. Characterizing the explanatory role of the object principles is my main project here, but in pursuing certain sub-goals I am led to other conclusions of interest in their own right. I address an argument that the object principles are explicitly represented which assumes that object perception is substantially thought-like. This provokes a jaunt off the main path which leads to interesting territory: the boundary between thought and perception. I argue that object apprehension is much closer to perception than to thought on the spectrum between the two.
This paper challenges arguments that systematic patterns of intelligent behavior license the claim that representations must play a role in the cognitive system analogous to that played by syntactical structures in a computer program. In place of traditional computational models, I argue that research inspired by dynamical systems theory can support an alternative view of representations. My suggestion is that we treat linguistic and representational structures as providing complex multi-dimensional targets for the development of individual brains. This approach acknowledges the indispensability of the intentional or representational idiom in psychological explanation without locating representations in the brains of intelligent agents.
In the study of cognitive processes, limitations on computational resources (computing time and memory space) are usually considered to be beyond the scope of a theory of competence, and to be exclusively relevant to the study of performance. Starting from considerations derived from the theory of computational complexity, in this paper I argue that there are good reasons for claiming that some aspects of resource limitations pertain to the domain of a theory of competence.
Computational philosophy (CP) aims at investigating many important concepts and problems of the philosophical and epistemological tradition in a new way by taking advantage of information-theoretic, cognitive, and artificial intelligence methodologies. I maintain that the results of computational philosophy meet the classical requirements of some Peircian pragmatic ambitions. Indeed, more than a hundred years ago, the American philosopher C.S. Peirce, when working on logical and philosophical problems, suggested the concept of pragmatism (pragmaticism, in his own words) as a logical criterion to analyze what words and concepts express through their practical meaning. Many words have been spent on creative processes and reasoning, especially in the case of scientific practices. In fact, many philosophers have offered a number of ways of construing hypothesis generation, but they aim at demonstrating that the activity of generating hypotheses is paradoxical, obscure, and thus not analyzable. Those descriptions are often so far from the Peircian pragmatic prescription, and so abstract, as to be completely unknowable and obscure. To dismiss this tendency and gain interesting insight about the so-called logic of scientific discovery, we need to build constructive procedures, which could play a role in moving the problem-solving process forward by implementing them in some actual models. The computational turn gives us a new way to understand creative processes in a strictly pragmatic sense. In fact, by exploiting artificial intelligence and cognitive science tools, computational philosophy allows us to test concepts and ideas previously conceived only in abstract terms. It is in the perspective of these actual computational models that I find the central role of abduction in the explanation of creative reasoning in science.
I maintain that the computational philosophy analysis of model-based and manipulative abduction and of external and epistemic mediators is important not only to delineate the actual practice of abduction, but also to further enhance the development of programs computationally adequate to rediscover, or discover for the first time, for example, scientific hypotheses or mathematical theorems. The last part of the paper is devoted to illustrating the problem of the extra-theoretical dimension of reasoning and discovery from the perspective of some mathematical cases derived from calculus and geometry.
I examine one of the conceptual cornerstones of the field known as computational neuroscience, especially as articulated in Churchland et al. (1990), an article that is arguably the locus classicus of this term and its meaning. The authors of that article try, but I claim ultimately fail, to mark off the enterprise of computational neuroscience as an interdisciplinary approach to understanding the cognitive, information-processing functions of the brain. The failure is a result of the fact that the authors provide no principled means to distinguish the study of neural systems as genuinely computational/information-processing from the study of any complex causal process. I then argue for two things. First, that in order to appropriately mark off computational neuroscience, one must be able to assign a semantics to the states over which an attempt to provide a computational explanation is made. Second, I show that neither of the two most popular ways of trying to effect such content assignation -- informational semantics and 'biosemantics' -- can make the required distinction, at least not in a way that a computational neuroscientist should be happy about. The moral of the story as I take it is not a negative one to the effect that computational neuroscience is in principle incapable of doing what it wants to do. Rather, it is to point out some work that remains to be done.
It is often thought that the computational paradigm provides a supporting case for the theoretical autonomy of the science of mind. However, I argue that computation is in fact incompatible with this alleged aspect of intentional explanation, and hence the foundational assumptions of orthodox cognitive science are mutually unstable. The most plausible way to relieve these foundational tensions is to relinquish the idea that the psychological level enjoys some special form of theoretical sovereignty. So, in contrast to well known antireductionist views based on multiple realizability, I argue that the primary goal of a computational approach to the mind should be to facilitate a translation of the psychological to the neurophysiological.
In this essay I defend a theory of psychological explanation that is based on the joint commitment to direct reference and computationalism. I offer a new solution to the problem of Frege Cases. Frege Cases involve agents who are unaware that certain expressions corefer (e.g. that 'Cicero' and 'Tully' corefer), where such knowledge is relevant to the success of their behavior, leading to cases in which the agents fail to behave as the intentional laws predict. It is generally agreed that Frege Cases are a major problem, if not the major problem, that this sort of theory faces. In this essay, I hope to show that the theory can surmount the Frege Cases.
Computation is central to the foundations of modern cognitive science, but its role is controversial. Questions about computation abound: What is it for a physical system to implement a computation? Is computation sufficient for thought? What is the role of computation in a theory of cognition? What is the relation between different sorts of computational theory, such as connectionism and symbolic computation? In this paper I develop a systematic framework that addresses all of these questions. Justifying the role of computation requires analysis of implementation, the nexus between abstract computations and concrete physical systems. I give such an analysis, based on the idea that a system implements a computation if the causal structure of the system mirrors the formal structure of the computation. This account can be used to justify the central commitments of artificial intelligence and computational cognitive science: the thesis of computational sufficiency, which holds that the right kind of computational structure suffices for the possession of a mind, and the thesis of computational explanation, which holds that computation provides a general framework for the explanation of cognitive processes. The theses are consequences of the facts that (a) computation can specify general patterns of causal organization, and (b) mentality is an organizational invariant, rooted in such patterns. Along the way I answer various challenges to the computationalist position, such as those put forward by Searle. I close by advocating a kind of minimal computationalism, compatible with a very wide variety of empirical approaches to the mind. This allows computation to serve as a true foundation for cognitive science.
The purpose of this paper is to set forth a sense in which programs can and do explain behavior, and to distinguish from this a number of senses in which they do not. Once we are tolerably clear concerning the sort of explanatory strategy being employed, two rather interesting facts emerge: (1) though it is true that programs are "internally represented," this fact has no explanatory interest beyond the mere fact that the program is executed; (2) programs which are couched in information processing terms may have an explanatory interest for a given range of behavior which is independent of physiological explanations of the same range of behavior.
The commentators expressed concerns regarding the relevance and value of non-computational non-symbolic explanations of cognitive performance. But what counts as an “explanation” depends on the pre-theoretical assumptions behind the scenes of empirical science regarding the kinds of variables and relationships that are sought out in the first place, and some of the present disagreements stem from incommensurate assumptions. Traditional cognitive science presumes cognition to be a decomposable system of components interacting according to computational rules to generate cognitive performances (i.e., component-dominant dynamics). We assign primacy to interaction-dominant dynamics among components. Though either choice can be a good guess before the fact, the primacy of interactions is now supported by much recent empirical work in cognitive science. Consequently, in the main, the commentators have failed so far to address the growing evidence corroborating the theory-driven predictions of complexity science.
The opposition between behaviour- and mind-reading accounts of data on infants and non-human primates could be less dramatic than has been thought up to now. In this paper, I argue for this thesis by analysing a possible neuro-computational explanation of early mind-reading, based on a mechanism of associative generalization which is apt to implement the notion of mental states as intervening variables proposed by Andrew Whiten. This account allows us to capture important continuities between behaviour-reading and mind-reading, insofar as both are supposed to be just different kinds of generalization from perceptual experience. Specifically, I will argue that the projection of inner experiences to others which is involved in early mind-reading does not imply a computational leap beyond associative generalization from perceptual experience.
Over the past several decades, the philosophical community has witnessed the emergence of an important new paradigm for understanding the mind. The paradigm is that of machine computation, and its influence has been felt not only in philosophy, but also in all of the empirical disciplines devoted to the study of cognition. Of the several strategies for applying the resources provided by computer and cognitive science to the philosophy of mind, the one that has gained the most attention from philosophers has been the Computational Theory of Mind (CTM). CTM was first articulated by Hilary Putnam (1960, 1961), but finds perhaps its most consistent and enduring advocate in Jerry Fodor (1975, 1980, 1981, 1987, 1990, 1994). It is this theory, and not any broader interpretations of what it would be for the mind to be a computer, that I wish to address in this paper. What I shall argue here is that the notion of symbolic representation employed by CTM is fundamentally unsuited to providing an explanation of the intentionality of mental states (a major goal of CTM), and that this result undercuts a second major goal of CTM, sometimes referred to as the vindication of intentional psychology. This line of argument is related to the discussions of derived intentionality by Searle (1980, 1983, 1984) and Sayre (1986, 1987). But whereas those discussions seem to be concerned with the causal dependence of familiar sorts of symbolic representation upon meaning-bestowing acts, my claim is rather that there is not one but several notions of meaning to be had, and that the notions that are applicable to symbols are conceptually dependent upon the notion that is applicable to mental states in the fashion that Aristotle referred to as paronymy.
That is, an analysis of the notions of meaning applicable to symbols reveals that they contain presuppositions about meaningful mental states, much as Aristotle's analysis of the sense of 'healthy' that is applied to foods reveals that it means 'conducive to having a healthy body', and hence any attempt to explain mental semantics in terms of the semantics of symbols is doomed to circularity and regress. I shall argue, however, that this does not have the consequence that computationalism is bankrupt as a paradigm for cognitive science, as it is possible to reconstruct CTM in a fashion that avoids these difficulties and makes it a viable research framework for psychology, albeit at the cost of losing its claims to explain intentionality and to vindicate intentional psychology. I have argued elsewhere (Horst, 1996) that local special sciences such as psychology do not require vindication in the form of demonstrating their reducibility to more fundamental theories, and hence failure to make good on these philosophical promises need not compromise the broad range of work in empirical cognitive science motivated by the computer paradigm in ways that do not depend on these problematic treatments of symbols.
Of the many and varied applications of quantum information theory, perhaps the most fascinating is the sub-field of quantum computation. In this sub-field, computational algorithms are designed which utilise the resources available in quantum systems in order to compute solutions to computational problems with, in some cases, exponentially fewer resources than any known classical algorithm. While the fact of quantum computational speedup is almost beyond doubt, the source of quantum speedup is still a matter of debate. In this paper I argue that entanglement is a necessary component for any explanation of quantum speedup and I address some purported counter-examples that some claim show that the contrary is true. In particular, I address Biham et al.'s mixed-state version of the Deutsch-Jozsa algorithm, and Knill and Laflamme's deterministic quantum computation with one qubit (DQC1) model of quantum computation. I argue that these examples do not demonstrate that entanglement is unnecessary for the explanation of quantum speedup, but that they rather illuminate and clarify the role that entanglement does play.
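The Deutsch-Jozsa algorithm mentioned in this abstract can be illustrated in its simplest, single-input form (Deutsch's algorithm), which decides with one oracle query whether a function f: {0,1} → {0,1} is constant or balanced. The sketch below is a minimal pure-state statevector simulation written for illustration; it is not the mixed-state variant from Biham et al. that the paper discusses, and all function and variable names are my own.

```python
from math import sqrt

# Hadamard gate as a 2x2 real matrix.
H = [[1 / sqrt(2), 1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]

def apply_h(state, qubit):
    """Apply H to one qubit of a 2-qubit state (4 amplitudes).

    Qubit 0 is the most significant bit of the basis index.
    """
    new = [0.0] * 4
    for idx in range(4):
        bits = [(idx >> 1) & 1, idx & 1]
        for b in (0, 1):
            src = bits[:]
            src[qubit] = b
            new[idx] += H[bits[qubit]][b] * state[(src[0] << 1) | src[1]]
    return new

def oracle(state, f):
    """The standard oracle U_f: |x, y> -> |x, y XOR f(x)>."""
    new = [0.0] * 4
    for idx in range(4):
        x, y = (idx >> 1) & 1, idx & 1
        new[(x << 1) | (y ^ f(x))] += state[idx]
    return new

def deutsch(f):
    """One oracle query decides whether f is constant or balanced."""
    state = [0.0, 1.0, 0.0, 0.0]       # start in |0>|1>
    state = apply_h(state, 0)
    state = apply_h(state, 1)
    state = oracle(state, f)            # phase kickback happens here
    state = apply_h(state, 0)
    # All amplitudes stay real in this circuit, so squares suffice.
    p0 = state[0] ** 2 + state[1] ** 2  # prob. qubit 0 measures 0
    return "constant" if p0 > 0.5 else "balanced"

print(deutsch(lambda x: 0))  # -> constant
print(deutsch(lambda x: x))  # -> balanced
```

A classical algorithm must query f twice to settle the question; the quantum circuit needs one query, which is the simplest instance of the speedup whose source the paper debates.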
Baker (2005) claims to provide an example of mathematical explanation of an empirical phenomenon which leads to ontological commitment to mathematical objects. This is meant to show that the positing of mathematical entities is necessary for satisfactory scientific explanations and thus that the application of mathematics to science can be used, at least in some cases, to support mathematical realism. In this paper I show that the example of explanation Baker considers can actually be given without postulating mathematical objects and thus cannot be used by the mathematical realist. I also show that, despite this, mathematics keeps playing an important methodological role in the explanation and does not reduce to a merely computational or descriptive framework.
Connectionist models of cognition are all the rage these days. They are said to provide better explanations than traditional symbolic computational models in a wide array of cognitive areas, from perception to memory to language to reasoning to motor action. But what does it actually mean to say that they "explain" cognition at all? In what sense do the dozens of nodes and hundreds of connections in a typical connectionist network explain anything? It is the purpose of this paper to explore this question in light of traditional accounts of what it is to be an explanation. We start with an impossibly brief review of some historically important theories of explanation. We then discuss several currently popular approaches to the question of how connectionist models explain cognition. Third, we describe a theory of causation by philosopher Stephen Yablo that solves some of the problems on which we think many accounts of connectionist explanation founder. Finally, we apply Yablo's theory to these accounts, and show how several important issues surrounding them seem to disappear into thin air in its presence.
In the form of inference known as inference to the best explanation there are various ways to characterise what is meant by the best explanation. This paper considers a number of such characterisations including several based on confirmation measures and several based on coherence measures. The goal is to find a measure which adequately captures what is meant by 'best' and which also yields the truth with a high degree of probability. Computer simulations are used to show that the overlap coherence measure achieves this goal, enabling the true explanation to be identified almost as often as an approach which simply selects the most probable explanation. Further advantages to this approach are also considered in the case where there is uncertainty in the prior probability distribution.
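The selection procedure this abstract describes can be made concrete with a toy calculation. Assuming the overlap coherence measure is defined in the standard way, C(A, B) = P(A ∧ B) / P(A ∨ B), the sketch below scores each candidate hypothesis by its overlap coherence with the evidence and picks the highest-scoring one. The hypothesis set and all probabilities are invented for illustration; this is not the paper's simulation setup.

```python
def overlap_coherence(p_a, p_b, p_joint):
    # C(A, B) = P(A & B) / P(A or B), where P(A or B) = P(A) + P(B) - P(A & B)
    return p_joint / (p_a + p_b - p_joint)

def best_explanation(hypotheses):
    """Pick the hypothesis most coherent with the evidence E.

    hypotheses: list of (name, prior P(H), likelihood P(E | H)),
    assumed mutually exclusive and jointly exhaustive.
    """
    # Total probability of the evidence under the hypothesis set.
    p_e = sum(prior * lik for _, prior, lik in hypotheses)

    def score(h):
        _, prior, lik = h
        # P(H & E) = P(H) * P(E | H)
        return overlap_coherence(prior, p_e, prior * lik)

    return max(hypotheses, key=score)[0]

# Toy example: a low-prior hypothesis that strongly predicts the evidence
# beats a high-prior hypothesis that barely predicts it.
candidates = [("H1", 0.3, 0.9), ("H2", 0.7, 0.2)]
print(best_explanation(candidates))  # -> H1
```

In this toy case the coherence ranking agrees with simply selecting the most probable posterior explanation, which mirrors the paper's finding that the two approaches identify the true explanation about equally often.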
In this paper I offer an explanation of the ineffability (linguistic inexpressibility) of sensory experiences. My explanation is put in terms of computational functionalism and standard externalist theories of representational content. As I will argue, many or most sensory experiences are representational states without constituent structure. This property determines both the representational function these states can serve and the information that can be extracted from them when they are processed. Sensory experiences can indicate the presence of certain external states of affairs but they cannot convey any more information about them than that. So, format- or code-conversion mechanisms that link different systems of representation (linguistic and perceptual) to each other will fail to extract any relevant information from sensory experiences that could be coded in language. The only way to establish specific roles for sensory experiences in communication and the organization of behavior is to attach to them, by associative links, words or other behavioral responses. If a sensory experience has no linguistic label associated with it in a particular subject, then no linguistic description can token, or activate, that state in the subject. In other words, no linguistic description can cause a subject to undergo an unlabeled perceptual state. By contrast, complex, syntactically structured perceptual states can be built up, on the basis of descriptions, by mechanisms of constructive imagination (conceived here as one sort of format conversion). It is this difference between complex and unstructured representational states that gives us an understanding of the phenomenon we call the ineffability of qualia.
According to Marr, a computational-level theory consists of two elements, the What and the Why. This article highlights the distinct role of the Why element in the computational analysis of vision. Three theses are advanced: (a) that the Why element plays an explanatory role in computational-level theories, (b) that its goal is to explain why the computed function (specified by the What element) is appropriate for a given visual task, and (c) that the explanation consists in showing that the functional relations between the representing cells are similar to the "external" mathematical relations between the entities that these cells represent. *Received September 2009; revised January 2010.