Two very different insights motivate characterizing the brain as a computer. One depends on mathematical theory that defines computability in a highly abstract sense. Here the foundational idea is that of a Turing machine. Not an actual machine, the Turing machine is really a conceptual way of making the point that any well-defined function could be executed, step by step, according to simple 'if-you-are-in-state-P-and-have-input-Q-then-do-R' rules, given enough time (maybe infinite time) [see COMPUTATION]. Insofar as the brain is a device whose input and output can be characterized in terms of some mathematical function -- however complicated -- then in that very abstract sense, it can be mimicked by a Turing machine. Given what is known so far, brains do seem to depend on cause-effect operations, and hence brains appear to be, in some formal sense, equivalent to a Turing machine [see CHURCH-TURING THESIS]. On its own, however, this reveals nothing at all about how the mind-brain actually works. The second insight depends on looking at the brain as a biological device that processes information from the environment to build complex representations that enable the brain to make predictions and select advantageous behaviors. Where necessary to avoid ambiguity, we will refer to the first notion of computation as algorithmic computation, and the second as information-processing computation.
The question of whether time is its own best representation is explored. Though there is theoretical debate between proponents of internal models and proponents of embedded cognition (e.g. Brooks, R. 1991 Artificial Intelligence 47, 139–59) concerning whether the world is its own best model, proponents of internal models are often content to let time be its own best representation. This happens via the time update of the model, which simply allows the model’s state to evolve along with the state of the modeled domain. I argue that this is neither necessary nor advisable. I show that it is not necessary by describing how internal modeling approaches can be generalized to schemes that explicitly represent time by maintaining trajectory estimates rather than state estimates. Though there are a variety of ways this could be done, I illustrate the proposal with a scheme that combines filtering, smoothing and prediction to maintain an estimate of the modeled domain’s trajectory over time. I show that letting time be its own representation is not advisable by showing how trajectory estimation schemes can provide accounts of temporal illusions, such as apparent motion, that pose serious difficulties for any scheme that lets time be its own representation.
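The scheme the abstract gestures at can be illustrated with a minimal sketch. This is my own illustration, not code from the article: a scalar Kalman filter, assuming a random-walk model (x_t = x_{t-1} + w, observation z_t = x_t + v), whose output at each step is a trajectory estimate -- a smoothed recent past, a filtered present, and a predicted future -- rather than a bare state estimate.

```python
# Sketch of maintaining a trajectory estimate by combining filtering,
# smoothing and prediction (all parameter names are illustrative).

class TrajectoryEstimator:
    def __init__(self, q=0.01, r=0.25, lag=5, horizon=3):
        self.q, self.r = q, r            # process / measurement noise variances
        self.lag, self.horizon = lag, horizon
        self.m, self.p = 0.0, 1.0        # filtered mean and variance
        self.hist = []                   # (filtered mean, filtered var, predicted var)

    def step(self, z):
        # Filtering: fold the new observation into the current state estimate.
        m_pred, p_pred = self.m, self.p + self.q
        k = p_pred / (p_pred + self.r)
        self.m = m_pred + k * (z - m_pred)
        self.p = (1 - k) * p_pred
        self.hist.append((self.m, self.p, p_pred))
        self.hist = self.hist[-self.lag:]

        # Smoothing: revise estimates of the recent past (an RTS backward pass;
        # for a random walk the predicted mean at i+1 equals the filtered mean at i).
        past = [self.hist[-1][0]]
        for i in range(len(self.hist) - 2, -1, -1):
            m_i, p_i, _ = self.hist[i]
            p_pred_next = self.hist[i + 1][2]
            g = p_i / p_pred_next
            past.insert(0, m_i + g * (past[0] - m_i))

        # Prediction: extrapolate forward (a random walk predicts a constant mean).
        future = [self.m] * self.horizon
        return past + future             # the maintained trajectory estimate
```

On this sketch, each observation updates an estimate of a stretch of the domain's trajectory, and the smoothing pass can retrospectively revise the recent past -- the kind of machinery the abstract suggests could account for temporal illusions such as apparent motion.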
Nothing is more obvious than the fact that we are able to experience events in the world, such as a ball deflecting from the cross-bar of a goal. But what is the temporal relation between these two things, the event and our experience of the event? One possibility is that the world progresses temporally through a sequence of instantaneous states – the striker’s foot in contact with the ball, then the ball between the striker and the goal, then the ball in contact with the cross-bar, and so forth – while the perceiver’s experience is likewise a sequence of experience states, each one of which corresponds to, or is experience of, a corresponding state of the world – for example, a perception of the foot in contact with the ball, followed by a perception of the ball in the air, followed by a perception of the ball in contact with the cross-bar. This way of understanding the relationship between experience and the world is very natural, and nearly universal. However, it rests on two assumptions that can be brought into question.
It is an under-appreciated fact that we have no significant understanding of the neurobiological mechanisms supporting any aspect of cognition, broadly construed. The limited understanding we do have is a combination of a multitude of enticing empirical fragments, scattered sparsely on a background of noise, and a number of vastly underdetermined theoretical frameworks. But however incomplete the answers, the questions posed by cognitive neuroscience are compelling. Indeed, it is nothing less than ourselves -- our decision making abilities, our command of language, our own consciousness -- that we are seeking to understand.
It is the aim of work in theoretical cognitive science to produce good theories of what exactly cognition amounts to, preferably theories which not only provide a framework for fruitful empirical investigation, but which also shed light on cognitive activity itself, which help us to understand our place, as cognitive agents, in a complex causally determined physical universe. The most recent such framework to gain significant fame is the so-called dynamical approach to cognition (henceforth DST, for Dynamical Systems Theory). Explaining and exploring DST is the purpose of the collection Mind as Motion: Explorations in the Dynamics of Cognition, edited by Robert Port and Timothy van Gelder.
0. Introduction The past decade has seen Cognitive Linguistics (CL) emerge as an important, exciting and promising theoretical alternative to Chomskyan approaches to the study of language. Even so, sheer numbers and institutional inertia make it the case that most current neurolinguistic research either assumes that the Chomskyan formalist story is more or less correct (and thus that the task of neurolinguistics is to determine how the brain implements GB, for instance), or that there are two possibilities, Chomskyanism or associationism/connectionism, and that the task of neurolinguistics is to discover which is really the way the brain does it. In either case, the theoretical apparatus of CL is not being explored by neurolinguistics, and hence the promise CL holds for making genuine fruitful contact with theoretical neurobiology is not materializing as quickly as one might hope. This paper is an attempt to make some initial steps at fulfilling this promise.
Michael Gareth Justin Evans was born in London on May 12th, 1946, to his parents Gwaldus and Justin Evans. He had an older brother Huw, an older sister Myfawny, and a younger sister Elaine. As a young student, Evans was both highly intelligent and careless. The final report from his form master at Granton Primary School says that "Gareth is so vigorous and impatient to get his work finished that he is subject to error. A delightful boy!".
Each of us distinguishes between himself and states of himself on the one hand, and what is not himself or a state of himself on the other. What are the conditions of our making this distinction, and how are they fulfilled? In what way do we make it, and why do we make it in the way we do?
In this paper I will outline a unified information processing framework whose goal is to explain how the nervous system represents space, time and objects. In the remainder of this introductory section I will first be more specific about the sort of spatial, temporal, and object representation at issue, and then outline the structure of this paper.
William James’ Principles of Psychology, in which he made famous the ‘specious present’ doctrine of temporal experience, and Edmund Husserl’s Zur Phänomenologie des inneren Zeitbewusstseins, were giant strides in the philosophical investigation of the temporality of experience. However, an important set of precursors to these works has not been adequately investigated. In this article, we undertake this investigation. Beginning with Reid’s essay ‘Memory’ in Essays on the Intellectual Powers of Man, we trace out a line of development of ideas about the temporality of experience that runs through Dugald Stewart, Thomas Brown, William Hamilton, and finally the work of Shadworth Hodgson and Robert Kelly, both of whom were immediate influences on James (though James pseudonymously cites the latter as ‘E.R. Clay’). Furthermore, we argue that Hodgson, especially his Metaphysic of Experience (1898), was a significant influence on Husserl.
Berkeley's Essay Towards a New Theory of Vision presents a theory of various aspects of the spatial content of visual experience that attempts to undercut not only the optico-geometric accounts of e.g., Descartes and Malebranche, but also elements of the empiricist account of Locke. My task in this paper is to shed light on some features of Berkeley's account that have not been adequately appreciated. After rehearsing a more detailed Lockean critique of the notion that depth is a proper object of vision, Berkeley directs arguments he takes to be entirely parallel against the notion that vision has two-dimensional planar contents as proper objects. I show that this argument fails due to an illicit slide unnoticed by both Berkeley and his commentators—a slide present but innocuously so in the case of depth. Berkeley's positive account, according to which the apparent spatial content of vision is a matter of associations between, on the one hand, tactile and motor contents, and on the other hand non-spatial visual contents, also fails because of an illicit slide—again, unnoticed by Berkeley and his commentators. I close by discerning the salvageable and correct core of Berkeley's theory of the spatiality of vision.
Gareth Evans’ account of Identification-freedom (IF), which he develops in Chapters 6 and 7 of The Varieties of Reference (henceforth VR) is almost universally misunderstood.1 Howell is guilty of this same misunderstanding, and as a result claims to have mounted a criticism of Evans, when in fact he has not. I will take the occasion of Howell’s otherwise insightful article to clarify Evans’ position. Note that the bulk of Howell’s analysis is targeted at the phenomenon known as immunity to error through misidentification (IEM), which is related to but not (necessarily) identical to IF. Therefore, the accuracy of Howell’s treatment of Evans in particular is tangential to the main thrust of his article. My exegesis of Evans’ account — like any non-trivial exegesis — goes somewhat beyond anything Evans overtly says. That Evans did not explicitly put the pieces together in the way I suggest they fit no doubt contributes to the widespread misunderstanding of his views. But I am.
An attempt is made to defend a general approach to the spatial content of perception, an approach according to which perception is imbued with spatial content in virtue of certain kinds of connections between a perceiving organism's sensory input and its behavioral output. The most important aspect of the defense involves clearly distinguishing two kinds of perceptuo-behavioral skills—the formation of dispositions, and a capacity for emulation. The former, the formation of dispositions, is argued to be the central pivot of spatial content. I provide a neural information processing interpretation of what these dispositions amount to, and describe how dispositions, so understood, are an obvious implementation of Gareth Evans' proposal on the topic. Furthermore, I describe what sorts of contribution are made by emulation mechanisms, and I also describe exactly how the emulation framework differs from similar but distinct notions with which it is often unhelpfully confused, such as sensorimotor contingencies and forward models.
A number of recent attempts to bridge Husserlian phenomenology of time consciousness and contemporary tools and results from cognitive science or computational neuroscience are described and critiqued. An alternate proposal is outlined that lacks the weaknesses of existing accounts.
... there are cases in which on the basis of a temporally extended content of consciousness a unitary apprehension takes place which is spread out over a temporal interval (the so-called specious present). ... That several successive tones yield a melody is possible only in this way, that the succession of psychical processes are united "forthwith" in a common structure.
The emulation theory of representation articulated in the target article is further explained and explored in this response to commentaries. Major topics include: the irrelevance of equilibrium-point and related models of motor control to the theory; clarification of the particular sense of “representation” which the emulation theory of representation is an account of; the relation between the emulation framework and Kalman filtering; and addressing the empirical data considered to be in conflict with the emulation theory. In addition, I discuss the further empirical support for the emulation theory provided by some commentators, as well as a number of suggested theoretical applications.
The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain. The framework is based on constructs from control theory (forward models) and signal processing (Kalman filters). The idea is that in addition to simply engaging with the body and environment, the brain constructs neural circuits that act as models of the body and environment. During overt sensorimotor engagement, these models are driven by efference copies in parallel with the body and environment, in order to provide expectations of the sensory feedback, and to enhance and process sensory information. These models can also be run off-line in order to produce imagery, estimate outcomes of different actions, and evaluate and develop motor plans. The framework is initially developed within the context of motor control, where it has been shown that inner models running in parallel with the body can reduce the effects of feedback delay problems. The same mechanisms can account for motor imagery as the off-line driving of the emulator via efference copies. The framework is extended to account for visual imagery as the off-line driving of an emulator of the motor-visual loop. I also show how such systems can provide for amodal spatial imagery. Perception, including visual perception, results from such models being used to form expectations of, and to interpret, sensory input. I close by briefly outlining other cognitive functions that might also be synthesized within this framework, including reasoning, theory of mind phenomena, and language. Key Words: efference copies; emulation theory of representation; forward models; Kalman filters; motor control; motor imagery; perception; visual imagery.
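The parallel-driving idea in this abstract can be made concrete with a toy sketch. This is my own illustration, not the article's model; the class names and the trivial dynamics are assumptions. An inner model driven by efference copies runs in parallel with the "plant" (body), supplying immediate mock feedback that sidesteps sensory delay, and can be decoupled and run off-line to produce imagery.

```python
# Toy sketch of an emulator driven by efference copies (illustrative only).

class Plant:
    """A trivially simple body: position integrates motor commands,
    but sensory feedback arrives only after a delay."""
    def __init__(self, delay=3):
        self.pos = 0.0
        self.channel = [0.0] * delay      # delayed sensory channel

    def step(self, command):
        self.pos += command
        self.channel.append(self.pos)
        return self.channel.pop(0)        # stale feedback

class Emulator:
    """Inner model with the same dynamics, driven by efference copies."""
    def __init__(self):
        self.pos = 0.0

    def step(self, efference_copy):
        self.pos += efference_copy
        return self.pos                   # immediate predicted feedback

plant, emulator = Plant(), Emulator()
for cmd in [0.5, 0.5, -0.2]:
    delayed = plant.step(cmd)             # real feedback, three steps old
    predicted = emulator.step(cmd)        # available right away

# Off-line mode: drive a fresh emulator with covert commands and no plant;
# the rolled-out states play the role of motor/visual imagery.
offline = Emulator()
imagery = [offline.step(c) for c in (0.1, 0.1, 0.1)]
```

The design point is the one the abstract makes: the emulator's prediction is available immediately while the plant's feedback lags, which is why inner models running in parallel with the body can mitigate feedback delay; a full treatment would blend prediction with delayed sensory signals, Kalman-filter style.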
I argue against a growing radical trend in current theoretical cognitive science that moves from the premises of embedded cognition, embodied cognition, dynamical systems theory and/or situated robotics to conclusions either to the effect that the mind is not in the brain or that cognition does not require representation, or both. I unearth the considerations at the foundation of this view: Haugeland's bandwidth-component argument to the effect that the brain is not a component in cognitive activity, and arguments inspired by dynamical systems theory and situated robotics to the effect that cognitive activity does not involve representations. Both of these strands depend not only on a shift of emphasis from higher cognitive functions to things like sensorimotor processes, but also depend on a certain understanding of how sensorimotor processes are implemented - as closed-loop control systems. I describe a much more sophisticated model of sensorimotor processing that is not only more powerful and robust than simple closed-loop control, but for which there is great evidence that it is implemented in the nervous system. This is the emulation theory of representation, according to which the brain constructs inner dynamical models, or emulators, of the body and environment which are used in parallel with the body and environment to enhance motor control and perception and to provide faster feedback during motor processes, and can be run off-line to produce imagery and evaluate sensorimotor counterfactuals. I then show that the emulation framework is immune to the radical arguments, and makes apparent why the brain is a component in the cognitive activity, and exactly what the representations are in sensorimotor control.
Philosophy interfaces with cognitive science in three distinct but related areas. First, there is the usual set of issues that fall under the heading of philosophy of science (explanation, reduction, etc.), applied to the special case of cognitive science. Second, there is the endeavor of taking results from cognitive science as bearing upon traditional philosophical questions about the mind, such as the nature of mental representation, consciousness, free will, perception, emotions, memory, etc. Third.
In this reply we claim that, contra Dreyfus, the kinds of skillful performances Dreyfus discusses _are_ representational. We explain this proposal, and then defend it against an objection to the effect that the representational notion we invoke is a weak one countenancing only some global state of an organism as a representation. According to this objection, such a representation is not a robust, projectible property of an organism, and hence will gain no explanatory leverage in cognitive scientific explanations. We argue on conceptual and empirical grounds that the representations we have identified are not weak unprojectible global states of organisms, but instead genuinely explanatory representational parts of persons.
I examine one of the conceptual cornerstones of the field known as computational neuroscience, especially as articulated in Churchland et al. (1990), an article that is arguably the locus classicus of this term and its meaning. The authors of that article try, but I claim ultimately fail, to mark off the enterprise of computational neuroscience as an interdisciplinary approach to understanding the cognitive, information-processing functions of the brain. The failure is a result of the fact that the authors provide no principled means to distinguish the study of neural systems as genuinely computational/information-processing from the study of any complex causal process. I then argue for two things. First, that in order to appropriately mark off computational neuroscience, one must be able to assign a semantics to the states over which an attempt to provide a computational explanation is made. Second, I show that neither of the two most popular ways of trying to effect such content assignation -- informational semantics and 'biosemantics' -- can make the required distinction, at least not in a way that a computational neuroscientist should be happy about. The moral of the story as I take it is not a negative one to the effect that computational neuroscience is in principle incapable of doing what it wants to do. Rather, it is to point out some work that remains to be done.
The problem of how physical systems, such as brains, come to represent themselves as subjects in an objective world is addressed. I develop an account of the requirements for this ability that draws on and refines work in a philosophical tradition that runs from Kant through Peter Strawson to Gareth Evans. The basic idea is that the ability to represent oneself as a subject in a world whose existence is independent of oneself involves the ability to represent space, and in particular, to represent oneself as one object among others in an objective spatial realm. In parallel, I provide an account of how this ability, and the mechanisms that support it, are realized neurobiologically. This aspect of the article draws on, and refines, work done in the neurobiology and psychology of egocentric and allocentric spatial representation.
There is a definite challenge in the air regarding the pivotal notion of internal representation. This challenge is explicit in, e.g., van Gelder, 1995; Beer, 1995; Thelen & Smith, 1994; Wheeler, 1994; and elsewhere. We think it is a challenge that can be met and that (importantly) can be met by arguing from within a general framework that accepts many of the basic premises of the work (in new robotics and in dynamical systems theory) that motivates such scepticism in the first place. Our strategy will be as follows. We begin (Section 1) by offering an account (an example and something close to a definition) of what we shall term Minimal Robust Representationalism (MRR). Sections 2 & 3 address some likely worries and questions about this notion. We end (Section 4) by making explicit the conditions under which, on our account, a science (e.g., robotics) may claim to be addressing cognitive phenomena.
I have argued elsewhere that imagery and representation are best explained as the result of operations of neurally implemented emulators of an agent's body and environment. In this article I extend the theory of emulation to address perceptual processing as well. The key notion will be that of an emulator of an agent's egocentric behavioral space. This emulator, when run off-line, produces mental imagery, including transformations such as visual image rotations. However, while on-line, it is used to process information from sensory systems, resulting in perception (in this regard, the theory is similar to that proposed by Kosslyn (1994)). This emulator is what provides the theory in theory-laden perception. I close by arguing briefly that the spatial character of perception is to be explained as the contribution of the egocentric behavioral space emulator.
It is well-known that Evans laid the groundwork for a truly radical and fruitful theory of _content_ -- a theory according to which content is a genus with at least conceptual and nonconceptual varieties as species, and in which nonconceptual content plays a very significant role. It is less well-recognized that Evans was also in the process of working out the details of a truly radical and groundbreaking theory of _representation_, a task he was unfortunately unable to bring to any satisfactory stage of fruition. I am here drawing the distinction between a theory of.
In this article I outline, apply, and defend a theory of natural representation. The main consequences of this theory are: i) representational status is a matter of how physical entities are used, and specifically is not a matter of causation, nomic relations with the intentional object, or information; ii) there are genuine (brain-)internal representations; iii) such representations are really representations, and not just farcical pseudo-representations, such as attractors, principal components, state-space partitions, or what-have-you; and iv) the theory allows us to sharply distinguish those complex behaviors which are genuinely cognitive from those which are merely complex and adaptive.
Using the Gödel Incompleteness Result for leverage, Roger Penrose has argued that the mechanism for consciousness involves quantum gravitational phenomena, acting through microtubules in neurons. We show that this hypothesis is implausible. First, the Gödel Result does not imply that human thought is in fact non-algorithmic. Second, whether or not non-algorithmic quantum gravitational phenomena actually exist, and if they did how that could conceivably implicate microtubules, and if microtubules were involved, how that could conceivably implicate consciousness, is entirely speculative. Third, cytoplasmic ions such as calcium and sodium are almost certainly present in the microtubule pore, barring the quantum mechanical effects Penrose envisages. Finally, physiological evidence indicates that consciousness does not directly depend on microtubule properties in any case, rendering doubtful any theory according to which consciousness is generated in the microtubules.