The reconciliation of theories of concepts based on prototypes, exemplars, and theory-like structures is a longstanding problem in cognitive science. In response to this problem, researchers have recently tended to adopt either hybrid theories that combine various kinds of representational structure, or eliminative theories that replace concepts with a more finely grained taxonomy of mental representations. In this paper, we describe an alternative approach involving a single class of mental representations called “semantic pointers.” Semantic pointers are symbol-like representations that result from the compression and recursive binding of perceptual, lexical, and motor representations, effectively integrating traditional connectionist and symbolic approaches. We present a computational model using semantic pointers that replicates experimental data from categorization studies involving each prior paradigm. We argue that a framework involving semantic pointers can provide a unified account of conceptual phenomena, and we compare our framework to existing alternatives in accounting for the scope, content, recursive combination, and neural implementation of concepts.
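Since semantic pointers are defined by compression and binding, a minimal sketch may help fix ideas. The snippet below implements circular-convolution binding and approximate unbinding, the operations used in Holographic Reduced Representations and in semantic-pointer models; the vector names are hypothetical stand-ins, and real semantic pointers compress structured perceptual, lexical, and motor representations rather than random vectors.

```python
import numpy as np

def bind(a, b):
    """Circular convolution via FFT: the binding operation."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def unbind(c, a):
    """Approximately recover b from c = bind(a, b) using a's involution."""
    a_inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(c, a_inv)

rng = np.random.default_rng(0)
d = 512
role = rng.normal(0, 1 / np.sqrt(d), d)    # hypothetical role vector (e.g., SHAPE)
filler = rng.normal(0, 1 / np.sqrt(d), d)  # hypothetical filler vector (e.g., ROUND)

pointer = bind(role, filler)               # compressed, symbol-like representation
recovered = unbind(pointer, role)
similarity = recovered @ filler / (np.linalg.norm(recovered) * np.linalg.norm(filler))
print(f"similarity of recovered filler to original: {similarity:.2f}")  # well above chance
```

Because the result of binding is a vector of the same dimensionality as its inputs, the operation can be applied recursively, which is what allows a semantic pointer to carry structure while remaining compressed.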
In a recent series of publications, dynamicist researchers have proposed a new conception of cognitive functioning. This conception is intended to replace the currently dominant theories of connectionism and symbolicism. The dynamicist approach to cognitive modeling employs concepts developed in the mathematical field of dynamical systems theory. Dynamicists claim that cognitive models should be embedded, low-dimensional, complex, described by coupled differential equations, and non-representational. In this paper I begin with a short description of the dynamicist project and its role as a cognitive theory. Subsequently, I determine the theoretical commitments of dynamicists, critically examine those commitments, and discuss current examples of dynamicist models. In conclusion, I determine dynamicism's relation to symbolicism and connectionism and find that the dynamicist goal of establishing a new paradigm has yet to be realized.
Questions concerning the nature of representation and what representations are about have been a staple of Western philosophy since Aristotle. Recently, these same questions have begun to concern neuroscientists, who have developed new techniques and theories for understanding how the locus of neurobiological representation, the brain, operates. My dissertation draws on philosophy and neuroscience to develop a novel theory of representational content.
I argue that of the four kinds of quantitative description relevant for understanding brain function, a control theoretic approach is most appealing. This argument proceeds by comparing computational, dynamical, statistical, and control theoretic approaches, and identifying criteria for a good description of brain function. These criteria include providing useful decompositions, simple state mappings, and the ability to account for variability. The criteria are justified by their importance in providing unified accounts of multi-level mechanisms that support intervention. Evaluation of the four kinds of description with respect to these criteria supports the claim that control theoretic characterizations of brain function are the kind of quantitative description we ought to provide.

Keywords: Neural computation; Control theory; Neural representation; Cognitive architecture; Cognitive models; Theoretical neuroscience.
We argue that computation via quantum mechanical processes is irrelevant to explaining how brains produce thought, contrary to the ongoing speculations of many theorists. First, quantum effects do not have the temporal properties required for neural information processing. Second, there are substantial physical obstacles to any organic instantiation of quantum computation. Third, there is no psychological evidence that such mental phenomena as consciousness and mathematical thinking require explanation via quantum theory. We conclude that understanding brain function is unlikely to require quantum computation or similar mechanisms.
To have a fully integrated understanding of neurobiological systems, we must address two fundamental questions: 1. What do brains do (what is their function)? and 2. How do brains do whatever it is that they do (how is that function implemented)? I begin by arguing that these questions are necessarily inter-related. Thus, addressing one without consideration of an answer to the other, as is often done, is a mistake. I then describe what I take to be the best available approach to addressing both questions. Specifically, to address 2, I adopt the Neural Engineering Framework (NEF) of Eliasmith & Anderson [Neural engineering: Computation, representation, and dynamics in neurobiological systems. Cambridge, MA: MIT Press, 2003], which identifies implementational principles for neural models. To address 1, I suggest that adopting statistical modeling methods for perception and action will be functionally sufficient for capturing biological behavior. I show how these two answers will be mutually constraining, since the process of model selection for the statistical method in this approach can be informed by known anatomical and physiological properties of the brain, captured by the NEF. Similarly, the application of the NEF must be informed by functional hypotheses, captured by the statistical modeling approach.
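To give a flavor of what the NEF's implementational principles look like in practice, here is a toy, rate-based sketch of its encoding/decoding principle: a represented value drives a heterogeneous population of neurons (here rectified-linear rather than spiking, a simplifying assumption), and a linear least-squares decoder reads the value back out. This is an illustrative caricature, not the framework itself, which adds further principles for transformation and dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_points = 50, 200
x = np.linspace(-1, 1, n_points)            # the represented scalar value

# Heterogeneous tuning: random preferred directions, gains, and biases
encoders = rng.choice([-1.0, 1.0], n_neurons)
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)

# Encoding: nonlinear (rectified) population responses to the value
rates = np.maximum(0.0, gains[:, None] * encoders[:, None] * x[None, :] + biases[:, None])

# Decoding: linear least-squares decoders, x_hat = decoders . rates(x)
decoders, *_ = np.linalg.lstsq(rates.T, x, rcond=None)
x_hat = rates.T @ decoders
print(f"decoding RMSE: {np.sqrt(np.mean((x - x_hat) ** 2)):.4f}")
```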
Inductive reasoning is a fundamental and complex aspect of human intelligence. In particular, how do subjects, given a set of particular examples, generate general descriptions of the rules governing that set? We present a biologically plausible method for accomplishing this task and implement it in a spiking neuron model. We demonstrate the success of this model by applying it to the problem domain of Raven's Progressive Matrices, a widely used tool in the field of intelligence testing. The model is able to generate the rules necessary to correctly solve Raven's items, as well as recreate many of the experimental effects observed in human subjects.
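A toy version of the induction step can be sketched with vector algebra: if each example pair instantiates a common transformation under circular-convolution binding, a rule can be induced by averaging the transformations implied by the pairs and then applied to a novel item. The setup below is hypothetical and much simpler than the published model, which performs this with spiking neurons over structured visual representations.

```python
import numpy as np

def bind(a, b):
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def inverse(a):
    return np.concatenate(([a[0]], a[:0:-1]))  # approximate convolutive inverse

rng = np.random.default_rng(2)
d = 512
unit = lambda: rng.normal(0, 1 / np.sqrt(d), d)

true_rule = unit()                  # hidden transformation generating the examples
examples = []
for _ in range(4):
    a = unit()
    examples.append((a, bind(true_rule, a)))

# Induce the rule by averaging the transformation implied by each pair
induced = np.mean([bind(b, inverse(a)) for a, b in examples], axis=0)

a_new = unit()
predicted = bind(induced, a_new)
target = bind(true_rule, a_new)
cos = predicted @ target / (np.linalg.norm(predicted) * np.linalg.norm(target))
print(f"similarity of prediction to correct completion: {cos:.2f}")  # well above chance
```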
The complex systems approach (CSA) to characterizing cognitive function is purported by its proponents to underlie a conceptual and methodological revolution. I examine one central claim from each of the contributed papers and argue that the provided examples do not justify calls for radical change in how we do cognitive science. Instead, I note how currently available approaches in "standard" cognitive science are adequate (or even more appropriate) for understanding the examples the CSA papers provide.
Peter Achinstein (1990, 1991) analyses the scientific debate that took place in the eighteenth and nineteenth centuries concerning the nature of light. He offers a probabilistic account of the methods employed by both particle theorists and wave theorists, and rejects any analysis of this debate in terms of coherence. He characterizes coherence through reference to William Whewell's writings concerning how "consilience of inductions" establishes an acceptable theory (Whewell, 1847). Achinstein rejects this analysis because of its vagueness and lack of reference to empirical data, concluding that coherence is insufficient to account for the belief change that took place during the wave-particle debate.
Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony, “mesh” binding, and conjunctive binding. Recent theoretical work has suggested that most of these methods will not scale well, that is, that they cannot encode structured representations using any of the tens of thousands of terms in the adult lexicon without making implausible resource assumptions. Here, we empirically demonstrate that the biologically plausible structured representations employed in the Semantic Pointer Architecture approach to modeling cognition do scale appropriately. Specifically, we construct a spiking neural network of about 2.5 million neurons that employs semantic pointers to successfully encode and decode the main lexical relations in WordNet, which has over 100,000 terms. In addition, we show that the same representations can be employed to construct recursively structured sentences consisting of arbitrary WordNet concepts, while preserving the original lexical structure. We argue that these results suggest that semantic pointers are uniquely well-suited to providing a biologically plausible account of the structured representations that underwrite human cognition.
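The encoding scheme can be illustrated at a small scale. In the sketch below, a term's semantic pointer bundles its lexical relations as bound role-filler pairs, and a relation is decoded by unbinding followed by a cleanup against the vocabulary. The five-word vocabulary and relation names are hypothetical stand-ins for the WordNet-scale spiking network described above.

```python
import numpy as np

def bind(a, b):
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def inverse(a):
    return np.concatenate(([a[0]], a[:0:-1]))

rng = np.random.default_rng(3)
d = 1024
vocab = {w: rng.normal(0, 1 / np.sqrt(d), d)
         for w in ["canine", "tail", "pet", "IS_A", "HAS_PART"]}

# A term's semantic pointer bundles its lexical relations
dog = bind(vocab["IS_A"], vocab["canine"]) + bind(vocab["HAS_PART"], vocab["tail"])

# Decode one relation by unbinding, then clean up against the vocabulary
noisy = bind(dog, inverse(vocab["IS_A"]))
cleaned = max(vocab, key=lambda w: noisy @ vocab[w])
print(cleaned)  # expected: "canine"
```

The cleanup step is what keeps the noise from compression under control: each unbinding yields only an approximation, and an associative memory snaps it back to the nearest known vector.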
I argue that dynamicism does not provide a convincing alternative to currently available cognitive theories. First, I show that the attractor dynamics of dynamicist models are inadequate for accounting for high-level cognition. Second, I argue that dynamicist arguments for the rejection of computation and representation are unsound in light of recent empirical findings. This new evidence provides a basis for questioning the importance of continuity to cognitive function, challenging a central commitment of dynamicism. Coupled with a defense of current connectionist theory, these two critiques lead to the conclusion that dynamicists have failed to achieve their goal of providing a new paradigm for understanding cognition.
It will always remain a remarkable phenomenon in the history of philosophy, that there was a time, when even mathematicians, who at the same time were philosophers, began to doubt, not of the accuracy of their geometrical propositions so far as they concerned space, but of their objective validity and the applicability of this concept itself, and of all its corollaries, to nature. They showed much concern whether a line in nature might not consist of physical points, and consequently that true space in the object might consist of simple [discrete] parts, while the space which the geometer has in his mind [being continuous] cannot be such.
The properties of Turing’s famous ‘universal machine’ have long sustained functionalist intuitions about the nature of cognition. Here, I show that there is a logical problem with standard functionalist arguments for multiple realizability. These arguments rely essentially on Turing’s powerful insights regarding computation. In addressing a possible reply to this criticism, I further argue that functionalism is not a useful approach for understanding what it is to have a mind. In particular, I show that the difficulties involved in distinguishing implementation from function make multiple realizability claims untestable and uninformative. As a result, I conclude that the role of Turing machines in philosophy of mind needs to be reconsidered.
It has been suggested that Marr took the three levels he famously identifies to be independent. In this paper, we argue that Marr's view is more nuanced. Specifically, we show that the view explicitly articulated in his work attempts to integrate the levels, and that in integrating them Marr attacks both reductionism and vagueness. The result is a perspective in which both high-level information-processing constraints and low-level implementational constraints play mutually reinforcing and constraining roles. We discuss our recent work on Spaun—currently the world's largest functional brain model—that demonstrates the positive impact of this kind of unifying integration of Marr's levels. We argue that this kind of integration avoids his concerns with both reductionism and vagueness. In short, we suggest that the methods behind Spaun can be used to satisfy Marr's explicit interest in combining high-level functional and detailed mechanistic explanations.
Van Gelder (1995) has recently spearheaded a movement to challenge the dominance of connectionist and classicist models in cognitive science. The dynamical conception of cognition is van Gelder's replacement for the computation-bound paradigms provided by connectionism and classicism. He relies on the Watt governor to fulfill the role of a dynamicist Turing machine and claims that the Motivational Oscillatory Theory (MOT) provides a sound empirical basis for dynamicism. In other words, the Watt governor is to be the theoretical exemplar of the class of systems necessary for cognition and MOT is an empirical instantiation of that class. However, I shall argue that neither the Watt governor nor MOT successfully fulfills these prescribed roles. This failure, along with van Gelder's peculiar use of the concept of computation and his struggle with representationalism, prevents him from providing a convincing alternative to current cognitive theories.
Van Gelder has presented a position which he ties closely to a broad class of models known as dynamical models. While supporting many of his broader claims about the importance of this class (as has been argued by connectionists for quite some time), I note that there are a number of unique characteristics of his brand of dynamicism. I suggest that these characteristics engender difficulties for his view.
We use a spiking neural network model of working memory (WM) capable of performing the spatial delayed response task (DRT) to investigate two drugs that affect WM: guanfacine (GFC) and phenylephrine (PHE). In this model, the loss of information over time results from changes in the spiking neural activity through recurrent connections. We reproduce the standard forgetting curve and then show that this curve changes in the presence of GFC and PHE, whose application is simulated by manipulating functional, neural, and biophysical properties of the model. In particular, applying GFC causes increased activity in neurons that are sensitive to the information currently being remembered, while applying PHE leads to decreased activity in these same neurons. Interestingly, these differential effects emerge from network-level interactions because GFC and PHE affect all neurons equally. We compare our model to both electrophysiological data from neurons in monkey dorsolateral prefrontal cortex and behavioral evidence from monkeys performing the DRT.
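At the level of intuition, the network's forgetting and the two drugs' opposite effects can be caricatured as changes in the effective gain of a recurrent memory loop. The sketch below is a one-dimensional, rate-level toy built on that assumption; it is not the spiking model from the paper, where the drug effects emerge from biophysical manipulations rather than a hand-set gain.

```python
import numpy as np

def delay_trial(gain, steps=100, noise=0.05, rng=None):
    """Hold a one-dimensional cue in a leaky recurrent loop, then report it."""
    rng = rng or np.random.default_rng()
    m = 1.0                                  # encoded cue at the start of the delay
    for _ in range(steps):
        m = gain * m + noise * rng.normal()  # recurrent feedback plus noise
    return m

rng = np.random.default_rng(4)
conditions = [("baseline", 0.995),
              ("GFC-like (stronger effective recurrence)", 0.999),
              ("PHE-like (weaker effective recurrence)", 0.99)]
for label, gain in conditions:
    errors = [abs(1.0 - delay_trial(gain, rng=rng)) for _ in range(500)]
    print(f"{label}: mean |error| = {np.mean(errors):.3f}")
```

Running this reproduces the qualitative pattern: stronger recurrence retains the cue across the delay, weaker recurrence forgets it faster.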
The ability to improve in speed and accuracy as a result of repeating some task is an important hallmark of intelligent biological systems. Although gradual behavioral improvements from practice have been modeled in spiking neural networks, few such models have attempted to explain cognitive development of a task as complex as addition. In this work, we model the progression from a counting-based strategy for addition to a recall-based strategy. The model consists of two networks working in parallel: a slower basal ganglia loop and a faster cortical network. The slow network methodically computes the count from one digit given another, corresponding to the addition of two digits, whereas the fast network gradually “memorizes” the output from the slow network. The faster network eventually learns how to add the same digits that initially drove the behavior of the slower network. Performance of this model is demonstrated by simulating a fully spiking neural network that includes basal ganglia, thalamus, and various cortical areas. Consequently, the model incorporates various neuroanatomical data in terms of the brain areas used for calculation, and makes psychologically testable predictions related to the frequency of rehearsal. Furthermore, the model replicates the developmental progression through addition strategies in terms of reaction times and accuracy, and naturally explains observed symptoms of dyscalculia.
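The division of labor between the two networks can be caricatured in a few lines: a slow counting routine answers at first, while a fast memo gradually gains retrieval strength for each practiced problem and takes over, which is what produces the speedup with practice. This toy omits the basal ganglia and cortical implementation entirely; the class name and the strength increment are hypothetical.

```python
import random

class DualProcessAdder:
    """Toy counting-to-recall shift: a slow counting route plus a fast
    memo whose retrieval strength grows with each rehearsal."""

    def __init__(self, increment=0.2):
        self.memory = {}          # (a, b) -> (answer, retrieval strength)
        self.increment = increment

    def add(self, a, b):
        answer, strength = self.memory.get((a, b), (None, 0.0))
        if answer is not None and random.random() < strength:
            return answer, "recall"            # fast route (cortical analogue)
        result = a
        for _ in range(b):                     # slow route: count up from a
            result += 1
        self.memory[(a, b)] = (result, min(1.0, strength + self.increment))
        return result, "count"

random.seed(0)
adder = DualProcessAdder()
for trial in range(8):
    value, route = adder.add(3, 4)
    print(f"trial {trial}: 3 + 4 = {value} via {route}")
```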
Quantum probability (QP) theory can be seen as a type of vector symbolic architecture (VSA): mental states are vectors that store structured information and are manipulated using algebraic operations. Furthermore, the operations needed by QP match those in other VSAs. This allows existing biologically realistic neural models to be adapted to provide a mechanistic explanation of the cognitive phenomena described in the target article by Pothos & Busemeyer (P&B).
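The linear-algebraic core that a VSA would need to supply here is small: states are unit vectors, questions are subspaces, response probabilities are squared projections, and answering collapses the state. The bare arithmetic is sketched below with a random, purely illustrative subspace; the point of the commentary is that these operations line up with ones VSAs already implement neurally.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 64

# Mental state as a unit vector (a QP state / a VSA representation)
psi = rng.normal(size=d)
psi /= np.linalg.norm(psi)

# A "question" as a low-dimensional subspace with an orthonormal basis
basis, _ = np.linalg.qr(rng.normal(size=(d, 8)))

projected = basis @ (basis.T @ psi)   # project the state onto the subspace
p_yes = projected @ projected         # squared length = probability of "yes"
print(f"P(yes) = {p_yes:.3f}")

# Answering "yes" collapses the state to the normalized projection,
# which is what produces QP's order effects for subsequent questions
psi_post = projected / np.linalg.norm(projected)
```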
In this article, we highlight three questions: (1) Does human cognition rely on structured internal representations? (2) How should theories, models and data relate? (3) In what ways might embodiment, action and dynamics matter for understanding the mind and the brain?
There has been a long-standing debate between symbolicists and connectionists concerning the nature of representation used by human cognizers. In general, symbolicist commitments have allowed them to provide superior models of high-level cognitive function. In contrast, connectionist distributed representations are preferred for providing a description of low-level cognition. The development of Holographic Reduced Representations (HRRs) has opened the possibility of one representational medium unifying both low-level and high-level descriptions of cognition. This paper describes the relative strengths and weaknesses of symbolic and distributed representations. HRRs are shown to capture the important strengths of both types of representation. These properties of HRRs allow a rebuttal of Fodor and McLaughlin's (1990) criticism that distributed representations are not adequately structure sensitive to provide a full account of human cognition.
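The structure sensitivity at issue can be demonstrated directly. In the hypothetical sketch below, a proposition is encoded as a sum of role-filler bindings, embedded inside another proposition, and the embedded agent is recovered by unbinding twice and cleaning up against the vocabulary, showing that a single distributed vector retains usable constituent structure.

```python
import numpy as np

def bind(a, b):
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def inverse(a):
    return np.concatenate(([a[0]], a[:0:-1]))

rng = np.random.default_rng(6)
d = 1024
vocab = {w: rng.normal(0, 1 / np.sqrt(d), d)
         for w in ["agent", "verb", "theme", "prop",
                   "dog", "cat", "chase", "john", "believe"]}
v = vocab

# chase(dog, cat) as a single distributed vector of role-filler bindings
chase_dc = (bind(v["agent"], v["dog"]) + bind(v["verb"], v["chase"])
            + bind(v["theme"], v["cat"]))

# Recursive embedding: believe(john, chase(dog, cat))
belief = (bind(v["agent"], v["john"]) + bind(v["verb"], v["believe"])
          + bind(v["prop"], chase_dc))

# Unbind twice to ask: who is the agent of the believed proposition?
embedded = bind(belief, inverse(v["prop"]))
agent = bind(embedded, inverse(v["agent"]))
print(max(vocab, key=lambda w: agent @ vocab[w]))  # expected: "dog"
```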
Many contemporary philosophers favor coherence theories of knowledge (Bender 1989, BonJour 1985, Davidson 1986, Harman 1986, Lehrer 1990). But the nature of coherence is usually left vague, with no method provided for determining whether a belief should be accepted or rejected on the basis of its coherence or incoherence with other beliefs. Haack's (1993) explication of coherence relies largely on an analogy between epistemic justification and crossword puzzles. We show in this paper how epistemic coherence can be understood in terms of maximization of constraint satisfaction, in keeping with computational models that have had a substantial impact in cognitive science. A coherence problem can be defined in terms of a set of elements and sets of positive and negative constraints between pairs of those elements. Algorithms are available for computing coherence by determining how to accept and reject elements in a way that satisfies the most constraints. Knowledge involves at least five different kinds of coherence - explanatory, analogical, deductive, perceptual, and conceptual - each requiring different sorts of elements and constraints.
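To make the formulation concrete, the sketch below sets up a tiny coherence problem and finds the accept/reject assignment that maximizes the weight of satisfied constraints by exhaustive search. The elements and weights are invented for illustration, and exhaustive search is feasible only at toy scale; the paper's point is that practical algorithms (connectionist ones among them) exist for larger problems.

```python
from itertools import product

elements = ["E1", "E2", "H1", "H2"]
# (a, b, weight): positive = the pair coheres, negative = it incoheres
constraints = [
    ("DATA", "E1", 2.0), ("DATA", "E2", 2.0),  # data priority: evidence coheres with DATA
    ("H1", "E1", 1.0), ("H1", "E2", 1.0),      # H1 explains both pieces of evidence
    ("H2", "E1", 1.0),                         # rival H2 explains only one
    ("H1", "H2", -2.0),                        # H1 and H2 incohere (rivals)
]

def coherence(assign):
    """Total weight of satisfied constraints: a positive constraint is
    satisfied when both elements share a status, a negative one when they differ."""
    score = 0.0
    for a, b, w in constraints:
        same = assign[a] == assign[b]
        if (w > 0) == same:
            score += abs(w)
    return score

# DATA is always accepted; search over accept/reject for the rest
best = max(
    ({"DATA": True, **dict(zip(elements, bits))}
     for bits in product([True, False], repeat=len(elements))),
    key=coherence,
)
print({k: ("accept" if v else "reject") for k, v in best.items()})
# expected: E1, E2, H1 accepted; H2 rejected
```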