The posterior parietal cortex and frontal eye field contain maps of visual salience on which the decision to choose a saccade may be based. However, an averaging express saccade is not represented by a victorious unimodal representation in the superior colliculus. Normalization as described by Findlay & Walker is not necessary for the generation of saccades.
A standing challenge for the science of mind is to account for the datum that every mind faces in the most immediate – that is, unmediated – fashion: its phenomenal experience. The complementary tasks of explaining what it means for a system to give rise to experience and what constitutes the content of experience (qualia) in computational terms are particularly challenging, given the multiple realizability of computation. In this paper, we identify a set of conditions that a computational theory must satisfy for it to constitute not just a sufficient but a necessary, and therefore naturalistic and intrinsic, explanation of qualia. We show that a common assumption behind many neurocomputational theories of the mind, according to which mind states can be formalized solely in terms of instantaneous vectors of activities of representational units such as neurons, does not meet the requisite conditions, in part because it relies on inactive units to shape presently experienced qualia and implies a homogeneous representation space, which is devoid of intrinsic structure. We then sketch a naturalistic computational theory of qualia, which posits that experience is realized by dynamical activity-space trajectories (rather than points) and that its richness is measured by the representational capacity of the trajectory space in which it unfolds.
The theoretical debate in linguistics during the past half-century bears an uncanny parallel to the politics of the (now defunct) Communist Bloc. The parallels are not so much in the revolutionary nature of Chomsky's ideas as in the Bolshevik manner of his takeover of linguistics (Koerner 1994) and in the Trotskyist (“permanent revolution”) flavor of the subsequent development of the doctrine of Transformational Generative Grammar (TGG) (Townsend & Bever 2001, pp. 37–40). By those standards, Jackendoff is quite a party faithful (a Khrushchev or a Dubcek, rather than a Solzhenitsyn or a Sakharov) who questions some of the components of the dogma, yet stops far short of repudiating it.
A view is put forward, according to which various aspects of the structure of the world as internalized by the brain take the form of “neural spaces,” a concrete counterpart for Shepard's “abstract” ones. Neural spaces may help us understand better both the representational substrate of cognition and the processes that operate on it. [Shepard].
Supposing the symbol system postulated by Barsalou is perceptual through and through -- what then? The target article outlines an intriguing and exciting theory of cognition in which (1) well-specified, event- or object-linked percepts assume the role traditionally allotted to abstract and arbitrary symbols, and (2) perceptual simulation is substituted for processes traditionally believed to require symbol manipulation, such as deductive reasoning. We take a more extreme stance on the role of perception (in particular, vision) in shaping cognition, and propose, in addition to Barsalou's postulates, that (3) spatial frames, endowed with a perceptual structure not unlike that of the retinotopic space, pervade all sensory modalities and are used to support compositionality.
The three commentaries of Van Orden, Spivey and Anderson, and Dietrich (with Markman’s as a backdrop) form a tableau that reminds me of a fable by Ivan Andreevich Krylov (1769–1844), in which a swan, a pike, and a crawfish undertake jointly to move a cart laden with goods. What transpires then is not unexpected: the swan strives skyward, the pike pulls toward the river, and the crawfish scrambles backward. The call for papers for the present ecumenically minded special issue of JETAI was designed to minimize this kind of discord, by charging the authors to examine the possibility of epistemological pluralism in cognitive science — a field whose very diversity makes fundamental disagreement more likely than in other sciences. No doubt, the road mapped out by the editor had been conceived with good intentions in mind, but where did it lead us? It has been said that no good intention must go unpunished. To celebrate this venerable academic tradition (and also because I have a reputation to maintain), the following remarks will therefore be mostly other than conciliatory; caveat lector.
The computational program for theoretical neuroscience initiated by Marr and Poggio (1977) calls for a study of biological information processing on several distinct levels of abstraction. At each of these levels — computational (defining the problems and considering possible solutions), algorithmic (specifying the sequence of operations leading to a solution) and implementational — significant progress has been made in the understanding of cognition. In the past three decades, computational principles have been discovered that are common to a wide range of functions in perception (vision, hearing, olfaction) and action (motor control). More recently, these principles have been applied to the analysis of cognitive tasks that require dealing with structured information, such as visual scene understanding and analogical reasoning. Insofar as language relies on cognition-general principles and mechanisms, it should be possible to capitalize on the recent advances in the computational study of cognition by extending its methods to linguistics.
A metaphor that has dominated linguistics for the entire duration of its existence as a discipline views sentences as edifices consisting of Lego-like building blocks. It is assumed that each sentence is constructed (and, on the receiving end, parsed) de novo, proceeding from atomic constituents to logical semantic specifications, in a recursive process governed by a few precise algebraic rules. The assumptions underlying the Lego metaphor, as it is expressed in generative grammar theories, are: (1) perfect regularity of what Saussure called langue, (2) potentially infinite recursivity of syntactic structures, (3) unlimited human capacity for linguistic creativity, (4) the impossibility of acquiring structural knowledge from examples, and (5) the impossibility of such knowledge being stored in a memory-intensive form (ensembles of exemplars).
The Dynamic Core and Global Workspace hypotheses were independently put forward to provide mechanistic and biologically plausible accounts of how brains generate conscious mental content. The Dynamic Core proposes that reentrant neural activity in the thalamocortical system gives rise to conscious experience. Global Workspace reconciles the limited capacity of momentary conscious content with the vast repertoire of long-term memory. In this paper we show the close relationship between the two hypotheses. This relationship allows for a strictly biological account of phenomenal experience and subjectivity that is consistent with mounting experimental evidence. We examine the constraints on causal analyses of consciousness and suggest that there is now sufficient evidence to consider the design and construction of a conscious artifact.
(a) Learn a grammar G_A for the source language (A). (b) Estimate a structural statistical language model SSLM_A for (A). Given a grammar (consisting of terminals and nonterminals) and a partial sentence (a sequence of terminals t_1 … t_i), an SSLM assigns probabilities to the possible choices of the next terminal t_{i+1}.
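The next-terminal prediction task can be sketched in code. The following toy estimator is only a stand-in: it conditions on the previous terminal alone (a plain bigram model), whereas an SSLM proper conditions on the structural analysis of the partial sentence; the corpus and all names here are made up.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus of terminal sequences (sentences).
corpus = [
    ["the", "cat", "sat"],
    ["the", "dog", "sat"],
    ["the", "cat", "ran"],
]

# Count bigrams (previous terminal -> next terminal), with a start marker.
bigrams = defaultdict(Counter)
for sentence in corpus:
    prev = "<s>"
    for t in sentence:
        bigrams[prev][t] += 1
        prev = t

def next_probs(prefix):
    """Distribution over the next terminal t_{i+1} given the partial
    sentence t_1 ... t_i (here conditioned only on t_i, a crude
    stand-in for full structural conditioning)."""
    prev = prefix[-1] if prefix else "<s>"
    counts = bigrams[prev]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

print(next_probs(["the"]))  # e.g. {'cat': 0.666..., 'dog': 0.333...}
```

A structural model would replace the conditioning variable `prev` with the current state of the partial parse, but the interface (prefix in, next-terminal distribution out) is the same.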
Although computational considerations suggest that a resource-limited memory system may have to trade off capacity for generalization ability, such a trade-off has not been demonstrated in the past. We describe a simple model of memory that exhibits this trade-off and characterize its performance in a variety of tasks.
Lasnik’s review of the Minimalist program in syntax offers cognitive scientists help in navigating some of the arcana of current theoretical thinking in transformational generative grammar. One may observe, however, that this journey is more like a taxi ride gone bad than a free tour: it is the driver who decides on the itinerary, and questioning his choice may get you kicked out. Meanwhile, the meter in the cab of the generative theory of grammar is running, and has been since the publication of Chomsky’s Syntactic Structures in 1957. The fare that it ran up is none the less daunting for the detours made in his Aspects of the Theory of Syntax in 1965, Government and Binding in 1981, and now The Minimalist Program, in 1995. Paraphrasing Winston Churchill, it seems that never in the field of cognitive science was so much owed by so many of us to so few (the generative linguists). For most of us in the cognitive sciences this situation will appear quite benign (that is, if we don’t hold a grudge for having been taken for a longer than necessary ride), if we realize that it is the generative linguists who should by rights be paying this bill. The reason for that is simple and well known in the philosophy of science: putting forward a theory is like taking out a loan, to be repaid by gleaning an empirical basis for it; theories that fail to do so (or their successors that may have bought their debts) are declared bankrupt. In the sciences of the mind, this maxim translates into the need to demonstrate the psychological (behavioral) and, eventually, the neurobiological reality of the theoretical constructs. Many examples of this process can be found in the study of human vision, where, as in language, direct observation of the underlying mechanisms is difficult; for instance, the concept of multiple parallel spatial frequency channels, introduced in the late 1960s, was completely vindicated by purely behavioral means over the following decade.
In linguistics, the nature of the requisite evidence is well described by Townsend and Bever: “What do we test today if we want to explore the behavioral implications of syntax? …”
We describe a pattern acquisition algorithm that learns, in an unsupervised fashion, a streamlined representation of linguistic structures from a plain natural-language corpus. This paper addresses the issues of learning structured knowledge from a large-scale natural language data set, and of generalization to unseen text. The implemented algorithm represents sentences as paths on a graph whose vertices are words (or parts of words). Significant patterns, determined by recursive context-sensitive statistical inference, form new vertices. Linguistic constructions are represented by trees composed of significant patterns and their associated equivalence classes. An input module allows the algorithm to be subjected to a standard test of English as a Second Language (ESL) proficiency. The results are encouraging: the model attains a level of performance considered to be “intermediate” for 9th-grade students, despite having been trained on a corpus (CHILDES) containing transcribed speech of parents directed to small children.
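As a loose illustration of the pattern-acquisition idea (not the actual algorithm), the sketch below treats sentences as word sequences, promotes sufficiently frequent n-grams to “significant patterns,” and rewrites sentences in terms of them. The real algorithm uses recursive, context-sensitive statistical inference over a graph; the bare frequency threshold, the corpus, and all names here are assumptions made for brevity.

```python
from collections import Counter

# Toy corpus; the frequency threshold below is a deliberately crude
# stand-in for the algorithm's statistical significance criterion.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat sat on a mat",
]
sentences = [s.split() for s in corpus]

# Count all contiguous bigrams and trigrams (candidate patterns).
ngrams = Counter()
for sent in sentences:
    for n in (2, 3):
        for i in range(len(sent) - n + 1):
            ngrams[tuple(sent[i:i + n])] += 1

# Promote sufficiently frequent n-grams to "significant patterns":
# each becomes a new unit that rewrites the paths it occurs on.
THRESHOLD = 3
patterns = {g for g, c in ngrams.items() if c >= THRESHOLD}

def rewrite(sent):
    """Greedily replace longest significant patterns with single units."""
    out, i = [], 0
    while i < len(sent):
        for n in (3, 2):
            g = tuple(sent[i:i + n])
            if g in patterns:
                out.append("_".join(g))
                i += n
                break
        else:
            out.append(sent[i])
            i += 1
    return out

print(rewrite("the cat sat on the mat".split()))
# e.g. ['the', 'cat', 'sat_on', 'the', 'mat']
```

Applying the promotion step recursively to the rewritten corpus is what lets hierarchical constructions (trees of patterns and equivalence classes) emerge.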
We describe a unified framework for the understanding of structure representation in primate vision. A model derived from this framework is shown to be effectively systematic in that it has the ability to interpret and associate together objects that are related through a rearrangement of common “middle-scale” parts, represented as image fragments. The model addresses the same concerns as previous work on compositional representation through the use of what+where receptive fields and attentional gain modulation. It does not require prior exposure to the individual parts, and avoids the need for abstract symbolic binding.
This article discusses the merits of teaching legal analysis and writing and of developing a legal writing program at a faculty of law, and recommends that law faculties around the world incorporate this subject. Once absent from the American law school curriculum, this subject has become a required subject in all American law schools over the past 25+ years. The article suggests steps for implementing a legal writing course or program, and offers a variety of resources for doing so.
Language is a rewarding field if you are in the prediction business. A reader who is fluent in English and who knows how academic papers are typically structured will readily come up with several possible guesses as to where the title of this section could have gone, had it not been cut short by the ellipsis. Indeed, in the more natural setting of spoken language, anticipatory processing is a must: performance of machine systems for speech interpretation depends critically on the availability of a good predictive model of how utterances unfold in time (Baker, 1975; Jelinek, 1990; Goodman, 2001), and there is strong evidence that prospective uncertainty affects human sentence processing too (Jurafsky, 2003; Hale, 2006; Levy, 2008). The human ability to predict where the current utterance is likely to be going is just another adaptation to the general pressure to anticipate the future (Hume, 1748; Dewey, 1910; Craik, 1943), be it in perception, thinking, or action, which is exerted on all cognitive systems by evolution (Dennett, 2003). Look-ahead in language is, however, special in one key respect: language is a medium for communication, and in communication the most interesting (that is, informative) parts of the utterance that the speaker is working through are those that cannot be predicted by the listener ahead of time.
Two of the premises of the target paper -- surface reconstruction as the goal of early vision, and inaccessibility of intermediate stages in the process presumably leading to such reconstruction -- are questioned and found wanting.
We compare our model of unsupervised learning of linguistic structures, ADIOS, to some recent work in computational linguistics and in grammar theory. Our approach resembles the Construction Grammar in its general philosophy (e.g., in its reliance on structural generalizations rather than on syntax projected by the lexicon, as in the current generative theories), and the Tree Adjoining Grammar in its computational characteristics (e.g., in its apparent affinity with Mildly Context Sensitive Languages). The representations learned by our algorithm are truly emergent from the (unannotated) corpus data, whereas those found in published works on cognitive and construction grammars and on TAGs are hand-tailored. Thus, our results complement and extend both the computational and the more linguistically oriented research into language acquisition. We conclude by suggesting how empirical and formal study of language can be best integrated.
The standard behavioral index for human consciousness is the ability to report events with accuracy. While this method is routinely used for scientific and medical applications in humans, it is not easy to generalize to other species. Brain evidence may lend itself more easily to comparative testing. Human consciousness involves widespread, relatively fast low-amplitude interactions in the thalamocortical core of the brain, driven by current tasks and conditions. These features have also been found in other mammals, which suggests that consciousness is a major biological adaptation in mammals. We suggest more than a dozen additional properties of human consciousness that may be used to test comparative predictions. Such homologies are necessarily more remote in non-mammals, which do not share the thalamocortical complex. However, as we learn more we may be able to make “deeper” predictions that apply to some birds, reptiles, large-brained invertebrates, and perhaps other species.
This essay is a discussion of Aquinas’s argument "from motion" to the existence of God as the argument is found in his ’Summa Contra Gentiles’. The aim of the essay is to suggest an approach to Aquinas’s argument that emphasizes its particular context, where "context" signifies not so much the assumed Aristotelian physics as Aquinas’s larger project of carrying out "the office of a wise man," namely, "to order things." Construing the relevant "ordering" as a making sense of things -- indeed of "the whole of things" -- the argument from motion is thus seen as part of an attempt to make sense of what, following Aristotle, can be called "the whole of life," that whole within which any one of us must live out his or her particular life. Several ideas found in Wittgenstein’s ’Tractatus Logico-Philosophicus’ are introduced in the conviction that they may help at least some of us to see the "strangeness" of the conclusion of Aquinas’s argument, the conclusion, namely, that the first principl…
Merker's approach allows the formulation of an evolutionary view of consciousness that abandons a dependence on structural homology – in this case, the presence of a cerebral cortex – in favor of functional concordance. In contrast to Merker, though, I maintain that the emergence of complex, dynamic interactions, such as those which occur between thalamus and cortex, was central to the appearance of consciousness. (Published Online May 1 2007).
"Each man calls barbarism whatever is not his own practice; for indeed it seems we have no other test of truth and reason than the example and pattern of the opinions and customs of the country we live in" (1.31.152, VS205).1 Remarks such as this from the essay "Of cannibals" have led commentators to argue that Montaigne subscribes to the theory of moral relativism, and that he takes "reason" to be a subjective, rather than an objective, standard for judgment.2 Yet later in that same essay, Montaigne condemns the cannibals' brutal treatment of their enemies (1.31.155, VS209) and concludes that "we may call these people barbarians, in respect to the rules of reason, but not in respect to ourselves, who surpass them…"
Intelligent systems are faced with the problem of securing a principled (ideally, veridical) relationship between the world and its internal representation. I propose a unified approach to visual representation, addressing the needs both of superordinate and basic-level categorization and of identification of specific instances of familiar categories. According to the proposed theory, a shape is represented by its similarity to a number of reference shapes, measured in a high-dimensional space of elementary features. This amounts to embedding the stimulus in a low-dimensional proximal shape space. That space turns out to support a representation of distal shape similarities that is veridical in the sense of Shepard's (1968) notion of second-order isomorphism (i.e., correspondence between distal and proximal similarities among shapes, rather than between distal shapes and their proximal representations). Furthermore, a general expression for similarity between two stimuli, based on comparisons to reference shapes, can be used to derive models of perceived similarity ranging from continuous, symmetric, and hierarchical, as in the multidimensional scaling models (Shepard, 1980), to discrete and non-hierarchical, as in the general contrast models (Tversky, 1977; Shepard and Arabie, 1979).
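The embedding-by-similarity idea admits a compact numerical sketch. Everything below is illustrative (random points stand in for shapes and reference shapes, and the dimensions are arbitrary): each stimulus is re-described by its distances to a few reference shapes, and second-order isomorphism can then be read off as the correlation between distal and proximal inter-stimulus distances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical distal "shapes": points in an elementary-feature space.
D = 10                                   # feature-space dimensionality
stimuli = rng.normal(size=(10, D))       # distal shapes
references = rng.normal(size=(6, D))     # reference shapes

# Proximal representation: the vector of distances to the reference
# shapes, i.e., an embedding into a low-dimensional proximal shape space.
proximal = np.linalg.norm(stimuli[:, None, :] - references[None, :, :], axis=2)

def pairwise(X):
    """All pairwise Euclidean distances among the rows of X."""
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)

# Second-order isomorphism: proximal inter-stimulus distances should
# mirror distal ones (a correspondence between similarities, not
# between shapes and their representations).
iu = np.triu_indices(len(stimuli), k=1)
r = np.corrcoef(pairwise(stimuli)[iu], pairwise(proximal)[iu])[0, 1]
print(f"distal vs. proximal similarity correlation: r = {r:.2f}")
```

Note that each proximal vector looks nothing like the shape it stands for; only the pattern of distances among vectors is expected to be veridical.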
It is proposed to conceive of representation as an emergent phenomenon that is supervenient on patterns of activity of coarsely tuned and highly redundant feature detectors. The computational underpinnings of the outlined concept of representation are (1) the properties of collections of overlapping graded receptive fields, as in the biological perceptual systems that exhibit hyperacuity-level performance, and (2) the sufficiency of a set of proximal distances between stimulus representations for the recovery of the corresponding distal contrasts between stimuli, as in multidimensional scaling. The present preliminary study appears to indicate that this concept of representation is computationally viable, and is compatible with psychological and neurobiological data.
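A minimal sketch of the first computational ingredient, coarse coding with overlapping graded receptive fields, under illustrative parameter values: a scalar stimulus is encoded by broadly tuned Gaussian units, yet a simple population decoder recovers it far more precisely than any single unit's tuning width, in the spirit of hyperacuity.

```python
import numpy as np

# A bank of broadly tuned, overlapping Gaussian "receptive fields".
# The centers, tuning width, and probe value are illustrative only.
centers = np.linspace(0.0, 10.0, 11)   # preferred stimuli of the units
sigma = 2.0                            # broad tuning width

def encode(x):
    """Population response of all units to a scalar stimulus x."""
    return np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))

def decode(activity):
    """Center-of-mass readout of the population response."""
    return float(np.sum(centers * activity) / np.sum(activity))

x = 4.3
err = abs(decode(encode(x)) - x)
print(f"decoding error: {err:.4f}")   # small relative to sigma = 2.0
```

The point of the exercise is that precision lives in the overlap pattern of coarse detectors, not in any single finely tuned unit.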
The distinction between receptive field and conceptual field is appealing and heuristically useful. Conceptually, it is more satisfactory to distinguish between information from the environment and from the brain. We emphasize here a selectionist view that considers information transmission within the brain as modulated by a stimulus, rather than information transmission from a stimulus as modulated by the context.
differentially rated pairwise similarity when confronted with two pairs of objects, each revolving in a separate window on a computer screen. Subject data were pooled using individually weighted MDS (ref. 11; in all the experiments, the solutions were consistent among subjects). In each trial, the subject had to select, of two pairs of shapes, the one consisting of the more similar shapes. The subjects were allowed to respond at will; most responded within 10 sec. Proximity (that is, perceived similarity) tables derived from the judgments were processed to verify their degree of transitivity (4% of all triplets were found intransitive) and then submitted to MDS. In the long-term memory (LTM) variant of this experiment, the subjects were first trained to associate a label (a three-letter nonsensical string, such as "BON" or "POM") with each object and then carried out the pairs-of-pairs comparison task from memory, prompted by the object labels rather than by the objects themselves. Six subjects participated in each of the two LTM experiments (Star and Triangle). The subjects were taught each shape in a separate session and had to discriminate between that shape and six similar nontargets from various viewpoints. Training continued until the recognition rate reached 90%, over a period of several days. The subjects were never exposed to more than one target in one session and were not told the ultimate purpose of the experiment. After 2 to 3 days of rest, they were tested with questions such as: "Is the BON more similar to POM than TOC to ROX?", for all pairs of pairs of stimuli. In the LTM experiments, 8% of the…
This book has two basic aims: to provide a clear and comprehensive account of the most prominent moral philosophies of ancient Greece and Rome, and to explain how for their adherents, these philosophies both motivated and constituted distinctive ways of life. Cooper succeeds admirably in achieving the first aim: he gives clear and concise accounts of the moral philosophies of Socrates, Aristotle, the Stoics, the Epicureans, the Pyrrhonists, and the Platonists. Each chapter explores not only the basic theories of the school in question, but also some lingering questions readers may have about those theories’ implications. Cooper aims for his book to be both accessible to readers with little formal…
Nearest-neighbor correlation-based similarity computation in the space of outputs of complex-type receptive fields can support robust recognition of 3D objects. Our experiments with four collections of objects resulted in mean recognition rates between 84% (for subordinate-level discrimination among 15 quadruped animal shapes) and 94% (for basic-level recognition of 20 everyday objects), over a 40 × 40 range of viewpoints, centered on a stored canonical view and related to it by rotations in depth. This result has interesting implications for the design of a front end to an artificial object recognition system, and for the understanding of the faculty of object recognition in primate vision.
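The recognition scheme reduces to a few lines. In this sketch, random vectors stand in for receptive-field outputs and additive noise stands in for viewpoint change; the data and names are hypothetical, but the mechanism (pick the stored view whose output vector correlates best with the probe) is the one described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stored canonical views, represented as (hypothetical) vectors of
# complex-type receptive-field outputs.
gallery = rng.normal(size=(20, 100))
labels = [f"object_{i}" for i in range(20)]

def recognize(probe):
    """Return the label of the stored view most correlated with probe."""
    corrs = [np.corrcoef(probe, g)[0, 1] for g in gallery]
    return labels[int(np.argmax(corrs))]

# A "novel view": the canonical view of object 3 plus viewpoint noise.
probe = gallery[3] + 0.3 * rng.normal(size=100)
print(recognize(probe))
```

Because correlation discards overall response amplitude, this classifier tolerates a fair amount of distortion before the nearest neighbor changes, which is the property the reported recognition rates rest on.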
The metacognitive stance of Smith et al. (2003) risks ignoring sensory consciousness. Although Smith et al. rightly caution against the tendency to preserve the uniqueness of the human mind at all costs, their reasoned stance is undermined by a selective association of consciousness with high-level cognitive operations. Neurobiological evidence may offer a more general, and hence more inclusive, basis for the systematic study of animal consciousness.
By what empirical means can a person determine whether he or she is presently awake or dreaming? Subjecting the experienced reality to a statistical test for bizarreness requires a set of baseline measurements. In a dream or in a simulation, those would be vulnerable to tampering by the same processes that give rise to the experienced reality, making the outcome of a reality test impossible to trust. Moreover, cryptographic defenses against tampering cannot be relied upon, because of the potentially unlimited reach of reality modification, which may range from the integrity of the verification keys to the declared outcome of the entire process. Although the rational course of action in the face of this double predicament is to take reality at face value, even the most revealing insight that a person may gain into the ultimate nature of reality (for instance, by attaining enlightenment) is ultimately unreliable, for the reasons just mentioned. However, to adhere to this principle, one has to be aware of it, which may not be possible in various states of altered cognitive function (e.g., dreaming). Thus, a subjectively enlightened person may still lack the one truly important piece of the puzzle concerning his or her existence.
We report a quantitative analysis of the cross-utterance coordination observed in child-directed language, where successive utterances often overlap in a manner that makes their constituent structure more prominent, and describe the application of a recently published unsupervised algorithm for grammar induction to the largest available corpus of such language, producing a grammar capable of accepting and generating novel well-formed sentences. We also introduce a new corpus-based method for assessing the precision and recall of an automatically acquired generative grammar without recourse to human judgment. The present work sets the stage for the eventual development of more powerful unsupervised algorithms for language acquisition, which would make use of the coordination structures present in natural child-directed speech.
SUMMARY. This paper examines four current theoretical approaches to the representation and recognition of visual objects: structural descriptions, geometric constraints, multidimensional feature spaces, and shape-space approximation. The strengths and the weaknesses of the theories are considered, with a special focus on their approach to categorization — a computationally challenging task which is not widely addressed in computer vision (where the stress is rather on the generalization of recognition across changes of viewpoint).
The distributional principle according to which morphemes that occur in identical contexts belong, in some sense, to the same category has been advanced as a means for extracting syntactic structures from corpus data. We extend this principle by applying it recursively, and by using mutual information for estimating category coherence. The resulting model learns, in an unsupervised fashion, highly structured, distributed representations of syntactic knowledge from corpora. It also exhibits promising behavior in tasks usually thought to require representations anchored in a grammar, such as systematicity.
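A deliberately simplistic sketch of the distributional principle itself (not of the full model, which applies the principle recursively and scores category coherence with mutual information): words that occur in identical left/right contexts in a toy corpus are grouped into one category. The corpus and the identical-context criterion are assumptions made for illustration.

```python
from collections import defaultdict

# Toy corpus; real corpora require statistical (not exact-match) criteria.
corpus = [
    "the cat ran", "the dog ran",
    "the cat slept", "the dog slept",
    "a bird sang",
]

# Record the (left neighbor, right neighbor) contexts of each word.
contexts = defaultdict(set)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        left = words[i - 1] if i > 0 else "<s>"
        right = words[i + 1] if i < len(words) - 1 else "</s>"
        contexts[w].add((left, right))

# Words sharing their full context set fall into the same category.
categories = defaultdict(list)
for w, ctx in contexts.items():
    categories[frozenset(ctx)].append(w)

for members in categories.values():
    if len(members) > 1:
        print(sorted(members))
```

On this corpus the procedure groups "cat" with "dog" and "ran" with "slept"; the article's extension replaces exact context identity with a graded, mutual-information-based coherence measure and reapplies the step to the induced categories.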
What insights does comparative biology provide for furthering scientific understanding of the evolution of dynamic coordination? Our discussions covered three major themes: (a) the fundamental unity in functional aspects of neurons, neural circuits, and neural computations across the animal kingdom; (b) brain organization–behavior relationships across animal taxa; and (c) the need for broadly comparative studies of the relationship of neural structures, neural functions, and behavioral coordination. Below we present an overview of neural machinery and computations that are shared by all nervous systems across the animal kingdom, and the related fact that there really are no “simple” relationships in coordination between nervous systems and the behavior they produce. The simplest relationships seen in living organisms are already fairly complex by computational standards. These realizations led us to think about ways that brain similarities and differences could be used to produce new insights into complex brain–behavior phenomena (including a critical appraisal of the roles of cortical and noncortical structures in mammalian behavior), and to think briefly about how future studies could best exploit comparative methods to better elucidate general principles underlying the neural mechanisms associated with behavioral coordination. In our view, it is unlikely that the intricacies interrelating neural and behavioral coordination are due to one particular manifestation (such as neural oscillation or the possession of a six-layered cortex). Instead of considering the human cortex to be the standard against which all things are measured (and thus something to crow about), both broad and focused comparative studies on behavioral similarities and differences will be necessary to elucidate the fundamental principles underlying dynamic coordination.
Shanahan’s eloquently argued version of the global workspace theory fits well into the emerging understanding of consciousness as a computational phenomenon. His disinclination toward metaphysics notwithstanding, Shanahan’s book can also be seen as supportive of a particular metaphysical stance on consciousness — the computational identity theory.
The publication in 1982 of David Marr’s Vision has delivered a singular boost and a course correction to the science of vision. Thirty years later, cognitive science is being transformed by the new ways of thinking about what it is that the brain computes, how it does that, and, most importantly, why cognition requires these computations and not others. This ongoing process still owes much of its impetus and direction to the sound methodology, engaging style, and unique voice of Marr’s Vision.