In the first section of the article, we examine some recent criticisms of the connectionist enterprise: first, that connectionist models are fundamentally behaviorist in nature (and, therefore, non-cognitive), and second, that connectionist models are fundamentally associationist in nature (and, therefore, cognitively weak). We argue that, for a limited class of connectionist models (feed-forward, pattern-associator models), the first criticism is unavoidable. With respect to the second criticism, we propose that connectionist models are fundamentally associationist but that this is appropriate for building models of human cognition. However, we do accept the point that there are cognitive capacities for which any purely associative model cannot provide a satisfactory account. The implication we draw from this is not that associationist models and mechanisms should be scrapped, but rather that they should be enhanced. In the next section of the article, we identify a set of connectionist approaches which are characterized by “active symbols” — recurrent circuits which are the basis of knowledge representation. We claim that such approaches avoid criticisms of behaviorism and are, in principle, capable of supporting full cognition. In the final section of the article, we speculate at some length about what we believe would be the characteristics of a fully realized active symbol system. This includes both potential problems and possible solutions (for example, mechanisms needed to control activity in a complex recurrent network) as well as the promise of such systems (in particular, the emergence of knowledge structures which would constitute genuine internal models).
The Turing Test, originally proposed as a simple operational definition of intelligence, has now been with us for exactly half a century. It is safe to say that no other single article in computer science, and few other articles in science in general, have generated so much discussion. The present article chronicles the comments and controversy surrounding Turing's classic article from its publication to the present. The changing perception of the Turing Test over the last fifty years has paralleled the changing attitudes in the scientific community towards artificial intelligence: from the unbridled optimism of the 1960s to the current realization of the immense difficulties that still lie ahead. I conclude with the prediction that the Turing Test will remain important, not only as a landmark in the history of the development of intelligent machines, but also as having real relevance to future generations of people living in a world in which the cognitive capacities of machines will be vastly greater than they are now.
High-level perception—the process of making sense of complex data at an abstract, conceptual level—is fundamental to human cognition. Through high-level perception, chaotic environmental stimuli are organized into the mental representations that are used throughout cognitive processing. Much work in traditional artificial intelligence has ignored the process of high-level perception, by starting with hand-coded representations. In this paper, we argue that this dismissal of perceptual processes leads to distorted models of human cognition. We examine some existing artificial-intelligence models—notably BACON, a model of scientific discovery, and the Structure-Mapping Engine, a model of analogical thought—and argue that these are flawed precisely because they downplay the role of high-level perception. Further, we argue that perceptual processes cannot be separated from other cognitive processes even in principle, and therefore that traditional artificial-intelligence models cannot be defended by supposing the existence of a “representation module” that supplies representations ready-made. Finally, we describe a model of high-level perception and analogical thought in which perceptual processing is integrated with analogical mapping, leading to the flexible build-up of representations appropriate to a given context.
Computational modeling has long been one of the traditional pillars of cognitive science. Unfortunately, the computer models of cognition being developed today have not kept up with the enormous changes that have taken place in computer technology and, especially, in human-computer interfaces. For all intents and purposes, modeling is still done today as it was 25, or even 35, years ago. Everyone still programs in his or her own favorite programming language, source code is rarely made available, accessibility of models to non-programming researchers is essentially non-existent, and, even for other modelers, the profusion of source code in a multitude of programming languages, written without programming guidelines, makes models almost impossible to access, check, explore, re-use, or develop further. It is high time to change this situation, especially since the tools are now readily available to do so. We propose that the modeling community adopt three simple guidelines that would ensure that computational models would be accessible to the broad range of researchers in cognitive science. We further emphasize the pivotal role that journal editors must play in making computational models accessible to readers of their journals.
Implicit Learning and Consciousness challenges conventional wisdom and presents the most up-to-date studies to define, quantify and test the predictions of the main models of implicit learning. The chapters include a variety of research from computer modeling, experimental psychology and neural imaging to the clinical data resulting from work with amnesics. The result is a topical book that provides an overview of the debate on implicit learning, and the various philosophical, psychological and neurological frameworks in which it can be placed. It will be of interest to undergraduates, postgraduates and the philosophical, psychological and modeling research community.
Aldo Leopold was a pragmatist in the vernacular sense of the word. Bryan G. Norton claims that Leopold was also heavily influenced by American Pragmatism, a formal school of philosophy. As evidence, Norton offers Leopold's misquotation of a definition of right (as truth) by political economist A.T. Hadley, who was an admirer of the philosophy of William James. A search of Leopold's digitised literary remains reveals no other evidence that Leopold was directly influenced by any actual American Pragmatist or by Pragmatism (although he may have been indirectly influenced by Pragmatism early in his career). A 1923 reference, by Leopold, to Hadley and Hadley's putative definition of truth, cited by Norton, is dripping with irony. Leopold, as he matured philosophically, regarded a profound cultural shift from anthropocentric dominionism and consumerism to an evolutionary-ecological worldview and an associated non-anthropocentric 'land ethic' to be necessary for successful and sustainable conservation. Hadley espoused a brutal form of Social Darwinism and his philosophy, as expressed in the book of Hadley's that Norton cites, is politically reactionary, militaristic and unconcerned with conservation. Leopold's mature philosophy and Hadley's – far from consonant, as Norton claims – are diametrically opposed.
In this thesis, I both analyze the phenomenology of vision from a geometrical point of view and develop certain connections between that geometrical analysis and the mind-body problem. In order to motivate the need for such an analysis, I first show, by means of a refutation of direct realism, that visual space is never identical with any of the physical objects being indirectly "seen" by constituting color arrangements in it. It thus follows that the geometry of visual space may be quite different from the Euclidean geometry of physical space, and I proceed to analyze that geometry. I argue that topologically, visual space is two-dimensional, inasmuch as regions of it are capable of being bounded by a line, such as the borders around the various objects constituted in it. An apparent paradox arises here, though, inasmuch as we possess phenomenal depth perception, which is particularly striking during binocular vision, and thus the question arises as to how the binocular depth cue of retinal disparity is registered phenomenally in a two-dimensional space. I resolve this apparent paradox by arguing that the internal metric structure of this space can be apprehended phenomenally and can serve as such a phenomenal depth cue. It is shown that holistically, this metric structure is elliptical, since, for example, marginal distortions in wide-angle photography are not present in visual space, and it is also noted that there is a tendency towards size constancy in visual perception. It is shown from these geometrical considerations that visual space possesses a variable curvature, with that curvature being determined by the physical depths of objects constituted in the space. Finally, I investigate what bearing the preceding geometrical conclusions have on the question of how events in visual space may be causally determined by neural events in the brain. The classical isomorphic theory of Gestalt psychology is then reinterpreted in light of my analysis of the geometry of visual space.
No computer that had not experienced the world as we humans had could pass a rigorously administered standard Turing Test. We show that the use of “subcognitive” questions allows the standard Turing Test to indirectly probe the human subcognitive associative concept network built up over a lifetime of experience with the world. Not only can this probing reveal differences in cognitive abilities, but crucially, even differences in physical aspects of the candidates can be detected. Consequently, it is unnecessary to propose even harder versions of the Test in which all physical and behavioral aspects of the two candidates would have to be indistinguishable before allowing the machine to pass the Test. Any machine that passed the “simpler” symbols-in/symbols-out test as originally proposed by Turing would be intelligent. The problem is that, even in its original form, the Turing Test is already too hard and too anthropocentric for any machine that was not a physical, social, and behavioral carbon copy of ourselves to actually pass it. Consequently, the Turing Test, even in its standard version, is not a reasonable test for general machine intelligence. There is no need for an even stronger version of the Test.
The term “ethics” covers a multitude of virtues and possibly some sins where ethical perspectives differ. Given the diversity of ethical philosophies, there is a question about what common ground can, or should, inform health research ethics. At a minimum it must be consistent with the law. Beyond that, ethics embraces a variety of possible approaches. This raises the question of what criteria are applied in determining the appropriate approach, and what standards by way of quality control are applied to its decisional application by ethics committees or other authorities exercising responsibility in this difficult area. The particular issue of ethical perspectives on the use of “big data” in medical research also raises complex issues for consideration.
David Marr's three-level analysis of computational cognition argues for three distinct levels of cognitive information processing—namely, the computational, representational, and implementational levels. But Marr's levels are—and were meant to be—descriptive, rather than interactive and dynamic. For this reason, we suggest that, had Marr been writing today, he might well have gone even farther in his analysis, including the emergence of structure—in particular, explicit structure at the conceptual level—from lower levels, and the effect of explicit emergent structures on the level that gave rise to them. The message is that today's cognitive scientists need not only to understand how emergent structures—in particular, explicit emergent structures at the cognitive level—develop but also to understand how they feed back on the sub-structures from which they emerged.
In a series of radio broadcasts, one of which is translated for the first time in this issue (pp. 21-34), Adorno and Becker claimed that modern education is profoundly inadequate. Their views on education draw heavily on Kant’s notion of Enlightenment as a process for the development of personal and social maturity and responsibility. As such, education cannot be mere training but must itself be a developmental process which takes into account not only social and political realities but also the complex psychodynamics involved in learning. However, Adorno and Becker arrive at a position that is close to self-contradictory, unable to solve the paradox inherent in the idea of an education that is at once authoritative and non-conformist. This might arise from their failure to reflect on the nature of their own dialogue, and it is suggested that friendship offers the social model of a dynamic relationship of the type they sought to articulate. Despite the fact that the discussion took place in 1969, in a climate of educational debate radically different from today’s, their work raises issues and poses questions of the profoundest importance 30 years on.
Relational priming is argued to be a deeply inadequate model of analogy-making because of its intrinsic inability to handle analogies in which the base and target domains share no common attributes and the mapped relations are different. Leech et al. rely on carefully handcrafted representations to allow their model to make a complex analogy, seemingly unaware of the debate on this issue fifteen years ago. Finally, they incorrectly assume the existence of fixed, context-independent relations between objects.
While we agree that the frame problem, as initially stated by McCarthy and Hayes (1969), is a problem that arises because of the use of representations, we do not accept the anti-representationalist position that the way around the problem is to eliminate representations. We believe that internal representations of the external world are a necessary, perhaps even defining, feature of higher cognition. We explore the notion of dynamically created context-dependent representations that emerge from a continual interaction between working memory, external input, and long-term memory. We claim that only this kind of representation, necessary for higher cognitive abilities such as counterfactualization, will allow the combinatorial explosion inherent in the frame problem to be avoided.
This commentary attempts to show that the inverted Turing Test could be simulated by a standard Turing Test and, most importantly, claims that a very simple program with no intelligence whatsoever could be written that would pass the inverted Turing Test. For this reason, the inverted Turing Test in its present form must be rejected.
As a conservation policy advocate and practitioner, Leopold was a pragmatist (in the vernacular sense of the word). He was not, however, a member of the school of philosophy known as American Pragmatism, nor was his environmental philosophy informed by any members of that school. Leopold's environmental philosophy was radically non-anthropocentric; he was an intellectual revolutionary and aspired to transform social values and institutions.
Direct versus Indirect Realism: A Neurophilosophical Debate on Consciousness brings together leading neuroscientists and philosophers to explain and defend their theories of consciousness. The book offers a one-of-a-kind look at the radically opposing theories concerning the nature of the objects of immediate perception: whether these are distal physical objects or phenomenal experiences in the conscious mind. Each side (neuroscientists and philosophers) offers accessible, comprehensive explanations of its point of view, with each side also providing a response to the other that offers a unique perspective on the opposing position. It is the only book available that combines thorough discussion of the arguments behind both direct and indirect realism in a single resource, and is required reading for neuroscientists, neurophilosophers, cognitive scientists and anyone interested in conscious perception and the mind-brain connection. The book combines discussion of both direct and indirect realism in a single, accessible resource; provides a thorough, well-rounded understanding not only of the opposing views of neuroscientists and philosophers on the nature of conscious perception but also of why the opposition persists; and offers a unique "dialog" approach, with neuroscientists and philosophers providing responses and rebuttals to one another's contributions.
Starting with the hypothesis that analogical reasoning consists of a search of semantic space, we used eye-tracking to study the time course of information integration in adults in various formats of analogies. The two main questions we asked were whether adults would follow the same search strategies for different types of analogical problems and levels of complexity and how they would adapt their search to the difficulty of the task. We compared these results to predictions from the literature. Machine learning techniques, in particular support vector machines (SVMs), processed the data to find out which sets of transitions best predicted the output of a trial (error or correct) or the type of analogy (simple or complex). Results revealed common search patterns, but with local adaptations to the specifics of each type of problem, both in terms of looking-time durations and the number and types of saccades. In general, participants organized their search around source-domain relations that they generalized to the target domain. However, somewhat surprisingly, over the course of the entire trial, their search included not only semantically related distractors but also unrelated distractors, depending on the difficulty of the trial. An SVM analysis revealed which types of transitions are able to discriminate between analogy tasks. We discuss these results in light of existing models of analogical reasoning.
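The abstract does not include the analysis pipeline itself; the following is a minimal sketch, in Python with scikit-learn, of the kind of SVM classification it describes. The features (counts of gaze transitions between areas of interest), the simulated data, and the cross-validation setup are illustrative assumptions rather than the authors' actual procedure.

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical data: each row holds counts of gaze transitions between
# areas of interest (e.g. source relation -> target relation, target ->
# distractor) for one trial; labels mark the trial as correct (1) or error (0).
rng = np.random.default_rng(0)
n_trials, n_transition_types = 120, 9
X = rng.poisson(lam=3.0, size=(n_trials, n_transition_types)).astype(float)
y = rng.integers(0, 2, size=n_trials)

# Linear SVM on standardized transition counts; accuracy estimated by 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
print("Mean cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# After fitting, the weights of the linear SVM indicate which transition
# types contribute most to discriminating correct from error trials.
clf.fit(X, y)
print("Transition weights:", clf.named_steps["svc"].coef_.ravel())

With real trial data in place of the random placeholders, the same pipeline could be run once with correct/error labels and once with simple/complex labels, corresponding to the two classification questions mentioned in the abstract.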
In this paper I contrast the geometric structure of phenomenal visual space with that of photographic images. I argue that topologically both are two-dimensional and that both involve central projections of the scenes being depicted. However, I also argue that the metric structures of the spaces differ inasmuch as two types of “apparent distortions”—marginal distortions in wide-angle photography and close-up distortions—which occur in photography do not occur in the corresponding visual experiences. In particular, I argue that the absence of marginal distortions in vision is evidence for a holistic metric of visual space that is spherical, and that the absence of close-up distortions shows that the local metric structure possesses a dynamic variable curvature which is dependent upon the distance of the objects being viewed at a given time.
Natura non facit saltum (Nature does not make leaps) was the lovely aphorism on which Darwin based his work on evolution. It applies as much to the formation of mental representations as to the formation of species, and therein lies our major disagreement with the SOC model proposed by Perruchet & Vinter.
Two categorization arguments pose particular problems for localist connectionist models. The internal representations of localist networks do not reflect the variability within categories in the environment, whereas networks with distributed internal representations do reflect this essential feature of categories. We provide a real biological example of perceptual categorization in the monkey that seems to require population coding (i.e., distributed internal representations).
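As a purely illustrative aside (not the monkey data or the networks under discussion), the contrast between a localist category unit and a distributed population code can be sketched in a few lines of Python; the tuning-curve parameters and the category boundary below are assumptions made only for the example.

import numpy as np

# A bank of units with graded tuning to a one-dimensional stimulus feature
# (e.g. orientation in degrees): a distributed, population-coded representation.
preferred = np.linspace(0.0, 180.0, 12)   # preferred values of 12 units
sigma = 25.0                              # tuning width

def population_response(stimulus):
    """Graded activity across all units: the pattern differs for every exemplar."""
    return np.exp(-0.5 * ((stimulus - preferred) / sigma) ** 2)

def localist_response(stimulus, boundary=90.0):
    """A single all-or-none category unit: identical output for every category member."""
    return int(stimulus < boundary)

# Three exemplars of the same category: the localist unit cannot reflect the
# within-category variability, whereas the population pattern does.
for s in (40.0, 60.0, 80.0):
    pop = population_response(s)
    print(f"stimulus={s:5.1f}  localist={localist_response(s)}  "
          f"population pattern={np.round(pop, 2)}")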
The fixed-feature viewpoint Schyns et al. are opposing is not a widely held theoretical position but rather a working assumption of cognitive psychologists – and thus a straw man. We accept their demonstration of new-feature acquisition, but question its ubiquity in category learning. We suggest that new-feature learning (at least in adults) is rarer and more difficult than the authors suggest.
Green's target article is an attack on most current connectionist models of cognition. Our commentary will suggest that there is an essential component missing in his discussion of modeling, namely, the idea that the appropriate level of the model needs to be specified. We will further suggest that the precise form of connectionist networks will fall out as ever more detailed constraints are placed on their function.
Taking to heart Massaro's [(1988) Some criticisms of connectionist models of human performance, Journal of Memory and Language, 27, 213-234] criticism that multi-layer perceptrons are not appropriate for modeling human cognition because they are too powerful (i.e. they can simulate just about anything, which gives them little explanatory power), Regier develops the notion of constrained connectionism. The model that he discusses is a distributed network, but with numerous constraints added that are (more or less) motivated by real psychophysical and neurophysiological constraints. His model learns static prepositions of spatial location such as in, above, to the left of, to the right of, under, etc., as well as dynamic prepositions such as through and the Russian iz-pod, meaning out from under. The network learns these prepositions by viewing a number of examples of them. Very importantly, this book tackles, and goes a long way towards resolving, the problem of the lack of negative exemplars (i.e. we are only very rarely told when something is not above something else), which should lead to overgeneralization but does not. This book is a significant contribution to the connectionist literature.
This book is an excellent manifesto for future work in child development. It presents a multidisciplinary approach that clearly demonstrates the value of integrating modeling, neuroscience, and behavior to explore the mechanisms underlying development and to show how internal context-dependent representations arise and are modified during development. Its only major flaw is to have given short shrift to the study of the role of genetics in development.
What new implications does the dynamical hypothesis have for cognitive science? The short answer is: none. The Behavioral and Brain Sciences target article, “The dynamical hypothesis in cognitive science” by Tim van Gelder, is basically an attack on traditional symbolic AI and differs very little from prior connectionist criticisms of it. For the past ten years, the connectionist community has been well aware of the necessity of using (and understanding) dynamically evolving, recurrent network models of cognition.