According to classical arguments, language learning is both facilitated and constrained by cognitive biases. These biases are reflected in linguistic typology—the distribution of linguistic patterns across the world's languages—and can be probed with artificial grammar experiments on child and adult learners. Beginning with a widely successful approach to typology (Optimality Theory), and adapting techniques from computational approaches to statistical learning, we develop a Bayesian model of cognitive biases and show that it accounts for the detailed pattern of results of artificial grammar experiments on noun-phrase word order (Culbertson, Smolensky, & Legendre, 2012). Our proposal has several novel properties that distinguish it from prior work in the domains of linguistic theory, computational cognitive science, and machine learning. This study illustrates how ideas from these domains can be synthesized into a model of language learning in which biases range in strength from hard (absolute) to soft (statistical), and in which language-specific and domain-general biases combine to account for data from the macro-level scale of typological distribution to the micro-level scale of learning by individuals.
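As a purely illustrative aside, the kind of Bayesian bias model described above can be sketched as a Dirichlet-multinomial update over the four noun-phrase pattern types used in such artificial-grammar experiments. The pattern labels, prior weights, and observation counts below are hypothetical, chosen only to show how a soft (statistical) bias against one pattern combines with training data; they are not the values or the full model of the study.

```python
import numpy as np

# Four noun-phrase pattern types from the artificial-grammar paradigm;
# the labels and their ordering are illustrative, not the authors' coding.
patterns = ["Adj-N & Num-N", "N-Adj & N-Num", "N-Adj & Num-N", "Adj-N & N-Num"]

# Hypothetical Dirichlet prior encoding a soft bias against the last pattern.
alpha_prior = np.array([4.0, 4.0, 2.0, 0.5])

# Hypothetical counts of training utterances consistent with each pattern.
counts = np.array([3, 3, 3, 3])

# Conjugate update: posterior mean probability assigned to each pattern.
alpha_post = alpha_prior + counts
posterior_mean = alpha_post / alpha_post.sum()

for pattern, prob in zip(patterns, posterior_mean):
    print(f"{pattern}: {prob:.2f}")
```

Even with identical training counts, the posterior retains the prior's asymmetry, which is the sense in which a soft bias shapes what learners infer from the same evidence.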
One of the central issues in linguistics is whether or not language should be considered a self-contained, autonomous formal system, essentially reducible to the syntactic algorithms of meaning construction (as Chomskyan grammar would have it), or a holistic-functional system serving the means of expressing pre-organized intentional contents and thus accessible with respect to features and structures pertaining to other cognitive subsystems or to human experience as such (as Cognitive Linguistics would have it). The latter claim depends critically on the existence of principles governing the composition of semantic contents. Husserl's Fourth Logical Investigation is well known as a genuine precursor for Chomskyan grammar. However, I will establish the heterogeneous character of the Investigation and show that the whole first part of it is devoted to the exposition of a semantic combinatorial system cognate to the one elaborated within Cognitive Linguistics. I will thus show how theoretical results in linguistics may serve to corroborate and shed light on those parts of Husserl's Fourth Investigation that have traditionally been dismissed as vague or simply ignored.
While the PREDICATE(x) structure requires close coordination of subject and predicate, both represented in consciousness, the cognitive (ventral) and sensorimotor (dorsal) pathways operate in parallel. Sensorimotor information is unconscious and can contradict cognitive spatial information. A more likely origin of linguistic grammar lies in the mammalian action planning process. Neurological machinery that evolved for planning action sequences comes to be applied to planning communicatory sequences.
The psycholinguistic literature has identified two syntactic adaptation effects in language production: rapidly decaying short-term priming and long-lasting adaptation. To explain both effects, we present an ACT-R model of syntactic priming based on a wide-coverage, lexicalized syntactic theory that explains priming as facilitation of lexical access. In this model, two well-established ACT-R mechanisms, base-level learning and spreading activation, account for long-term adaptation and short-term priming, respectively. Our model simulates incremental language production and in a series of modeling studies, we show that it accounts for (a) the inverse frequency interaction; (b) the absence of a decay in long-term priming; and (c) the cumulativity of long-term adaptation. The model also explains the lexical boost effect and the fact that it only applies to short-term priming. We also present corpus data that verify a prediction of the model, that is, that the lexical boost affects all lexical material, rather than just heads.
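For readers unfamiliar with the two ACT-R mechanisms named above, the standard equations are base-level learning, B_i = ln(Σ_j t_j^(−d)), and spreading activation from the current context, A_i = B_i + Σ_j W_j S_ji. The sketch below implements just these two textbook equations; the parameter values and the example call are illustrative defaults, not the settings of the model in the paper.

```python
import math

def base_level_activation(lags, decay=0.5):
    """Base-level learning: B_i = ln(sum_j t_j ** -d), where t_j is the time
    (in seconds) since the j-th use of the chunk and d is the decay rate."""
    return math.log(sum(t ** -decay for t in lags))

def activation(lags, weights, strengths, decay=0.5):
    """Total activation A_i = B_i + sum_j W_j * S_ji: base level plus
    spreading activation from sources currently in the context."""
    spread = sum(w * s for w, s in zip(weights, strengths))
    return base_level_activation(lags, decay) + spread

# Illustrative call: a chunk used 2, 60, and 600 seconds ago, receiving
# spreading activation from a single lexical source in the context.
print(activation([2.0, 60.0, 600.0], weights=[1.0], strengths=[1.5]))
```

Recent uses dominate the base-level term, which is why the same pair of mechanisms can yield both rapidly decaying short-term priming and slowly accumulating long-term adaptation.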
We explore the formal foundations of recent studies comparing aural pattern recognition capabilities of populations of human and non-human animals. To date, these experiments have focused on the boundary between the Regular and Context-Free stringsets. We argue that experiments directed at distinguishing capabilities with respect to the Subregular Hierarchy, which subdivides the class of Regular stringsets, are likely to provide better evidence about the distinctions between the cognitive mechanisms of humans and those of other species. Moreover, the classes of the Subregular Hierarchy have the advantage of fully abstract descriptive (model-theoretic) characterizations in addition to characterizations in more familiar grammar- and automata-theoretic terms. Because the descriptive characterizations make no assumptions about implementation, they provide a sound basis for drawing conclusions about potential cognitive mechanisms from the experimental results. We review the Subregular Hierarchy and provide a concrete set of principles for the design and interpretation of these experiments.
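As one concrete example of the subregular classes mentioned above, the Strictly 2-Local (SL-2) stringsets are defined by a finite set of permitted adjacent symbol pairs, with word-boundary markers added. The alphabet and grammar in the sketch below are invented for illustration and are not taken from the paper.

```python
def sl2_accepts(string, permitted_bigrams):
    """Strictly 2-Local recognition: accept iff every adjacent pair of
    symbols, with boundary markers '#' added, is a permitted 2-factor."""
    padded = "#" + string + "#"
    return all(padded[i:i + 2] in permitted_bigrams for i in range(len(padded) - 1))

# Hypothetical grammar over {a, b}: 'b' may never immediately follow 'b'.
grammar = {"#a", "#b", "aa", "ab", "ba", "a#", "b#"}

print(sl2_accepts("abab", grammar))  # True
print(sl2_accepts("abba", grammar))  # False: contains the forbidden factor 'bb'
```

Because membership depends only on local factors, SL-2 patterns can in principle be detected by very limited mechanisms, which is what makes the subregular boundaries informative about cognitive capacities.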
Semantic Leaps explores how people combine knowledge from different domains in order to understand and express new ideas. Concentrating on dynamic aspects of on-line meaning construction, Coulson identifies two related sets of processes: frame-shifting and conceptual blending. Frame-shifting is semantic reanalysis in which existing elements in the contextual representation are reorganized into a new frame. Conceptual blending is a set of cognitive operations for combining partial cognitive models. By addressing linguistic phenomena often ignored in traditional meaning research, Coulson explains how processes of cross-domain mapping, frame-shifting, and conceptual blending enhance the explanatory adequacy of traditional frame-based systems for natural language processing. The focus is on how the constructive processes speakers use to assemble, link, and adapt simple cognitive models underlie a broad range of productive language behavior.
What is common to all languages is notation, so Universal Grammar can be understood as a system of notational types. Given that infants acquire language, it can be assumed to arise from some a priori mental structure. Viewing language as having the two layers of calculus and protocol, we can set aside the communicative habits of speakers. Accordingly, an analysis of notation results in the three types of Identifier, Modifier and Connective. Modifiers are further interpreted as Quantifiers and Qualifiers. The resulting four notational types constitute the categories of Universal Grammar. Its ontology is argued to consist in the underlying cognitive schema of Essence, Quantity, Quality and Relation. The four categories of Universal Grammar are structured as polysemous fields and are each constituted as a radial network centred on some root concept which, however, need not be lexicalized. The branches spread out along troponymic vectors and together map out all possible lexemes. The notational typology of Universal Grammar is applied in a linguistic analysis of the ‘parts of speech’ using the English language. The analysis constitutes a ‘proof of concept’ in (1) showing how the schema of Universal Grammar is capable of classifying the so-called ‘parts of speech’, (2) presenting a coherent analysis of the verb, and (3) showing how the underlying cognitive schema allows for a sub-classification of the auxiliaries.
The Extent of the Literal develops a strikingly new approach to metaphor and polysemy in their relation to the conceptual structure. In a straightforward narrative style, the author argues for a reconsideration of standard assumptions concerning the notion of literal meaning and its relation to conceptual structure. She draws on neurophysiological and psychological experimental data in support of a view in which polysemy belongs to the level of words but not to the level of concepts, and thus challenges some seminal work on metaphor and polysemy within cognitive linguistics, lexical semantics and analytical philosophy.
Mental Spaces is the classic introduction to the study of mental spaces and conceptual projection, as revealed through the structure and use of language. It examines in detail the dynamic construction of connected domains as discourse unfolds. The discovery of mental space organization has modified our conception of language and thought: powerful and uniform accounts of superficially disparate phenomena have become available in the areas of reference, presupposition projection, counterfactual and analogical reasoning, metaphor and metonymy, and time and aspect in discourse. The present work lays the foundation for this research. It uncovers simple and general principles that lie behind the awesome complexity of everyday logic.
Prototype theory makes a crucial distinction between central and peripheral senses of words. Geeraerts explores the implications of this model for a theory of semantic change, in the first full-scale treatment of the impact of the most recent developments in lexicological theory on the study of meaning change. He identifies structural features of the development of word meanings which follow from a prototype-theoretical model of semantic structure, and incorporates these diachronic prototypicality effects into a theory of meaning change.
Is the science of moral cognition usefully modeled on aspects of Universal Grammar? Are human beings born with an innate "moral grammar" that causes them to analyze human action in terms of its moral structure, with just as little awareness as they analyze human speech in terms of its grammatical structure? Questions like these have been at the forefront of moral psychology ever since John Mikhail revived them in his influential work on the linguistic analogy and its implications for jurisprudence and moral theory. In this seminal book, Mikhail offers a careful and sustained analysis of the moral grammar hypothesis, showing how some of John Rawls' original ideas about the linguistic analogy, together with famous thought experiments like the trolley problem, can be used to improve our understanding of moral and legal judgment. The book will be of interest to philosophers, cognitive scientists, legal scholars, and other researchers in the interdisciplinary field of moral psychology.
Recent research suggests that language evolution is a process of cultural change, in which linguistic structures are shaped through repeated cycles of learning and use by domain-general mechanisms. This paper draws out the implications of this viewpoint for understanding the problem of language acquisition, which is cast in a new, and much more tractable, form. In essence, the child faces a problem of induction, where the objective is to coordinate with others (C-induction), rather than to model the structure of the natural world (N-induction). We argue that, of the two, C-induction is dramatically easier. More broadly, we argue that understanding the acquisition of any cultural form, whether linguistic or otherwise, during development, requires considering the corresponding question of how that cultural form arose through processes of cultural evolution. This perspective helps resolve the “logical” problem of language acquisition and has far-reaching implications for evolutionary psychology.
This paper explains how mathematical computation can be constructed from weaker recursive patterns typical of natural languages. A thought experiment is used to describe the formalization of computational rules, or arithmetical axioms, using only orally-based natural language capabilities, and motivated by two accomplishments of ancient Indian mathematics and linguistics. One accomplishment is the expression of positional value using versified Sanskrit number words in addition to orthodox inscribed numerals. The second is Pāṇini’s invention, around the fifth century BCE, of a formal grammar for spoken Sanskrit, expressed in oral verse extending ordinary Sanskrit, and using recursive methods rediscovered in the twentieth century. The Sanskrit positional number compounds and Pāṇini’s formal system are construed as linguistic grammaticalizations relying on tacit cognitive models of symbolic form. The thought experiment shows that universal computation can be constructed from natural language structure and skills, and shows why intentional capabilities needed for language use play a role in computation across all media. The evolution of writing and positional number systems in Mesopotamia is used to transfer the thought experiment of “oral arithmetic” to inscribed computation. The thought experiment and historical evidence combine to show how and why mathematical computation is a cognitive technology extending generic symbolic skills associated with language structure, usage, and change.
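The arithmetic core of positional value referred to above can be stated as a one-line recurrence (Horner's rule): a numeral d_n … d_1 d_0 in base b denotes Σ_i d_i·b^i. The digits and bases in the sketch below are illustrative only, not examples from the paper.

```python
def positional_value(digits, base=10):
    """Evaluate a positional numeral via Horner's rule:
    value = sum_i digit_i * base ** i, digits given most-significant first."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

# Illustrative: the digit sequence 1, 0, 8 in base 10 denotes 108.
print(positional_value([1, 0, 8]))          # 108
print(positional_value([1, 0, 1], base=2))  # binary 101 -> 5
```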
Much of what we know and love about music is based on implicitly acquired mental representations of musical pitches and the relationships between them. While previous studies have shown that these mental representations of music can be acquired rapidly and can influence preference, it is still unclear which aspects of music influence learning and preference formation. This article reports two experiments that use an artificial musical system to examine two questions: (1) which aspects of music matter most for learning, and (2) which aspects of music matter most for preference formation. Two aspects of music are tested: melody and harmony. In Experiment 1 we tested the learning and liking of a new musical system that is manipulated melodically so that only some of the possible conditional probabilities between successive notes are presented. In Experiment 2 we administered the same tests for learning and liking, but we used a musical system that is manipulated harmonically to eliminate the property of harmonic whole-integer ratios between pitches. Results show that disrupting melody (Experiment 1) disabled the learning of the music without disrupting preference formation, whereas disrupting harmony (Experiment 2) did not affect learning and memory but did disrupt preference formation. Results point to a possible dissociation between learning and preference in musical knowledge.
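The "conditional probabilities between successive notes" manipulated in Experiment 1 are, in the simplest case, first-order transition probabilities. Below is a minimal sketch of how such probabilities are estimated from a note sequence; the toy melody and note labels are invented for the example and are not the stimulus material of the study.

```python
from collections import Counter, defaultdict

def transition_probabilities(notes):
    """Estimate P(next note | current note) from a sequence of note labels."""
    pair_counts = Counter(zip(notes, notes[1:]))
    totals = defaultdict(int)
    for (current, _), n in pair_counts.items():
        totals[current] += n
    return {(cur, nxt): n / totals[cur] for (cur, nxt), n in pair_counts.items()}

# Toy melody over three note labels.
melody = ["C", "E", "G", "E", "C", "G", "E", "C"]
for pair, p in sorted(transition_probabilities(melody).items()):
    print(pair, round(p, 2))
```

Restricting which transitions appear in training, as the melodic manipulation does, amounts to zeroing out some cells of this table so that learners are exposed to only part of the system's statistics.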
Research with computer systems and musical grammars into improvisation as found in the tabla drumming system of North India has indicated that certain musical sentences comprise (a) variable prefixes, and (b) fixed suffixes (or cadences) identical with those of their original rhythmic themes. It was assumed that the cadence functioned as a kind of target in linear musical space, and yet experiments showed that defining what exactly constituted the cadence was problematic. This paper addresses the problem of the status of cadential patterns, and demonstrates the need for a better understanding and formalization of ambiguity in musico-cognitive processing. It would appear from the discussion that the cadence is not a discrete unit in itself, but just part of an ever-present underlying framework comprising the entire original rhythmic theme. Improvisations (variations), it is suggested, merely break away from and rejoin this framework at important structural points. This endorses the theory of simultaneity. However, the general cognitive implications are still unclear, and further research is required to explore musical ambiguity and the interaction of musical, linguistic, and spatio-motor grammars.
Since the seventies, it has been customary to assume that intentionality is independent of consciousness. Recently, a number of philosophers have rejected this assumption, claiming that intentionality is closely tied to consciousness, inasmuch as non-conscious intentionality in some sense depends upon conscious intentionality. Within this alternative framework, the question arises of how to account for unconscious intentionality, and different authors have offered different accounts. In this paper, I compare and contrast four possible accounts of unconscious intentionality, which I call potentialism, inferentialism, eliminativism, and interpretivism. The first three are the leading accounts in the existing literature, while the fourth is my own proposal, which I argue to be superior. I then argue that an upshot of interpretivism is that all unconscious intentionality is ultimately grounded in a specific kind of cognitive phenomenology.
This article clarifies three principles that should guide the development of any cognitive ontology. First, that an adequate cognitive ontology depends essentially on an adequate task ontology; second, that the goal of developing a cognitive ontology is independent of the goal of finding neural implementations of the processes referred to in the ontology; and third, that cognitive ontologies are neutral regarding the metaphysical relationship between cognitive and neural processes.
Cognitive systems research has predominantly been guided by the historical distinction between emotion and cognition, and has focused its efforts on modelling the “cognitive” aspects of behaviour. While this initially meant modelling only the control system of cognitive creatures, with the advent of “embodied” cognitive science this expanded to also modelling the interactions between the control system and the external environment. What did not seem to change with this embodiment revolution, however, was the attitude towards affect and emotion in cognitive science. This paper argues that cognitive systems research is now beginning to integrate these aspects of natural cognitive systems into cognitive science proper, not in virtue of traditional “embodied cognitive science”, which focuses predominantly on the body’s gross morphology, but rather in virtue of research into the interoceptive, organismic basis of natural cognitive systems.
This article tries to create a bridge of understanding between cognitive scientists and phenomenologists who work on attention. In light of a phenomenology of attention and current psychological and neuropsychological literature on attention, I translate and interpret into phenomenological terms 20 key cognitive science concepts as examined in the laboratory and used in leading journals. As a preface to the lexicon, I outline a phenomenology of attention, especially as a dynamic three-part structure, which I have freely amended from the work of phenomenologist and Gestalt philosopher Aron Gurwitsch (1901–1973). As a conclusion, I discuss the nature of subjectivity in attention and attention research, and whether attention might be the same as consciousness.
This article deals with the cognitive relationship between a speaker and her internal grammar. In particular, it takes issue with the view that such a relationship is one of belief or knowledge (I call this view the ‘Propositional Attitude View’, or PAV). I first argue that PAV entails that all ordinary speakers (tacitly) possess technical concepts belonging to syntactic theory, and second, that most ordinary speakers do not in fact possess such concepts. Thus, it is concluded that speakers do not literally ‘know’ or ‘believe’ much of the contents of their grammars, and moreover, that these contents can only be attributed at a subpersonal level.
In Book I, Part I, Section VII of the Treatise, Hume sets out to settle, once and for all, the early modern controversy over abstract ideas. In order to do so, he tries to accomplish two tasks: (1) he attempts to defend an exemplar-based theory of general language and thought, and (2) he sets out to refute the rival abstraction-based account. This paper examines the successes and failures of these two projects. I argue that Hume manages to articulate a plausible theory of general ideas; indeed, a version of his account has defenders in contemporary cognitive science. But Hume fails to refute the abstraction-based account, and as a result, the early modern controversy ends in a stalemate, with both sides able to explain how we manage to speak and think in general terms. Although Hume fails to settle the controversy, he nevertheless advances it to a point from which we have yet to progress: the contemporary debate over abstract ideas in cognitive science has stalled on precisely this point.
Cognitive science is, more than anything else, a pursuit of cognitive mechanisms. To make headway towards a mechanistic account of any particular cognitive phenomenon, a researcher must choose among the many architectures available to guide and constrain the account. It is thus fitting that this volume on contemporary debates in cognitive science includes two issues of architecture, each articulated in the 1980s but still unresolved:
• Just how modular is the mind? (section 1) – a debate initially pitting encapsulated mechanisms (Fodorian modules that feed their ultimate outputs to a nonmodular central cognition) against highly interactive ones (e.g., connectionist networks that continuously feed streams of output to one another).
• Does the mind process language-like representations according to formal rules? (this section) – a debate initially pitting symbolic architectures (such as Chomsky’s generative grammar or Fodor’s language of thought) against less language-like architectures (such as connectionist or dynamical ones).
Our project here is to consider the second issue within the broader context of where cognitive science has been and where it is headed. The notion that cognition in general—not just language processing—involves rules operating on language-like representations actually predates cognitive science. In traditional philosophy of mind, mental life is construed as involving propositional attitudes—that is, such attitudes towards propositions as believing, fearing, and desiring that they be true—and logical inferences from them. On this view, if a person desires that a proposition be true and believes that if she performs a certain action it will become true, she will make the inference and (absent any overriding consideration) perform the action.
The past 25 years have witnessed an increasing awareness of the importance of cognitive control in the regulation of complex behavior. It now sits alongside attention, memory, language, and thinking as a distinct domain within cognitive psychology. At the same time it permeates each of these sibling domains. This introduction reviews recent work on cognitive control in an attempt to provide a context for the fundamental question addressed within this topic: Is cognitive control to be understood as resulting from the interaction of multiple distinct control processes, or are the phenomena of cognitive control emergent?
Philosophers of mind and cognitive scientists have recently taken renewed interest in cognitive penetration, in particular, in the cognitive penetration of perceptual experience. The question is whether cognitive states like belief influence perceptual experience in some important way. Since the possible phenomenon is an empirical one, the strategy for analysis has, predictably, proceeded as follows: define the phenomenon and then, definition in hand, interpret various psychological data. However, different theorists offer different and apparently inconsistent definitions. And so in addition to the usual problems (e.g., definitions being challenged by counterexample), an important result is that different theorists apply their definitions and accordingly get conflicting answers to the question “Is this a genuine case of cognitive penetration?”. This hurdle to philosophical and scientific progress can be remedied, I argue, by returning attention to the alleged consequences of the possible phenomenon. There are three: theory-ladenness of perception in contexts of scientific theory choice, a threat to the general epistemic role of perception, and implications for mental architecture. Any attempt to characterize or define, and then empirically test for, cognitive penetration should be constrained by these consequences. This is a method for interpreting and acquiring experimental data in a way that is agreeable to both sides of the cognitive penetration debate. Put crudely, the question shifts to “Is this a cognitive-perceptual relation that results in (or constitutes) one or more of the relevant consequences?” In answering this question, relative to various data, it may turn out that there is no single unified phenomenon of cognitive penetration. But this should be no matter, since it is the consequences that are of central importance to philosophers and scientists alike.
Could a computer be programmed to make moral judgments about cases of intentional harm and unreasonable risk that match those judgments people already make intuitively? If the human moral sense is an unconscious computational mechanism of some sort, as many cognitive scientists have suggested, then the answer should be yes. So too if the search for reflective equilibrium is a sound enterprise, since achieving this state of affairs requires demarcating a set of considered judgments, stating them as explanandum sentences, and formulating a set of algorithms from which they can be derived. The same is true for theories that emphasize the role of emotions or heuristics in moral cognition, since they ultimately depend on intuitive appraisals of the stimulus that accomplish essentially the same tasks. Drawing on deontic logic, action theory, moral philosophy, and the common law of tort, particularly Terry's five-variable calculus of risk, I outline a formal model of moral grammar and intuitive jurisprudence along the foregoing lines, which defines the abstract properties of the relevant mapping and demonstrates their descriptive adequacy with respect to a range of common moral intuitions, which experimental studies have suggested may be universal or nearly so. Framing effects, protected values, and implications for the neuroscience of moral intuition are also discussed.
The goal of this study is to reintegrate the theory of generative grammar into the cognitive sciences. Generative grammar was right to focus on the child's acquisition of language as its central problem, leading to the hypothesis of an innate Universal Grammar. However, generative grammar was mistaken in assuming that the syntactic component is the sole source of combinatoriality, and that everything else is “interpretive.” The proper approach is a parallel architecture, in which phonology, syntax, and semantics are autonomous generative systems linked by interface components. The parallel architecture leads to an integration within linguistics, and to a far better integration with the rest of cognitive neuroscience. It fits naturally into the larger architecture of the mind/brain and permits a properly mentalistic theory of semantics. It results in a view of linguistic performance in which the rules of grammar are directly involved in processing. Finally, it leads to a natural account of the incremental evolution of the language capacity. Key Words: evolution of language; generative grammar; parallel architecture; semantics; syntax.
In this chapter we consider unsupervised learning from two perspectives. First, we briefly look at its advantages and disadvantages as an engineering technique applied to large corpora in natural language processing. While supervised learning generally achieves greater accuracy with less data, unsupervised learning offers significant savings in the intensive labour required for annotating text. Second, we discuss the possible relevance of unsupervised learning to debates on the cognitive basis of human language acquisition. In this context we explore the implications of recent work on grammar induction for poverty of stimulus arguments that purport to motivate a strong bias model of language learning, commonly formulated as a theory of Universal Grammar (UG). We examine the second issue both as a problem in computational learning theory, and with reference to empirical work on unsupervised Machine Learning (ML) of syntactic structure. We compare two models of learning theory and the place of unsupervised learning within each of them. Looking at recent work on part-of-speech tagging and the recognition of syntactic structure, we see how far unsupervised ML methods have come in acquiring different kinds of grammatical knowledge from raw text.
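As a toy illustration of the distributional idea behind the unsupervised part-of-speech work discussed above, the sketch below represents each word type by counts of its left and right neighbours and then groups those vectors by similarity. The corpus, the cluster count, and the use of k-means are assumptions made only for this example; they are not the specific methods reviewed in the chapter.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy corpus with enough distributional regularity for context clustering.
corpus = "the dog saw the cat a dog chased a cat the dog bit a cat".split()

vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}

# Represent each word type by counts of its left and right neighbours.
vectors = np.zeros((len(vocab), 2 * len(vocab)))
for left, word, right in zip(corpus, corpus[1:], corpus[2:]):
    vectors[index[word], index[left]] += 1
    vectors[index[word], len(vocab) + index[right]] += 1

# Group word types by the similarity of their contexts (no labels used).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
for word, label in zip(vocab, labels):
    print(word, label)
```

No annotation is consulted at any point, which is the sense in which such methods bear on how much grammatical knowledge can be recovered from raw text alone.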
We discuss the development of cognitive neuroscience in terms of the tension between the greater sophistication in cognitive concepts and methods of the cognitive sciences and the increasing power of more standard biological approaches to understanding brain structure and function. There have been major technological developments in brain imaging and advances in simulation, but there have also been shifts in emphasis, with topics such as thinking, consciousness, and social cognition becoming fashionable within the brain sciences. The discipline has great promise in terms of applications to mental health and education, provided it does not abandon the cognitive perspective and succumb to reductionism.
Recent theories in cognitive science have begun to focus on the active role of organisms in shaping their own environment, and the role of these environmental resources for cognition. Approaches such as situated, embedded, ecological, distributed and particularly extended cognition look beyond ‘what is inside your head’ to the old Gibsonian question of ‘what your head is inside of’ and with which it forms a wider whole—its internal and external cognitive niche. Since these views have been treated as a radical departure from the received view of cognition, their proponents have looked for support to similar extended views within (the philosophy of) biology, most notably the theory of niche construction. This paper argues that there is an even closer and more fruitful parallel with developmental systems theory and developmental niche construction. These ask not ‘what is inside the genes you inherited’, but ‘what the inherited genes are inside of’ and with which they form a wider whole—their internal and external ontogenetic niche, understood as the set of epigenetic, social, ecological, epistemic and symbolic legacies inherited by the organism as necessary developmental resources. To the cognizing agent, the epistemic niche presents itself not just as a partially self-engineered selective niche, as the niche construction paradigm will have it, but even more so as a partially self-engineered ontogenetic niche, a problem-solving resource and scaffold for individual development and learning. This move should be beneficial for coming to grips with our own (including cognitive) nature: what is most distinctive about humans is their developmentally plastic brains immersed into a well-engineered, cumulatively constructed cognitive–developmental niche.
Grammar is now widely regarded as a substantially biological phenomenon, yet the problem of language evolution remains a matter of controversy among linguists, cognitive scientists, and evolutionary theorists alike. In this paper, I present a new theoretical argument for one particular hypothesis—that a Language Acquisition Device of the sort first posited by Noam Chomsky might have evolved via the so-called Baldwin Effect. Close attention to the workings of that mechanism, I argue, helps to explain a previously mysterious feature of the Language Acquisition Device—the sheer variety of languages it allows the child to learn—thereby revealing a far stronger case than adherents of the hypothesis have previously supposed. A further unheralded consequence of the hypothesis is a conceptual shift in the Chomskyan understanding of language, wherein the essentially public nature of language is freshly emphasised. This has the effect of bringing the Chomskyan view into closer accord with Saussurean accounts of language, as well as with recent trends in evolutionary theory.
There is much good work for philosophers to do in cognitive science if they adopt the constructive attitude that prevails in science, work toward testable hypotheses, and take on the task of clarifying the relationship between the scientific concepts and the everyday concepts with which we conduct our moral lives.
Allen Newell (1973) once observed that psychology researchers were playing “twenty questions with nature,” carving up human cognition into hundreds of individual phenomena but shying away from the difficult task of integrating these phenomena with unifying theories. We argue that research on cognitive control has followed a similar path, and that the best approach toward unifying theories of cognitive control is that proposed by Newell, namely developing theories in computational cognitive architectures. Threaded cognition, a recent theory developed within the ACT-R cognitive architecture, offers promise as a unifying theory of cognitive control that addresses multitasking phenomena for both laboratory and applied task domains.
Theories concerning the structure, or format, of mental representation should (1) be formulated in mechanistic, rather than metaphorical terms; (2) do justice to several philosophical intuitions about mental representation; and (3) explain the human capacity to predict the consequences of worldly alterations (i.e., to think before we act). The hypothesis that thinking involves the application of syntax-sensitive inference rules to syntactically structured mental representations has been said to satisfy all three conditions. An alternative hypothesis is that thinking requires the construction and manipulation of the cognitive equivalent of scale models. A reading of this hypothesis is provided that satisfies condition (1) and which, even though it may not fully satisfy condition (2), turns out (in light of the frame problem) to be the only known way to satisfy condition (3).
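Purely for illustration, here is what "applying syntax-sensitive inference rules to syntactically structured mental representations" can look like at its simplest: a forward-chaining modus ponens rule that inspects only the form of tuple-structured items, never their content. The representation scheme and the example beliefs are invented for this sketch and are not drawn from the article.

```python
def forward_chain(beliefs):
    """One syntax-sensitive rule, modus ponens: from ('if', P, Q) and P, add Q.
    The rule looks only at the shape of the representations it manipulates."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for item in list(derived):
            if isinstance(item, tuple) and len(item) == 3 and item[0] == "if":
                _, antecedent, consequent = item
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)
                    changed = True
    return derived

# Invented, tuple-structured 'beliefs' standing in for mental representations.
beliefs = {("if", "rain", "wet streets"), ("if", "wet streets", "slippery"), "rain"}
print(forward_chain(beliefs))
```

The contrast drawn in the abstract is with scale-model-style representations, whose manipulations are not defined over syntactic form in this way.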
Cognitive architectures are unified theories of cognition that take the form of computational formalisms. They support computational models that collectively account for large numbers of empirical regularities using small numbers of computational mechanisms. Empirical coverage and parsimony are the most prominent criteria by which architectures are designed and evaluated, but they are not the only ones. This paper considers three additional criteria that have been comparatively undertheorized. (a) Successful architectures possess subjective and intersubjective meaning, making cognition comprehensible to individual cognitive scientists and organizing groups of like-minded cognitive scientists into genuine communities. (b) Successful architectures provide idioms that structure the design and interpretation of computational models. (c) Successful architectures are strange: They make provocative, often disturbing, and ultimately compelling claims about human information processing that demand evaluation.