In just a few years, children achieve a stable state of linguistic competence, making them effectively adults with respect to understanding novel sentences, discerning relations of paraphrase and entailment, acceptability judgments, etc. One familiar account of the language acquisition process treats it as an induction problem of the sort that arises in any domain where the knowledge achieved is logically underdetermined by experience. This view highlights the cues that are available in the input to children, as well as children's skills in extracting relevant information and forming generalizations on the basis of the data they receive. Nativists, on the other hand, contend that language-learners project beyond their experience in ways that the input does not even suggest. Instead of viewing language acquisition as a special case of theory induction, nativists posit a Universal Grammar, with innately specified linguistic principles of grammar formation. The nature-versus-nurture debate continues, as various poverty-of-stimulus arguments are challenged or supported by developments in linguistic theory and by findings from psycholinguistic investigations of child language. In light of some recent challenges to nativism, we rehearse old poverty-of-stimulus arguments and supplement them by drawing on more recent work in linguistic theory and studies of child language.
What is common to all languages is notation, so Universal Grammar can be understood as a system of notational types. Given that infants acquire language, this capacity can be assumed to arise from some a priori mental structure. Viewing language as having the two layers of calculus and protocol, we can set aside the communicative habits of speakers. Accordingly, an analysis of notation results in the three types of Identifier, Modifier and Connective. Modifiers are further interpreted as Quantifiers and Qualifiers. The resulting four notational types constitute the categories of Universal Grammar. Its ontology is argued to consist in the underlying cognitive schema of Essence, Quantity, Quality and Relation. The four categories of Universal Grammar are structured as polysemous fields and are each constituted as a radial network centred on some root concept which, however, need not be lexicalized. The branches spread out along troponymic vectors and together map out all possible lexemes. The notational typology of Universal Grammar is applied in a linguistic analysis of the ‘parts of speech’ using the English language. The analysis constitutes a ‘proof of concept’ in (1) showing how the schema of Universal Grammar is capable of classifying the so-called ‘parts of speech’, (2) presenting a coherent analysis of the verb, and (3) showing how the underlying cognitive schema allows for a sub-classification of the auxiliaries.
Language and Ontology: Linguistic Relativism (Sapir-Whorf Hypothesis) vs. Universal Grammar
Universal Ontology vs. Ontological Relativity
Semiotics and Ontology: Annotated Bibliography of John Deely. First part: 1965-1998
Annotated Bibliography of John Deely. Second part: 1999-2010
The Rediscovery of John Poinsot (John of St. Thomas).
This commentary aims to highlight what exactly is controversial about the traditional Universal Grammar (UG) hypothesis and what is not. There is widespread agreement that we are not born as blank slates, that language universals exist, that grammar exists, and that adults have domain-specific representations of language. The point of contention is whether we should assume that there exist unlearned syntactic universals that are arbitrary and specific to Language.
Christiansen & Chater (C&C) suggest that language is an organism, like us, and that our brains were not selected for Universal Grammar (UG) capacity; rather, languages were selected for learnability with minimal trial-and-error experience by our brains. This explanation is circular: Where did our brain's selective capacity to learn all and only UG-compliant languages come from?
Differences of opinion between Epstein, Flynn & Martohardjono (1996) and some commentators can be traced to different interpretations of Universal Grammar (UG) form or strategy. Potential full access to the form of linguistic universals in second language acquisition may be distinguished from access to UG strategy, but Epstein et al.'s dismissal of the Critical Age Hypothesis clouds their central argument.
Universal Grammar (UG) can be interpreted as a constraint on the form of possible grammars (hypothesis space) or as a constraint on acquisition strategies (selection procedures). In this response to Herschensohn we reiterate the position outlined in Epstein et al. (1996a, r), that in the evaluation of L2 acquisition as a UG-constrained process the former (possible grammars/knowledge states) is critical, not the latter. Selection procedures, on the other hand, are important in that they may have a bearing on development in language acquisition. We raise the possibility that differences in first and second language acquisition pertaining to both attainment of the end-state and course of development may derive from differences in selection procedures. We further suggest that for these reasons age effects in the attainment of nativelike proficiency must necessarily be separated from UG effects.
The paper takes a look at the history of the idea of universal grammar and compares it with multilingual grammars, as formalized in the Grammatical Framework, GF. The constructivist idea of formalizing mathematics piece by piece, in a weak logical framework, rather than trying to reduce everything to one single strong theory, is the model that guides the development of grammars in GF.
In this paper, through Hegel’s account of the predicative judgment in the Greater Logic, I develop an immanent, presuppositionless deduction of grammatical form from the very idea of language in general. In other words, I argue that Hegel’s account of the judgment can be read as a demonstration of a truly universal (rather than empirically “common” or “general”) grammar through which any and all determinate thought must be expressed. In so doing, I seek to resolve the problem that linguistic contingency poses for systematic philosophy by deducing a necessary linguistic form from a contingent linguistic content.
Scientists from various disciplines have begun to focus attention on the psychology and biology of human morality. One research program that has recently gained attention is universal moral grammar (UMG). UMG seeks to describe the nature and origin of moral knowledge by using concepts and models similar to those used in Chomsky's program in linguistics. This approach is thought to provide a fruitful perspective from which to investigate moral competence from computational, ontogenetic, behavioral, physiological and phylogenetic perspectives. In this article, I outline a framework for UMG and describe some of the evidence that supports it. I also propose a novel computational analysis of moral intuitions and argue that future research on this topic should draw more directly on legal theory.
Grammar is now widely regarded as a substantially biological phenomenon, yet the problem of language evolution remains a matter of controversy among linguists, cognitive scientists, and evolutionary theorists alike. In this paper, I present a new theoretical argument for one particular hypothesis—that a Language Acquisition Device of the sort first posited by Noam Chomsky might have evolved via the so-called Baldwin Effect. Close attention to the workings of that mechanism, I argue, helps to explain a previously mysterious feature of the Language Acquisition Device—the sheer variety of languages it allows the child to learn—thereby revealing a far stronger case than adherents of the hypothesis have previously supposed. A further unheralded consequence of the hypothesis is a conceptual shift in the Chomskyan understanding of language, wherein the essentially public nature of language is freshly emphasised. This has the effect of bringing the Chomskyan view into closer accord with Saussurean accounts of language, as well as with recent trends in evolutionary theory.
Issues concerning UG access for L2 acquisition as formulated by Epstein et al. are misleading as well as poorly discussed. UG accessibility can only be fully evaluated with respect to the steady state grammar reached by the learner. The steady state for L1 learners is self-evidently the adult grammar in the speech community. For L2 learners, however, the steady state is not obvious. Yet, without its clear characterization, debates concerning stages of L2 acquisition and direct and indirect UG accessibility cannot be resolved.
A new framework for the study of the human moral faculty is currently receiving much attention: the so-called ‘universal moral grammar’ framework. It is based on an intriguing analogy, first pointed out by Rawls, between the study of the human moral sense and Chomsky's research program into the human language faculty. In order to assess UMG, we ask: is moral competence modular? Does it have an underlying hierarchical grammatical structure? Does moral diversity rest on culture-dependent parameters? We review evidence that supports negative answers and argue that formal grammatical concepts are of limited value for the study of moral judgments, moral development and moral diversity.
I will argue that the logical form of illocutionary acts imposes certain formal constraints on the logical structure of a possible natural language as well as on the mind of competent speakers. In particular, certain syntactic, semantic and pragmatic features are universal because they are indispensable. Moreover, in order to perform and understand illocutionary acts, competent speakers and hearers must have certain mental states and abilities which are in general traditionally related to the faculty of reason.
Despite problems with statistical significance, ancillary hypotheses, and integration into an overall view of cognition, Grodzinsky's demotion of Broca's area to a mechanism for tracking moved constituents is intrinsically plausible and fits a realistic picture of how syntax works.
In this paper, we explore the possibility that machine learning approaches to natural language processing being developed in engineering-oriented computational linguistics may be able to provide specific scientific insights into the nature of human language. We argue that, in principle, machine learning results could inform basic debates about language, in one area at least, and that in practice, existing results may offer initial tentative support for this prospect. Further, results from computational learning theory can inform arguments carried on within linguistic theory as well.
This target article's handling of theory and data and the range of evidence surveyed for its main contention fall short of normal BBS standards. However, the contention itself is reasonable and can be supported if one rejects the metaphor for linguistic competence and accepts that are no more than the way the brain does language.
To what extent, if any, does Universal Grammar (UG) constrain second language (L2) acquisition? This is not only an empirical question, but one which is currently investigable. In this context, L2 acquisition is emerging as an important new domain of psycholinguistic research. Three logical possibilities have been articulated regarding the role of UG in L2 acquisition: The first is the hypothesis that claims that no aspect of UG is available to the L2 learner. The second is the hypothesis that claims that only L1-instantiated principles and L1-instantiated parameter-values of UG are available to the learner. The third asserts that UG in its entirety constrains L2 acquisition.
Talk of linguistic universals has given cognitive scientists the impression that languages are all built to a common pattern. In fact, there are vanishingly few universals of language in the direct sense that all languages exhibit them. Instead, diversity can be found at almost every level of linguistic organization. This fundamentally changes the object of enquiry from a cognitive science perspective. This target article summarizes decades of cross-linguistic work by typologists and descriptive linguists, showing just how few and unprofound the universal characteristics of language are, once we honestly confront the diversity offered to us by the world's 6,000 to 8,000 languages. After surveying the various uses of ‘universal’, we illustrate the ways languages vary radically in sound, meaning, and syntactic organization, and then we examine in more detail the core grammatical machinery of recursion, constituency, and grammatical relations. Although there are significant recurrent patterns in organization, these are better explained as stable engineering solutions satisfying multiple design constraints, reflecting both cultural-historical factors and the constraints of human cognition.
A nativist moral psychology, modeled on the successes of theoretical linguistics, provides the best framework for explaining the acquisition of moral capacities and the diversity of moral judgment across the species. After a brief presentation of a poverty of the moral stimulus argument, this chapter sketches a view according to which a so-called Universal Moral Grammar provides a set of parameterizable principles whose specific values are set by the child's environment, resulting in the acquisition of a moral idiolect. The principles and parameters approach predicts moral diversity, but does not entail moral relativism.
Since its formal definition over sixty years ago, category theory has been increasingly recognized as having a foundational role in mathematics. It provides the conceptual lens to isolate and characterize the structures with importance and universality in mathematics. The notion of an adjunction (a pair of adjoint functors) has moved to center-stage as the principal lens. The central feature of an adjunction is what might be called “determination through universals” based on universal mapping properties. A recently developed “heteromorphic” theory about adjoints suggests a conceptual structure, albeit abstract and atemporal, for how new relatively autonomous behavior can emerge within a system obeying certain laws. The focus here is on applications in the life sciences (e.g., selectionist mechanisms) and human sciences (e.g., the generative grammar view of language).
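For readers unfamiliar with the notion invoked above, the standard hom-set definition of an adjunction (the textbook formulation, not the “heteromorphic” variant discussed in this abstract) can be stated as follows:

```latex
% A functor F : C -> D is left adjoint to G : D -> C (written F -| G)
% when there is an isomorphism of hom-sets, natural in both c and d:
\[
  \mathrm{Hom}_{\mathcal{D}}(F c,\, d) \;\cong\; \mathrm{Hom}_{\mathcal{C}}(c,\, G d)
\]
% The "universal mapping property" corresponds to the unit
% \eta_c : c \to G F c, through which every map c \to G d factors uniquely.
```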
The present paper studies the general implications of the principle of compositionality for the organization of grammar. It will be argued that Janssen's (1986) requirement that syntax and semantics be similar algebras is too strong, and that the more liberal requirement that syntax be interpretable into semantics leads to a formalization that can be motivated and applied more easily, while it avoids the complications that encumber Janssen's formalization. Moreover, it will be shown that this alternative formalization even allows one to further complete the formal theory of compositionality, in that it is capable of clarifying the role played by translation, model-theoretic interpretation and meaning postulates, of which the latter two aspects received little or no attention in Montague (1970) and Janssen (1986).
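As background for the algebraic requirement under discussion: compositionality is standardly rendered as the condition that the meaning function be a homomorphism from the syntactic algebra to the semantic algebra (this is the general textbook formulation, not the paper's specific proposal). For every n-ary syntactic operation there must be a corresponding semantic operation such that:

```latex
\[
  h\bigl(\sigma(t_1,\dots,t_n)\bigr) \;=\; \sigma'\bigl(h(t_1),\dots,h(t_n)\bigr)
\]
% "Similar algebras" (Janssen) requires syntax and semantics to share one
% signature; "interpretability" requires only that such a homomorphism h
% into the semantic algebra exist.
```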
Lewis’s view of the way conventions are passed on may have some especially interesting consequences for the study of language. I’ll start by briefly discussing agreements and disagreements that I have with Lewis’s general views on conventions and then turn to how linguistic conventions spread. I’ll compare views of mainstream generative linguistics, in particular, Chomsky’s views on how syntactic forms are passed on, with the sort of view of language acquisition and language change advocated by usage-based or construction grammars, which seem to fit better with Lewis’s ideas. Then I will illustrate the interest of Lewis’s perspective on the dissemination of conventions with a variety of linguistic examples.
Recent research suggests that language evolution is a process of cultural change, in which linguistic structures are shaped through repeated cycles of learning and use by domain-general mechanisms. This paper draws out the implications of this viewpoint for understanding the problem of language acquisition, which is cast in a new, and much more tractable, form. In essence, the child faces a problem of induction, where the objective is to coordinate with others (C-induction), rather than to model the structure of the natural world (N-induction). We argue that, of the two, C-induction is dramatically easier. More broadly, we argue that understanding the acquisition of any cultural form, whether linguistic or otherwise, during development, requires considering the corresponding question of how that cultural form arose through processes of cultural evolution. This perspective helps resolve the “logical” problem of language acquisition and has far-reaching implications for evolutionary psychology.
Two of the most fundamental questions about language are these: what are languages? And what is it to know a given language? Many philosophers who have reflected on these questions have presented answers that attribute a central role to conventions. In one of its boldest forms such a view runs as follows. Languages are either social entities constituted by networks of social conventions or abstract objects, where, when a particular community speaks a given language, they do so in virtue of the conventions operative within that community. Consequently, for an individual to know a given language is for them to be party to the relevant conventions. Call this view conventionalism. In this article my aim is to evaluate conventionalism. I will argue that, although there are linguistic conventions and they do play an important role in language development and communication, conventionalism should be rejected in favour of a more psychologistically orientated position.
According to classical arguments, language learning is both facilitated and constrained by cognitive biases. These biases are reflected in linguistic typology—the distribution of linguistic patterns across the world's languages—and can be probed with artificial grammar experiments on child and adult learners. Beginning with a widely successful approach to typology (Optimality Theory), and adapting techniques from computational approaches to statistical learning, we develop a Bayesian model of cognitive biases and show that it accounts for the detailed pattern of results of artificial grammar experiments on noun-phrase word order (Culbertson, Smolensky, & Legendre, 2012). Our proposal has several novel properties that distinguish it from prior work in the domains of linguistic theory, computational cognitive science, and machine learning. This study illustrates how ideas from these domains can be synthesized into a model of language learning in which biases range in strength from hard (absolute) to soft (statistical), and in which language-specific and domain-general biases combine to account for data from the macro-level scale of typological distribution to the micro-level scale of learning by individuals.
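To make the hard/soft distinction concrete, here is a minimal illustrative sketch, not the authors' actual Optimality-Theoretic model: a beta-binomial Bayesian learner whose prior strength encodes a word-order bias ranging from soft (statistical, overridable by data) to effectively hard (near-absolute). The function name and all numbers are hypothetical.

```python
# Illustrative sketch only: prior strength as a stand-in for bias "hardness".

def posterior_mean(prior_a, prior_b, heads, tails):
    """Posterior mean of a Beta-Bernoulli model after observing data.

    prior_a / prior_b: pseudo-counts favouring pattern A / pattern B;
    heads / tails: observed instances of pattern A / pattern B.
    """
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

# A learner biased toward pattern A, exposed to 10 examples of A and 5 of B.
soft_bias = posterior_mean(prior_a=4, prior_b=1, heads=10, tails=5)
hard_bias = posterior_mean(prior_a=1000, prior_b=1, heads=10, tails=5)

print(soft_bias)  # soft bias: the estimate is dominated by the data
print(hard_bias)  # hard bias: the estimate stays pinned near the prior
```

The same machinery interpolates continuously between the two extremes, which is the sense in which biases can "range in strength from hard (absolute) to soft (statistical)".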
The approach to generative grammar originating with Chomsky (1957) has been enormously successful within linguistics. Seeing such success, one wonders whether a similar approach might help us understand other human domains besides language. One such domain is morality. Could there be universal generative moral grammar? More specifically, might it be useful to moral theory to develop an explicit generative account of parts of particular moralities in the way it has proved useful to linguistics to produce generative grammars for parts of particular languages? Should moral theorists attempt to develop a theory of moral universals that is analogous to the theory of universal grammar in linguistics? Can moral theorists develop a “principles and parameters” account of possible moralities inspired by the principles and parameters approach to language in current linguistics? Could there be a “minimalist” program for moral theory inspired by the minimalist program in linguistics? In this chapter we offer a preliminary account of some analogies, focusing on clarifying issues, making distinctions, and considering how—in a general way—such analogies might yield a fruitful research program for moral theory. There are two main parts to our discussion, one focusing on an analogy between generative grammar and moral theory, the other focusing on analogies between universal grammar and theories of moral universals. In the first part, we say a little about the background and say how we are going to understand morality and moral theory. We describe certain aspects of generative grammar and how claims about generative grammars are tested, allowing for a distinction between “competence” and “performance”. We then try to say what a corresponding “generative moral grammar” would be and how it would be tested. We next discuss a number of objections to the analogy between moral theory and generative grammar and indicate possible responses.
In the second part, we discuss certain universal constraints on grammars and consider whether there might be similar constraints on moralities. Then we discuss how linguists describe core aspects of languages in terms of principles and parameters and consider what aspects of moralities might be described in similar terms. After that we make some brief remarks about minimalism.
I reject Jackendoff's view of Universal Grammar as something that evolved biologically but applaud his integration of blackboard architectures. I thus recall the HEARSAY speech understanding system—the AI system that introduced the concept of “blackboard”—to provide another perspective on Jackendoff's architecture.
Conceptual primitives and semantic universals are the cornerstones of a semantic theory which Anna Wierzbicka has been developing for many years. Semantics: Primes and Universals is a major synthesis of her work, presenting a full and systematic exposition of that theory in a non-technical and readable way. It delineates a full set of universal concepts, as they have emerged from large-scale investigations across a wide range of languages undertaken by the author and her colleagues. On the basis of empirical cross-linguistic studies it vindicates the old notion of the "psychic unity of mankind", while at the same time offering a framework for the rigorous description of different languages and cultures.
The goal of this study is to reintegrate the theory of generative grammar into the cognitive sciences. Generative grammar was right to focus on the child's acquisition of language as its central problem, leading to the hypothesis of an innate Universal Grammar. However, generative grammar was mistaken in assuming that the syntactic component is the sole source of combinatoriality, and that everything else is “interpretive.” The proper approach is a parallel architecture, in which phonology, syntax, and semantics are autonomous generative systems linked by interface components. The parallel architecture leads to an integration within linguistics, and to a far better integration with the rest of cognitive neuroscience. It fits naturally into the larger architecture of the mind/brain and permits a properly mentalistic theory of semantics. It results in a view of linguistic performance in which the rules of grammar are directly involved in processing. Finally, it leads to a natural account of the incremental evolution of the language capacity. Key Words: evolution of language; generative grammar; parallel architecture; semantics; syntax.
In this chapter we consider unsupervised learning from two perspectives. First, we briefly look at its advantages and disadvantages as an engineering technique applied to large corpora in natural language processing. While supervised learning generally achieves greater accuracy with less data, unsupervised learning offers significant savings in the intensive labour required for annotating text. Second, we discuss the possible relevance of unsupervised learning to debates on the cognitive basis of human language acquisition. In this context we explore the implications of recent work on grammar induction for poverty of stimulus arguments that purport to motivate a strong bias model of language learning, commonly formulated as a theory of Universal Grammar (UG). We examine the second issue both as a problem in computational learning theory, and with reference to empirical work on unsupervised Machine Learning (ML) of syntactic structure. We compare two models of learning theory and the place of unsupervised learning within each of them. Looking at recent work on part of speech tagging and the recognition of syntactic structure, we see how far unsupervised ML methods have come in acquiring different kinds of grammatical knowledge from raw text.