After reviewing some major features of the interactions between Linguistics and Philosophy in recent years, I suggest that the depth and breadth of current inquiry into semantics has brought this subject into contact both with questions of the nature of linguistic competence and with modern and traditional philosophical study of the nature of our thoughts, and the problems of metaphysics. I see this development as promising for the future of both subjects.
This article surveys the philosophical literature on theoretical linguistics. It centres on the major debates in the philosophy of linguistics, past and present, and on how they connect to the philosophy of science. Specific issues such as scientific realism in linguistics, the scientific status of grammars, the methodological underpinnings of formal semantics, and the integration of linguistics into the larger cognitive sciences form the crux of the discussion.
This is a teaching guide companion to the main article published in Philosophy Compass. It offers insights into how one might go about designing a course in the philosophy of linguistics at advanced undergraduate/graduate level. Readings and possible core questions are included.
This book deals with a major problem in the study of language: the problem of reference. The ease with which we refer to things in conversation is deceptive. Upon closer scrutiny, it turns out that we hardly ever tell each other explicitly what object we mean, although we expect our interlocutor to discern it. Amichai Kronfeld provides an answer to two questions associated with this: how do we successfully refer, and how can a computer be programmed to achieve this? Beginning with the major theories of reference, Dr Kronfeld provides a consistent philosophical view which is a synthesis of Frege's and Russell's semantic insights with Grice's and Searle's pragmatic theories. This leads to a set of guiding principles, which are then applied to a computational model of referring. The discussion is made accessible to readers from a number of backgrounds: in particular, students and researchers in the areas of computational linguistics, artificial intelligence and the philosophy of language will want to read this book.
What is philosophy of science? Numerous manuals, anthologies or essays provide carefully reconstructed vantage points on the discipline that have been gained through expert and piecemeal historical analyses. In this paper, we address the question from a complementary perspective: we target the content of one major journal of the field—Philosophy of Science—and apply unsupervised text-mining methods to its complete corpus, from its start in 1934 until 2015. By running topic-modeling algorithms over the full-text corpus, we identified 126 key research topics that span across 82 years. We also tracked their evolution and fluctuating significance over time in the journal articles. Our results concur with and document known and lesser-known episodes of the philosophy of science, including the rise and fall of logic and language-related topics, the relative stability of a metaphysical and ontological questioning (space and time, causation, natural kinds, realism), the significance of epistemological issues about the nature of scientific knowledge as well as the rise of a recent philosophy of biology and other trends. These analyses exemplify how computational text-mining methods can be used to provide an empirical large-scale and data-driven perspective on the history of philosophy of science that is complementary to other current historical approaches.
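The kind of diachronic topic analysis this abstract describes can be illustrated with a deliberately simplified sketch. The mini-corpus, the hand-picked keyword sets, and the decade bucketing below are all invented for illustration; the study itself induces its 126 topics with unsupervised topic-modeling algorithms (e.g. LDA over the full-text corpus), not predefined keywords.

```python
from collections import Counter

# Hypothetical mini-corpus: (year, abstract-like text) pairs.
CORPUS = [
    (1935, "logic language meaning verification"),
    (1960, "logic language theory confirmation"),
    (1990, "causation natural kinds realism"),
    (2010, "biology evolution mechanisms explanation"),
]

# Hand-picked keyword sets standing in for learned topics.
TOPICS = {
    "logic/language": {"logic", "language", "meaning", "verification"},
    "metaphysics": {"causation", "realism", "kinds", "natural"},
    "phil. of biology": {"biology", "evolution", "mechanisms"},
}

def topic_trends(corpus, topics):
    """Count, per decade, how many tokens fall under each topic."""
    trends = {name: Counter() for name in topics}
    for year, text in corpus:
        decade = (year // 10) * 10
        for token in text.split():
            for name, words in topics.items():
                if token in words:
                    trends[name][decade] += 1
    return trends

trends = topic_trends(CORPUS, TOPICS)
print(trends["logic/language"])  # the logic/language topic clusters in early decades
```

Tracking how each topic's counts rise and fall across decades is the toy analogue of the paper's "fluctuating significance over time".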
In this fascinating work, Scott Soames offers a new conception of the relationship between linguistic meaning and assertions made by utterances. He gives meanings of proper names and natural kind predicates and explains their use in attitude ascriptions. He also demonstrates the irrelevance of rigid designation in understanding why theoretical identities containing such predicates are necessary, if true.
This paper argues that truth values of sentences containing predicates of “personal taste” such as fun or tasty must be relativized to individuals. This relativization is of truth value only, and does not involve a relativization of semantic content: If you say roller coasters are fun, and I say they are not, I am negating the same content which you assert, and directly contradicting you. Nonetheless, both our utterances can be true (relative to their separate contexts). A formal semantic theory is presented which gives this result by introducing an individual index, analogous to the world and time indices commonly used, and by treating the pragmatic context as supplying a particular value for this index. The context supplies this value in the derivation of truth values from content, not in the derivation of content from character. Predicates of personal taste therefore display a kind of contextual variation in interpretation which is unlike the familiar variation exhibited by pronouns and other indexicals.
This paper aims to analyze some grounds for bridging the explanatory gap in the philosophy of mind and linguistic sign theory. The etymological ties between the notions of “consciousness”, “cognition”, and “sign” are emphasized in work on cognitive linguistics. This connection arises from the understanding of the symbolic nature of consciousness and of the sign in semiosis as the key cognitive process. On the one hand, it is impossible to realize the procedures of communication, knowledge, understanding, decision-making, orientation and even navigation without the process of signification. On the other hand, the human mind has a unique ability to produce meaning apart from the “signal markings” available to other living beings. The ontology of this ability should be considered as a part of the complex problem of consciousness, which includes the mind-body problem, the free will problem, image memory, narrative worlds, introspection, etc. In this paper, special attention is paid to the ontological and epistemological aspects of the linguistic sign within the context of the bio-semiotic approach. The author argues that the linguistic nature of consciousness is inseparably connected with the ontological properties of consciousness. The author attempts to prove the link between the mind-body problem and the problem of the explanatory gap in the ontology of the linguistic sign. The analysis is focused on the following question: is it possible that the rank of random names could cause the behavior of the organism in a changing environment?
Extract from Hofstadter's review in the Bulletin of the American Mathematical Society: http://www.ams.org/journals/bull/1980-02-02/S0273-0979-1980-14752-7/S0273-0979-1980-14752-7.pdf "Aaron Sloman is a man who is convinced that most philosophers and many other students of mind are in dire need of being convinced that there has been a revolution in that field happening right under their noses, and that they had better quickly inform themselves. The revolution is called "Artificial Intelligence" (AI), and Sloman attempts to impart to others the "enlightenment" which he clearly regrets not having experienced earlier himself. Being somewhat of a convert, Sloman is a zealous campaigner for his point of view. Now a Reader in Cognitive Science at Sussex, he began his academic career in more orthodox philosophy and, by exposure to linguistics and AI, came to feel that all approaches to mind which ignore AI are missing the boat. I agree with him, and I am glad that he has written this provocative book. The tone of Sloman's book can be gotten across by this quotation (p. 5): "I am prepared to go so far as to say that within a few years, if there remain any philosophers who are not familiar with some of the main developments in artificial intelligence, it will be fair to accuse them of professional incompetence, and that to teach courses in philosophy of mind, epistemology, aesthetics, philosophy of science, philosophy of language, ethics, metaphysics, and other main areas of philosophy, without discussing the relevant aspects of artificial intelligence will be as irresponsible as giving a degree course in physics which includes no quantum theory." (The author now regrets the extreme polemical tone of the book.)
In a recent paper (Linguistics and Philosophy 23, 4, June 2000), Jason Stanley argues that there are no `unarticulated constituents', contrary to what advocates of Truth-conditional pragmatics (TCP) have claimed. All truth-conditional effects of context can be traced to logical form, he says. In this paper I maintain that there are unarticulated constituents, and I defend TCP. Stanley's argument exploits the fact that the alleged unarticulated constituents can be `bound', that is, they can be made to vary with the values introduced by operators in the sentence. I show that Stanley's argument rests on a fallacy, and I provide alternative analyses of the data.
This groundbreaking collection, the most thorough treatment of the philosophy of linguistics ever published, brings together philosophers, scientists and historians to map out both the foundational assumptions set during the second half of ...
In light of the sharp linguistic turn philosophy has taken in this century, this collection provides a much-needed and long-overdue reference for philosophical discussion. The first collection of its kind, it explores questions of the nature and existence of linguistic objects--including sentences and meanings--and considers the concept of truth in linguistics. The status of linguistics and the nature of language now take a central place in discussions of the nature of philosophy; the essays in this volume both inform these discussions and lay the groundwork for further examination.
An important debate in the current literature is whether “all truth-conditional effects of extra-linguistic context can be traced to [a variable at; LM] logical form” (Stanley, ‘Context and Logical Form’, Linguistics and Philosophy, 23 (2000) 391). That is, according to Stanley, the only truth-conditional effects that extra-linguistic context has are localizable in (potentially silent) variable-denoting pronouns or pronoun-like items, which are represented in the syntax/at logical form (pure indexicals like I or today are put aside in this discussion). According to Recanati (‘Unarticulated Constituents’, Linguistics and Philosophy, 25 (2002) 299), extra-linguistic context can have additional truth-conditional effects, in the form of optional pragmatic processes like ‘free enrichment’. This paper shows that Recanati’s position is not warranted, since there is an alternative line of analysis that obviates the need to assume free enrichment. In the alternative analysis, we need Stanley’s variables, but we need to give them the freedom to be or not to be generated in the syntax/present at logical form, a kind of optionality that has nothing to do with the pragmatics-related optionality of free enrichment.
In this paper, I argue for a modified version of what Devitt calls the Representational Thesis. According to RT, syntactic rules or principles are psychologically real, in the sense that they are represented in the mind/brain of every linguistically competent speaker/hearer. I present a range of behavioral and neurophysiological evidence for the claim that the human sentence processing mechanism constructs mental representations of the syntactic properties of linguistic stimuli. I then survey a range of psychologically plausible computational models of comprehension and show that they are all committed to RT. I go on to sketch a framework for thinking about the nature of the representations involved in sentence processing. My claim is that these are best characterized not as propositional attitudes but, rather, as subpersonal states whose representational properties are determined by their functional role. Finally, I distinguish between explicit and implicit representations and argue that the latter can be drawn on as data by the algorithms that constitute our sentence processing routines. I conclude that skepticism concerning the psychological reality of grammars cannot be sustained.
According to pancomputationalism, everything is a computing system. In this paper, I distinguish between different varieties of pancomputationalism. I find that although some varieties are more plausible than others, only the strongest variety is relevant to the philosophy of mind, but only the most trivial varieties are true. As a side effect of this exercise, I offer a clarified distinction between computational modelling and computational explanation.
What are facts, situations, or events? When Situation Semantics was born in the eighties, I objected because I could not swallow the idea that situations might be chunks of information. For me, they had to be particulars like sticks or bricks. I could not imagine otherwise. The first manuscript of “An Investigation of the Lumps of Thought” that I submitted to Linguistics and Philosophy had a footnote where I distanced myself from all those who took possible situations to be units of information. In that context and at that time, this meant Jon Barwise and John Perry.
In this paper we explore how compositional semantics, discourse structure, and the cognitive states of participants all contribute to pragmatic constraints on answers to questions in dialogue. We synthesise formal semantic theories on questions and answers with techniques for discourse interpretation familiar from computational linguistics, and show how this provides richer constraints on responses in dialogue than either component can achieve alone.
Grice’s distinction between what is said and what is implicated has greatly clarified our understanding of the boundary between semantics and pragmatics. Although border disputes still arise and there are certain difficulties with the distinction itself (see the end of §1), it is generally understood that what is said falls on the semantic side and what is implicated on the pragmatic side. But this applies only to what is..
In this article, we reflect on the use of formal methods in the philosophy of science. These are taken to comprise not just methods from logic broadly conceived, but also from other formal disciplines such as probability theory, game theory, and graph theory. We explain how formal modelling in the philosophy of science can shed light on difficult problems in this domain.
Ways of Scope Taking is concerned with syntactic, semantic and computational aspects of scope. Its starting point is the well-known but often neglected fact that different types of quantifiers interact differently with each other and other operators. The theoretical examination of significant bodies of data, both old and novel, leads to two central claims. (1) Scope is a by-product of a set of distinct Logical Form processes; each quantifier participates in those that suit its particular features. (2) Scope interaction is further constrained by the semantics of the interacting operators. The arguments are developed using Minimalist syntax, Generalized Quantifier theory, Discourse Representation Theory, and algebraic semantics. The contributors (Beghelli, Ben-Shalom, Doetjes, Farkas, Gutiérrez Rexach, Honcoop, Stabler, Stowell, Szabolcsi and Zwarts) make tightly related theoretical assumptions and focus on related empirical phenomena, which include the direct and inverse scope of quantifiers, distributivity, negation, modal and intensional contexts, weak islands, event-related readings, interrogatives, wh/quantifier interactions, and Hungarian syntax. An introduction to the formal semantics background is provided. Audience: Linguists, philosophers, computational and psycholinguists; advanced undergraduates, graduate students and researchers in these fields.
This paper discusses several case studies that illustrate the relationship between the philosophy of language and three branches of linguistics: syntax, semantics, and pragmatics. Among other things, I identify binding arguments in the linguistics literature preceding (Stanley 2000), and I invent binding arguments to evaluate various semantic and pragmatic theories of belief ascriptions.
Computational models can aid in the development of philosophical views concerning the structure and growth of scientific knowledge. In cognitive psychology, computational models have proved valuable for describing the structures and processes of thought and for testing these models by writing and running computer programs using the techniques of artificial intelligence. Similarly, in the philosophy of science models can be developed that shed light on the structure, discovery, and justification of scientific theories. This paper briefly describes a computational model of problem solving and learning that has been used to simulate several kinds of scientific reasoning.
Higher order unification is a way of combining information (or equivalently, solving equations) expressed as terms of a typed higher order logic. A suitably restricted form of the notion has been used as a simple and perspicuous basis for the resolution of the meaning of elliptical expressions and for the interpretation of some non-compositional types of comparative construction also involving ellipsis. This paper explores another area of application for this concept in the interpretation of sentences containing intonationally marked focus, or various semantic constructs which are sensitive to focus. Similarities and differences between this approach, and theories using alternative semantics, structured meanings, or flexible categorial grammars, are described. The paper argues that the higher order unification approach offers descriptive advantages over these alternatives, as well as the practical advantage of being capable of fairly direct computational implementation.
Linguistic phenomena of tense and aspect have been investigated in a great deal of theoretical work in linguistics, philosophy and computer science. Modern tense logics, established by Prior, are part of this effort. Point tense logics offer an intuitive representation of tense but lack the expressiveness to represent many aspectual structures. Interval tense logics offer more expressiveness but in the general case can be computationally intractable. From a linguistic perspective there is the problem of precisely how to formalise the aspectual structures, such as a culmination and a culminated process. In this paper we define a computationally tractable augmented fragment of Halpern and Shoham's interval tense logic HS and apply it to represent a core set of aspectual structures, which are incorporated into a temporal semantics of a simple fragment of English. We model the logic fragment using timelines and define two procedures, one for constructing the minimal timelines that satisfy a formula and one for checking semantic entailments between one formula and another by comparing their timelines. The former is applied to compute models of temporal readings and the latter to check entailments between them. Possible extensions to the logic fragment and timeline models are discussed as ways of accounting for a wider range of linguistic behaviour.
In this paper, I defend the thesis that all effects of extra-linguistic context on the truth-conditions of an assertion are traceable to elements in the actual syntactic structure of the sentence uttered. In the first section, I develop the thesis in detail, and discuss its implications for the relation between semantics and pragmatics. The next two sections are devoted to apparent counterexamples. In the second section, I argue that there are no convincing examples of true non-sentential assertions. In the third section, I argue that there are no convincing examples of what John Perry has called ‘unarticulated constituents’. I conclude by drawing some consequences of my arguments for appeals to context-dependence in the resolution of problems in epistemology and philosophical logic.
Computational techniques comparing co-occurrences of city names in texts allow the relative longitudes and latitudes of cities to be estimated algorithmically. However, these techniques have not been applied to estimate the provenance of artifacts with unknown origins. Here, we estimate the geographic origin of artifacts from the Indus Valley Civilization, applying methods commonly used in cognitive science to the Indus script. We show that these methods can accurately predict the relative locations of archeological sites on the basis of artifacts of known provenance, and we further apply these techniques to determine the most probable excavation sites of four sealings of unknown provenance. These findings suggest that inscription statistics reflect historical interactions among locations in the Indus Valley region, and they illustrate how computational methods can help localize inscribed archeological artifacts of unknown origin. The success of this method offers opportunities for the cognitive sciences in general and for computational anthropology specifically.
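The co-occurrence-based localization idea can be sketched minimally. The site names, the counts, and the log-based mapping from co-mention counts to dissimilarities below are all illustrative assumptions, not the authors' actual estimator (which predicts relative longitudes and latitudes from inscription statistics); the sketch only shows the core intuition that frequent co-mention signals geographic proximity.

```python
import math

# Hypothetical co-occurrence counts of site names across inscriptions:
# nearby sites are assumed to be mentioned together more often.
COOC = {
    ("Harappa", "Mohenjo-daro"): 12,
    ("Harappa", "Dholavira"): 3,
    ("Mohenjo-daro", "Dholavira"): 7,
}

def estimated_distance(a, b, cooc):
    """Map a co-occurrence count to a dissimilarity: more co-mentions,
    smaller estimated distance (a common move in co-occurrence geolocation)."""
    n = cooc.get((a, b)) or cooc.get((b, a)) or 0
    return 1.0 / math.log(n + 2)  # +2 keeps the log positive when n = 0

d_hm = estimated_distance("Harappa", "Mohenjo-daro", COOC)
d_hd = estimated_distance("Harappa", "Dholavira", COOC)
print(d_hm < d_hd)  # True: frequent co-mentions yield a closer estimate
```

A full pipeline would feed such pairwise dissimilarities into multidimensional scaling to recover relative coordinates, then place an artifact of unknown provenance near the sites its inscriptions best match.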
In this paper an approach to the exhaustive interpretation of answers is developed. It builds on a proposal brought forward by Groenendijk and Stokhof (1984). We will use the close connection between their approach and McCarthy's (1980, 1986) predicate circumscription and describe exhaustive interpretation as an instance of interpretation in minimal models, well-known from work on counterfactuals (see for instance Lewis (1973)). It is shown that by combining this approach with independent developments in semantics/pragmatics one can overcome certain limitations of Groenendijk and Stokhof's (1984) proposal. In the last part of the paper we will provide a Gricean motivation for exhaustive interpretation building on work of Schulz (to appear) and van Rooij and Schulz (2004).
In this paper, I discuss the distribution and interpretation of free choice items (FCIs) in Greek, a language exhibiting a lexical paradigm of such items distinct from that of negative polarity items. Greek differs in this respect from English, which uniformly employs any. FCIs are grammatical only in certain contexts that can be characterized as nonveridical (Giannakidou 1998, 1999), and although they yield universal-like interpretations in certain structures, they are not, I argue, universal quantifiers. Evidence will be provided that FCIs are indefinites; the quasi-universal effect is shown to be the result of binding by an operator with universal force. Additionally, the limited distribution of FCIs in nonveridical contexts can be accounted for by analyzing them as indefinites which must always be interpreted in an intensional type. The difference between “regular” indefinites and FCIs, therefore, is reduced to a type difference which captures the fact that only the latter exhibit limited distribution: because of their intensional type, FCIs will be grammatical only in contexts providing alternatives (worlds or situations), and nonveridical contexts do exactly this. By contrast, FCIs are excluded from veridical and episodic contexts because these provide no alternatives and hence do not satisfy the lexical semantic requirement of FCIs. The proposed analysis is supported by data from other languages as well (Spanish, Catalan, French) and has important consequences regarding the analysis of English any. If FCIs are not universal quantifiers but indefinites, then the usual ambiguity thesis (free choice any being universal, negative polarity any an existential) can no longer be maintained, at least not as one in terms of quantificational force.
In this paper an approach to the exhaustive interpretation of answers is developed. It builds on a proposal brought forward by Groenendijk and Stokhof. We will use the close connection between their approach and McCarthy’s predicate circumscription and describe exhaustive interpretation as an instance of interpretation in minimal models, well-known from work on counterfactuals. It is shown that by combining this approach with independent developments in semantics/pragmatics one can overcome certain limitations of Groenendijk and Stokhof’s proposal. In the last part of the paper we will provide a Gricean motivation for exhaustive interpretation building on work of Schulz and van Rooij and Schulz.
The semantics of directional prepositions is investigated from the perspective of aspect. What distinguishes telic PPs (like to the house) from atelic PPs (like towards the house), taken as denoting sets of paths, is their algebraic structure: atelic PPs are cumulative, closed under the operation of concatenation, telic PPs are not. Not only does this allow for a natural and compositional account of how PPs contribute to the aspect of a sentence, but it also guides our understanding of the lexical semantics of prepositions in important ways. Semantically, prepositions turn out to be quite similar to nouns and verbs. Nominal distinctions (like singular and plural, mass and count) and verbal classes (like semelfactives and degree achievements) have their prepositional counterparts.
The paper shows how ideas that explain the sense of an expression as a method or algorithm for finding its reference, foreshadowed in Frege’s dictum that sense is the way in which a referent is given, can be formalized on the basis of the ideas in Thomason (1980). To this end, the function that sends propositions to truth values or sets of possible worlds in Thomason (1980) must be replaced by a relation and the meaning postulates governing the behaviour of this relation must be given in the form of a logic program. The resulting system not only throws light on the properties of sense and their relation to computation, but also shows circular behaviour if some ingredients of the Liar Paradox are added. The connection is natural, as algorithms can be inherently circular and the Liar is explained as expressing one of those. Many ideas in the present paper are closely related to those in Moschovakis (1994), but receive a considerably lighter formalization.
The English perfect involves two fundamental components of meaning: a truth-conditional one involving temporal notions and a current relevance presupposition best expressed in terms drawn from the analysis of modality. The proposal made here draws much from the Extended Now theory, but improves on it by showing that many aspects of the perfect's meaning may be factored out into independent semantic or pragmatic principles.
Some of the systems used in natural language generation (NLG), a branch of applied computational linguistics, have the capacity to create or assemble somewhat original messages adapted to new contexts. In this paper, taking Bernard Williams’ account of assertion by machines as a starting point, I argue that NLG systems meet the criteria for being speech actants to a substantial degree. They are capable of authoring original messages, and can even simulate illocutionary force and speaker meaning. Background intelligence embedded in their datasets enhances these speech capacities. Although there is an open question about who is ultimately responsible for their speech, if anybody, we can settle this question by using the notion of proxy speech, in which responsibility for artificial speech acts is assigned legally or conventionally to an entity separate from the speech actant.
The paper investigates an elliptical construction, Clarification Ellipsis, that occurs in dialogue. We suggest that this provides data that demonstrates that updates resulting from utterances cannot be defined in purely semantic terms, contrary to the prevailing assumptions of existing approaches to dynamic semantics. We offer a computationally oriented analysis of the resolution of ellipsis in certain cases of dialogue clarification. We show that this goes beyond standard techniques used in anaphora and ellipsis resolution and requires operations on highly structured, linguistically heterogeneous representations. We characterize these operations and the representations on which they operate. We offer an analysis couched in a version of Head-Driven Phrase Structure Grammar combined with a theory of information states (IS) in dialogue. We sketch an algorithm for the process of utterance integration in IS which leads to grounding or clarification. The account proposed here has direct applications to the theory of attitude reports, an issue which is explored briefly in the concluding remarks of the paper.
This article discusses pragmatic aspects of our interpretation of intensional constructions like questions and propositional attitude reports. In the first part, it argues that our evaluation of these constructions may vary relative to the identification methods operative in the context of use. This insight is then given a precise formalization in a possible world semantics. In the second part, an account of actual evaluations of questions and attitudes is proposed in the framework of bi-directional optimality theory. Pragmatic meaning selections are explained as the result of specific rankings of potentially conflicting generation and interpretation constraints.
We study the computational complexity of polyadic quantifiers in natural language. This type of quantification is widely used in formal semantics to model the meaning of multi-quantifier sentences. First, we show that the standard constructions that turn simple determiners into complex quantifiers, namely Boolean operations, iteration, cumulation, and resumption, are tractable. Then, we provide an insight into the branching operation, which yields intractable natural language multi-quantifier expressions. Next, we focus on a linguistic case study. We use computational complexity results to investigate semantic distinctions between quantified reciprocal sentences. We show a computational dichotomy between different readings of reciprocity. Finally, we go more into philosophical speculation on meaning, ambiguity and computational complexity. In particular, we investigate a possibility to revise the Strong Meaning Hypothesis with complexity aspects to better account for meaning shifts in the domain of multi-quantifier sentences. The paper not only contributes to the field of formal semantics but also illustrates how the tools of computational complexity theory might be successfully used in linguistics and philosophy with an eye towards cognitive science.
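The tractability of iterated (as opposed to branching) readings can be illustrated with a toy model checker. The sentence, the model, and all names below are invented for illustration: the point is only that the iterated reading of "Every A R some B" is decidable in O(|A| · |B|) time on a finite model, whereas checking a branching reading in general requires searching for a witness set, which is what drives the intractability the abstract mentions.

```python
# Iterated reading of "Every boy greeted some girl" on a finite model:
# nested quantifiers compile into nested loops, hence polynomial time.

def every_some(A, B, R):
    """Iterated reading: for every a in A there is some b in B with (a, b) in R."""
    return all(any((a, b) in R for b in B) for a in A)

# Toy model (all names hypothetical).
boys = {"al", "bo"}
girls = {"cy", "di"}
greeted = {("al", "cy"), ("bo", "di")}

print(every_some(boys, girls, greeted))  # True: each boy greeted some girl
```

Each of the tractable constructions named in the abstract (Boolean combination, iteration, cumulation, resumption) admits this kind of direct loop-based evaluation; branching does not.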
In this paper the Centering model of anaphora resolution and discourse coherence (Grosz et al. 1983, 1995) is reformulated in terms of Optimality Theory (OT) (Prince and Smolensky 1993). One version of the reformulated model is proven to be descriptively equivalent to an earlier algorithmic statement of Centering due to Brennan, Friedman and Pollard (1987). However, the new model is stated declaratively, and makes clearer the status of the various constraints used in the theory. In the second part of the paper, the model is extended, demonstrating the advantages of the OT reformulation, and capturing formally ideas originally described by Grosz, Joshi and Weinstein. Three new applications of the extended OT Centering model are described: generation of linguistic forms from meanings, the evaluation and optimization of extended texts, and the interpretation of accented pronouns.
The present paper deals with the semantics of locative expressions. Our approach is essentially model-theoretic, using basic geometrical properties of the space-time continuum. We shall demonstrate that locatives consist of two layers: the first layer defines a location and the second a type of movement with respect to that location. The elements defining these layers, called localisers and modalisers, tend to form a unit, which is typically either an adposition or a case marker. It will be seen that this layering is not only semantically but in many languages also morphologically manifest. There are numerous languages in which the morphology is sufficiently transparent with respect to the layering. The consequences of this theory are manifold. For example, we shall show that it explains the contrast between English and Finnish concerning directionals, which is discussed in Fong (1997). In addition, we shall be concerned with the question of orientation of locatives, as discussed in Nam (1995). We propose that nondirectional locatives are oriented to the event, while directional locatives are oriented to certain arguments, called movers.
Linguists' intuitions about language change can be captured by a dynamical systems model derived from the dynamics of language acquisition. Rather than having to posit a separate model for diachronic change, as has sometimes been done by drawing on assumptions from population biology (cf. Cavalli-Sforza and Feldman, 1973; 1981; Kroch, 1990), this new model dispenses with these independent assumptions by showing how the behavior of individual language learners leads to emergent, global population characteristics of linguistic communities over several generations. As the simplest case, we formalize the example of two grammars and show that even this situation leads directly to a nonlinear (quadratic) dynamical system. We study this one-parameter model in a variety of situations for different kinds of acquisition algorithms and maturational times, showing how different learning theories can have very different evolutionary consequences. This allows us to formulate an evolutionary criterion for the adequacy of grammatical and learning theories. An application of the computational model to the historical loss of Verb Second from Old French to Modern French is described, showing how otherwise adequate grammatical theories might fail the evolutionary criterion.
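The two-grammar case can be sketched with a minimal quadratic map. The parameterization below is my own illustration, not the paper's exact model: let p be the fraction of the population acquiring grammar G1, and suppose learning gives G1 an advantage s, yielding the update p' = p + s·p·(1 - p). For s > 0 the fixed point p = 1 attracts, so G1 takes over the community across generations.

```python
# Illustrative quadratic dynamical system for two competing grammars.
# p = fraction of learners acquiring grammar G1; s = assumed learning
# advantage of G1 (both hypothetical parameters, chosen for illustration).

def step(p, s):
    return p + s * p * (1 - p)

def iterate(p0, s, generations):
    p = p0
    history = [p]
    for _ in range(generations):
        p = step(p, s)
        history.append(p)
    return history

traj = iterate(0.05, 0.3, 60)
print(round(traj[-1], 3))  # approaches 1.0: G1 displaces G2 over generations
```

Swapping in a different acquisition algorithm changes the update rule, and hence the fixed points and their stability, which is the sense in which different learning theories have different evolutionary consequences.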
I present an analysis of Free Choice Items (FCIs), based on Scandinavian, where FCIs are complex and distinct from polarity sensitive items. Scandinavian FCIs are argued to have two components. One is a universal quantifying into modal contexts. The other is an operator mapping a type (s,t) expression onto itself, adjoining to the closest type t or (s,t) expression. Thus invoking Intensional Functional Application, this operator requires the presence of a modal in the scope of the universal quantifier. Facts concerning ‘essential connections’ and ‘existential import’ are accounted for by assuming that the FC determiner has the option of acting like a quantifier.