Persons registered to vote in Seattle, Washington for the November 1986 general election and a September 1987 primary election were randomly assigned to treatments in two telephone-conducted experiments that sought to increase voter turnout. The experiments applied and extended a "self-prophecy" technique, in which respondents are asked simply to predict whether or not they will perform a target action. In the present studies, voting registrants were asked to predict whether or not they would vote in an election that was less than 48 hours away. This technique, which previously increased turnout in a small study done during the 1984 U.S. Presidential election, was again effective among moderate prior-turnout voters in the second of the present much larger experiments. The failure of the effect in Experiment 1 was plausibly a ceiling effect due to very high turnout for a U.S. Senate contest in the 1986 election. Successful applications of the self-prophecy technique are facilitated by social desirability of the target action (which leads subjects to predict that they will perform it). However, social desirability of the target behavior is not a sufficient condition for the effect, as indicated by an unexpected nonoccurrence of the effect among low prior-turnout voters in Experiment 2.
This paper presents a sound and complete proof system for the first order fragment of Discourse Representation Theory. Since the inferences that human language users draw from the verbal input they receive for the most part transcend the capacities of such a system, it can be no more than a basis on which more powerful systems, which are capable of producing those inferences, may then be built. Nevertheless, even within the general setting of first order logic, the structure of the formulas of DRS-languages, i.e. of the Discourse Representation Structures, suggests inference rules for the components of such a system that differ somewhat from those usually found in proof systems for the first order predicate calculus and which are, we believe, more in keeping with inference patterns that are actually employed in common sense reasoning. This is why we have decided to publish the present exercise, in spite of the fact that it is not one for which a great deal of originality could be claimed. In fact, it could be argued that the problem addressed in this paper was solved when Gödel first established the completeness of the system of Principia Mathematica for first order logic. For the DRS-languages we consider here are straightforwardly intertranslatable with standard formulations of the predicate calculus; in fact the translations are so straightforward that any sound and complete proof system for first order logic can be used as a sound and complete proof system for DRSs: simply translate the DRSs into formulas of predicate logic and then proceed as usual. As a matter of fact, this is how some implementations of DRT that involve inferencing as well as semantic representation have chosen to proceed; an example is the Lex system developed jointly by IBM and the University of Tübingen (see in particular Guenthner et al. 1986).
In [HKL00] (henceforth HKL), Hamm, Kamp and van Lambalgen declare "there is no opposition between formal and cognitive semantics," notwithstanding the realist/mentalist divide. That divide separates two sides Jackendoff has (in [Jac96], following Chomsky) labeled E(xternalized)-semantics, relating language to a reality independent of speakers, and I(nternalized)-semantics, revolving around mental representations and thought. Although formal semanticists have (following David Lewis) traditionally leaned towards E-semantics, it is reasonable to apply formal methods also to I-semantics. This point is made clear in HKL via two computational approaches to natural language semantics, Discourse Representation Theory (DRT, [KR93]) and the Event Calculus (EC) presented in [LH05]. In this short note, I wish to raise certain questions about EC that can be traced to the applicability of formal methods to E-semantics and I-semantics alike. These opposing orientations suggest different notions of time, event and representation.
According to an influential theory, English tenses are anaphoric to an aforementioned reference point. This point is sometimes construed as a time (e.g. Reichenbach 1947, Partee 1973, Stone 1997) and sometimes as an event (e.g. Kamp 1979, 1981, Webber 1988). Moreover, some researchers draw semantic parallels between tenses and pronouns (e.g. Partee 1973, 1984, Stone 1997), whereas others draw parallels between tenses and anaphorically anchored (in)definite descriptions (e.g.
The standard way to represent anaphoric dependencies is to co-index the anaphor with its antecedent in the syntactic input to semantic rules, which then interpret such indices as variables. Dynamic theories (e.g. Kamp’s DRT, Heim’s File Change Semantics, Muskens’s Compositional DRT, etc.) combine syntactic co-indexation with semantic left-to-right asymmetry. This captures the fact that the anaphor gets its referent from the antecedent and not vice versa. Formally, a text updates the input state of information to the output state. In particular, an indexed antecedent updates the entity assigned to its index, and the output entity is then picked up as the referent by any subsequent co-indexed anaphor.
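The update regime this abstract describes can be sketched in a few lines. The following is a minimal toy model, not the formalism of any of the cited theories: an information state is just a mapping from indices to entities, an antecedent updates the state functionally, and an anaphor reads the entity off the incoming state. The names `introduce` and `resolve` are illustrative.

```python
# Toy dynamic anaphora: states map indices to entities.
State = dict  # index -> entity

def introduce(state: State, index: int, entity: str) -> State:
    """An indexed antecedent updates the entity assigned to its index,
    producing the output state (the input state is left unchanged)."""
    new_state = dict(state)
    new_state[index] = entity
    return new_state

def resolve(state: State, index: int) -> str:
    """A co-indexed anaphor picks up its referent from the input state."""
    return state[index]

# "A_1 farmer owns a_2 donkey. He_1 beats it_2."
s0: State = {}
s1 = introduce(s0, 1, "farmer")   # antecedent "a_1 farmer"
s2 = introduce(s1, 2, "donkey")   # antecedent "a_2 donkey"
assert resolve(s2, 1) == "farmer"  # "he_1" resolved left-to-right
assert resolve(s2, 2) == "donkey"  # "it_2"
```

The left-to-right asymmetry shows up in the threading of states: `resolve` can only see what an earlier `introduce` has put into the state, never the other way around.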
There is a Standard Objection to the idea that concepts might be prototypes (or exemplars, or stereotypes): Because they are productive, concepts must be compositional. Prototypes aren't compositional, so concepts can't be prototypes (see, e.g., Margolis, 1994). However, two recent papers (Osherson and Smith, 1988; Kamp and Partee, 1995) reconsider this consensus. They suggest that, although the Standard Objection is probably right in the long run, the cases where prototypes fail to exhibit compositionality are relatively exotic and involve phenomena which any account of compositionality is likely to find hard to deal with; for example, the effects of quantifiers, indexicals, contextual constraints, etc. KP are even prepared to indulge a guarded optimism: "... when a suitably rich compositional theory... is developed, prototypes will be seen ... as one property among many which only when taken altogether can support a compositional theory of combination" (p. 56). In this paper, we argue that the Standard Objection to prototype theory was right after all: The problems about compositionality are insuperable in even the most trivial sorts of examples; it is therefore as near to certain as anything in cognitive science ever gets that the structure of concepts is not statistical. Theories of categorization, concept acquisition, lexical meaning and the like, which assume the contrary, simply don't work. We commence with a general discussion of the constraints that an account of concepts must meet if their compositionality is to explain their productivity. We'll then turn to a criticism of proposals that OS and KP make for coping with some specific cases.
This paper is about the semantic analysis of referentially opaque verbs like seek and owe that give rise to nonspecific readings. It is argued that Montague's categorization (based on earlier work by Quine) of opaque verbs as properties of quantifiers runs into two serious difficulties: the first problem is that it does not work with opaque verbs like resemble that resist any lexical decomposition of the seek = try to find kind; the second one is that it wrongly predicts de dicto (i.e. narrow scope) readings due to quantified noun phrases in the object positions of such verbs. It is shown that both difficulties can be overcome by an analysis of opaque verbs as operating on properties. This is a strongly modified version of a paper entitled 'Do We Bear Attitudes towards Quantifiers?' that I have presented at conferences in Gosen (Gesellschaft für Sprachwissenschaft), Ithaca (SALT I), and Konstanz (Lexikon). I owe a special debt to Hans Kamp and Arnim von Stechow for shaping my views on the subject of this paper during the past ten years or so. Comments from and discussions with the following friends and colleagues have also led to considerable improvements: Heinrich Beck, Steve Berman, David Dowty, Veerle van Geenhoven, Fritz Hamm, Irene Heim, Wolfgang Klein, Angelika Kratzer, Michael Morreau, Barbara Partee, Mats Rooth, Roger Schwarzschild, Wolfgang Sternefeld, Emil Weydert, Henk Zeevat, and three referees.
This paper consists principally of selections from a much longer work on the semantics of English. It discusses some problems concerning how to represent grammatical modifiers (e.g. slowly in x drives slowly) in a logically perspicuous notation. A proposal of Reichenbach's is given and criticized; then a new theory (apparently discovered independently by myself, Romane Clark, and Richard Montague and Hans Kamp) is given, in which grammatical modifiers are represented by operators added to a first-order predicate calculus. Finally, some problems concerning applications of adjectives to that-clauses and gerundive-clauses are discussed.
Events and situations are represented by strings of temporally ordered observations, on the basis of which the events and situations are recognized. Allen's basic interval relations are derived from superposing strings that mark interval boundaries, and Kamp's event structures are constructed as projective limits of strings. Observations are generalized to temporal propositions, leading to event-types that classify event-instances. Working with sets of strings built from temporal propositions, we obtain natural notions of bounded entailment from set inclusions. These inclusions are decidable if the sets are accepted by finite automata.
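The string-based picture of intervals can be illustrated concretely. The sketch below is not the abstract's exact construction, only a simplification of the general idea: each instant is a "box" (the set of intervals holding there), stuttering boxes are compressed, and the resulting string displays an Allen relation. The function names `string_of` and `compress` are illustrative.

```python
# Each moment of a discrete timeline becomes the set of interval names
# holding there; compressing consecutive duplicates yields a short
# string whose shape displays the Allen relation between the intervals.

def string_of(intervals, timeline):
    """Snapshot string: at instant t, the set of intervals i with
    start <= t < end."""
    return [frozenset(name for name, (start, end) in intervals.items()
                      if start <= t < end)
            for t in timeline]

def compress(string):
    """Collapse stuttering (consecutive identical boxes) and trim empty
    boxes at the edges; an internal empty box is kept, since it records
    a gap (as in "a before b")."""
    out = []
    for box in string:
        if not out or out[-1] != box:
            out.append(box)
    while out and not out[0]:
        out.pop(0)
    while out and not out[-1]:
        out.pop()
    return out

# a overlaps b: the compressed string is [a][a,b][b]
s = compress(string_of({"a": (0, 4), "b": (2, 6)}, range(0, 7)))
assert s == [frozenset({"a"}), frozenset({"a", "b"}), frozenset({"b"})]

# a meets b: no shared box and no gap, i.e. [a][b]
s = compress(string_of({"a": (0, 2), "b": (2, 4)}, range(0, 5)))
assert s == [frozenset({"a"}), frozenset({"b"})]
```

Since each relation corresponds to a fixed finite string shape, recognizing which relation holds reduces to membership in a finite (hence automaton-acceptable) set of strings, which is in the spirit of the decidability remark above.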
The relationships between logic and natural language are manifold. On the one hand, logic is a theory of argumentation, proving and giving reasons, and such activities are primarily carried out in natural language. This means that logic is, in a certain loose sense, about natural language. On the other hand, logic has found it useful to develop its own linguistic means, which sometimes in a sense compete with those of natural language. This has led to the situation where the systems of logic can be taken as interesting "models" of various aspects of natural language. The alliance of logic and linguistics has flowered especially from the beginning of the seventies, when scholars like Montague, Lewis, Cresswell, Partee and others showed how the semantics of natural language can be explicated with the help of certain suitable logical calculi and the corresponding model theory. (Montague went so far as to claim that in view of this, there is no principal difference between natural and formal languages - but this is, as far as I can see, rather misleading.) Since that time, the interdisciplinary movement of formal semantics (associating not only linguists and logicians, but also philosophers, computer scientists, cognitive psychologists and others) has yielded a rich repertoire of formal theories of natural language, some of them (like Hintikka's game-theoretical semantics or the dynamic logic of Groenendijk and Stokhof) being based directly on logic, others (like the situation semantics of Barwise and Perry or the DRT of Kamp) exploiting different formal strategies. Moreover, although the enterprise of formal semantics (i.e. of modeling natural language semantics by means of certain formal structures) seems to be the principal point of contact between linguistics and logic, there are also other cooperative enterprises.
One of the most fruitful ones seems to be the logical analysis of syntax, which has resulted from the elaboration of what was originally called categorial grammar. (However, even this enterprise can be seen as importantly stimulated by Montague.) All in all, the region in which logic and theoretical linguistics overlap has grown both in size and fertility.
The meanings of donkey sentences cannot be captured using a procedure which, like Montague's, uses the existential quantifiers of classical logic to translate indefinites and the variables to translate pronouns. The treatment of these examples requires meanings which depend on the context in which sentences appear, and thus necessitates a logic which models this context to some extent. If context is represented as the information conveyed in discourse, and the meanings of pronouns are enriched to depend on this information, the result is the E-Type approach (ETA) adapted by Heim (1990) from proposals in Evans (1980) and Cooper (1979). If the context is represented as a list of potential referents, and the meanings of indefinites are enriched to introduce new referents into this list, the result is a compositional formulation like Groenendijk and Stokhof's (1990) of the discourse representation theory (DRT) of Kamp (1981) and Heim (1982). Either tack suffices to capture the way in which the referents of he and it systematically correspond to the alternative possibilities described by the antecedent. Disjunction offers a parallel way of introducing alternatives in the antecedent of a conditional, as shown in (2).
Hauser, H. La réponse de Jean Bodin à M. de Malestroit.--Levron, J. Jean Bodin et sa famille.--Kamp, M. E. Die Staatswirtschaftslehre Jean Bodins.--Mesnard, P. La pensée religieuse de Bodin.--Bezold, F. von. Jean Bodin als Occultist und seine Démonomanie.--Bezold, F. von. Jean Bodins Colloquium Heptaplomeres und der Atheismus des 16.--Feist, E. Weltbild und Staatsidee bei Jean Bodin.--Mayer, J. P. Jefferson as reader of Bodin.
Both formal semantics and cognitive semantics are the source of important insights about language. By developing precise statements of the rules of meaning in fragmentary, abstract languages, formalists have been able to offer perspicuous accounts of how we might come to know such rules and use them to communicate with others. Conversely, by charting the overall landscape of interpretations, cognitivists have documented how closely interpretations draw on the commonsense knowledge that lets us make our way in the world. There is no opposition between these insights. Sooner or later we will have a semantics that responds to both. However, developing such a semantics is profoundly difficult, because there are certain tensions to be overcome in reconciling the two perspectives. For one thing, the overall landscape of meaning does seem to be characterized by a much richer ontology and more dynamic categories than are exhibited by the fragments typically studied in the formal tradition. One sign of strain is the recent tendency to talk of "procedural", "non-compositional", or "computational" semantics, as in Hamm, Kamp and van Lambalgen 2006, hereafter HK&vL. We think such locutions can serve as useful reminders to keep semantics fixed on the central question of how language allows us to share information that some have and others need to get. However, there is some danger that formalists will merely be put off by an idea that, taken literally, may not be such a good one. In this short article, we want to explore and defend the traditional realist view attributed by HK&vL to Lewis among others. In fact, this view offers a well-developed, extremely straightforward and robust account of the relation between semantics and cognition.
Moreover, while the realist view has ways of accommodating the representationalist insights of DRT (Lewis 1979; Thomason 1990; Stalnaker 1998), it remains unclear how "computational" semantics can account for the key data for the realist view: cases where we judge interlocutors to be ignorant about aspects of meaning in their native language (Kripke 1972; Putnam 1975; Stalnaker 1979; Williamson 1994).
Although van der Velde and de Kamps's (vdV&dK) attempt to put syntactic processing into a broader context of combinatorial cognition is promising, their coverage of neuroscientific evidence is disappointing. Neither their case against binding by temporal coherence nor their arguments against recurrent neural networks are compelling. As an alternative, vdV&dK propose a blackboard model that is based on the assumption of special processors (e.g., lexical versus grammatical), but evidence from the cognitive neuroscience of language, which is, overall, less than supportive of such special processors, is not considered. As a consequence, vdV&dK's proposal may be a clever model of syntactic processing, but it remains unclear how much we can learn from it with regard to biologically based human language.
van der Velde & de Kamps argue for the importance of considering the binding problem in accounts of human mental representation. However, their proposed solution fails as a complete account because it represents the bindings between roles and their fillers through associations (or connections). In addition, many criticisms leveled by the authors at synchrony-based binding models do not hold.