According to one view of linguistic information, a speaker can convey contextually new information in one of two ways: by asserting the content as new information, or by presupposing the content as given information which would then have to be accommodated. This distinction predicts that it is conversationally more appropriate to assert implausible information than to presuppose it. A second view rejects the assumption that presuppositions are accommodated; instead, presuppositions are assimilated into asserted content, and both are correspondingly open to challenge. Under this view, we should not expect to find a difference in conversational appropriateness between asserting implausible information and presupposing it. To distinguish between these two views of linguistic information, we performed two self-paced reading experiments with an on-line stops-making-sense judgment. The results of the two experiments—using the presupposition triggers the and too—show that accommodation is inappropriate relative to non-presuppositional controls when the presupposed information is implausible but not when it is plausible. These results support the first view of linguistic information: the contrast in implausible contexts can only be explained if there is a presupposition-assertion distinction and accommodation is a mechanism dedicated to reasoning about presuppositions.
Although the mapping between form and meaning is often regarded as arbitrary, there are in fact well-known constraints on words which result from functional pressures associated with language use and its acquisition. In particular, languages have been shown to encode meaning distinctions in their sound properties, which may be important for language learning. Here, we investigate the relationship between semantic distance and phonological distance in the large-scale structure of the lexicon. We show evidence in 100 languages from a diverse array of language families that more semantically similar word pairs are also more phonologically similar. This suggests an important statistical trend for semantically similar words in a lexicon to also be phonologically similar, possibly for functional reasons associated with language learning.
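The core measurement behind this kind of study can be sketched on a toy example. The following is a minimal illustration, not the authors' actual method: the lexicon, its semantic features, and the choice of edit distance and feature-overlap are all hypothetical stand-ins for the real phonological and semantic measures.

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance, used here as a crude
    # stand-in for phonological distance between word forms.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n]

def pearson(xs, ys):
    # Pearson correlation, computed by hand to keep the sketch stdlib-only.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical lexicon: each word gets a set of binary semantic features;
# Jaccard distance over feature sets is the toy semantic-distance proxy.
lexicon = {
    "cat":   {"animal", "pet", "small"},
    "bat":   {"animal", "small", "flies"},
    "dog":   {"animal", "pet"},
    "table": {"furniture", "flat"},
    "cable": {"object", "long"},
}

words = list(lexicon)
pairs = [(w1, w2) for i, w1 in enumerate(words) for w2 in words[i + 1:]]
phon = [levenshtein(w1, w2) for w1, w2 in pairs]
sem = [1 - len(lexicon[w1] & lexicon[w2]) / len(lexicon[w1] | lexicon[w2])
       for w1, w2 in pairs]

# A positive r would mean semantically closer pairs are also
# phonologically closer in this toy lexicon.
r = pearson(phon, sem)
print(round(r, 3))
```

A real analysis of this kind would replace the toy pieces with phonemic transcriptions, distributional semantic vectors, and controls for word length and frequency.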
Hackl, Koster-Hale & Varvoutis provide data suggesting that, in a null context, antecedent-contained deletion (ACD) relative clause structures modifying a quantified object noun phrase are easier to process than those modifying a definite object NP. HKV argue that this pattern of results supports a quantifier-raising (QR) analysis of both ACD structures and quantified NPs in object position: under the account they advocate, both ACD resolution and quantified NPs in object position require movement of the object NP to a higher syntactic position. The processing advantage for quantified object NPs in ACD is hypothesized to derive from the fact that—at the point where ACD resolution must take place—the quantified NP has already undergone QR, whereas this is not the case for definite NPs. Here, we question these conclusions. In particular, our analyses of HKV's reading time data reveal several unreported choice points, errors, and concerns regarding multiple comparisons in the original HKV data analysis. Importantly, most other plausible ways of analyzing these data that we describe here result in the crucial interaction being non-significant. Putting this observation together with the failure to observe the crucial interaction in Gibson & Levy, we conclude that the experiments reported by HKV should not be viewed as providing evidence for the ACD quantifier-raising processing effect.
Results from two self-paced reading experiments in English are reported in which subject- and object-extracted relative clauses (SRCs and ORCs, respectively) were presented in contexts that support both types of relative clauses (RCs). Object-extracted versions were read more slowly than subject-extracted versions across both experiments. These results are not consistent with a decay-based working memory account of dependency formation where the amount of decay is a function of the number of new discourse referents that intervene between the dependents (Gibson, 1998; Warren & Gibson, 2002). Rather, these results support interference-based accounts and decay-based accounts where the amount of decay depends on the number of words or on the type of noun phrases that intervene between the dependents. In Experiment 2, presentation in supportive contexts was directly contrasted with presentation in null contexts. Whereas in the null context the extraction effect was only observed during the RC region, in a supportive context the extraction effect was numerically larger and persisted into the following region, thus showing that extraction effects are enhanced in supportive contexts. A sentence completion study demonstrated that the rate of SRCs versus ORCs was similar across null and supportive contexts (with most completions being subject-extractions), ruling out the possibility that an enhanced extraction effect in supportive contexts is due to ORCs being less expected in such contexts. However, the content of the RCs differed between contexts in the completions, such that the RCs produced in supportive contexts were more constrained, reflecting the lexical and semantic content of the preceding context. This effect, which we discuss in terms of expectations/lexico-syntactic priming, suggests that the enhancement of the extraction effect in supportive contexts is due to the facilitation of the subject-extracted condition.
Absolute linguistic universals are often justified by cross-linguistic analysis: if all observed languages exhibit a property, the property is taken to be a likely universal, perhaps specified in the cognitive or linguistic systems of language learners and users. In many cases, these patterns are then taken to motivate linguistic theory. Here, we show that cross-linguistic analysis will very rarely be able to statistically justify absolute, inviolable patterns in language. We formalize two statistical methods—one frequentist and one Bayesian—and show that under both it is possible to find strict linguistic universals, but that the number of independent languages necessary to do so is generally unachievable. This suggests that methods other than typological statistics are necessary to establish absolute properties of human language, and thus that many of the purported universals in linguistics have not received sufficient empirical justification.
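The frequentist side of this argument can be illustrated with a back-of-the-envelope calculation (my own sketch, not the paper's exact formalization). Suppose a property is not a true universal but holds in a fraction p of possible languages; then n independent languages all showing it has probability p**n, so rejecting that alternative at significance level alpha requires p**n <= alpha, i.e. n >= log(alpha) / log(p).

```python
import math

def languages_needed(p, alpha=0.05):
    """Smallest n of independent languages that all show the property
    such that p**n <= alpha, i.e. the merely-common alternative (rate p)
    can be rejected at level alpha."""
    return math.ceil(math.log(alpha) / math.log(p))

# The closer the alternative is to a true universal, the more
# independent languages are required to rule it out.
for p in (0.9, 0.99, 0.999):
    print(p, languages_needed(p))
```

Since observations must be statistically independent (roughly, drawn from distinct language families rather than related languages), the achievable sample sizes fall far short of what the tighter alternatives demand, which is the abstract's point.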
We discuss several issues raised by Caplan & Waters's distinction between interpretive and post-interpretive processes in sentence comprehension, including the nature and properties of the two systems, problems with measuring their respective capacities, and the relationship between the hypothesized separate language interpretation resource (SLIR) and the general verbal working memory system that supports post-interpretive processing.