In the normal course of events, children manifest linguistic competence equivalent to that of adults in just a few years. Children can produce and understand novel sentences, they can judge that certain strings of words are true or false, and so on. Yet experience appears to dramatically underdetermine the competence children so rapidly achieve, even given optimistic assumptions about children’s nonlinguistic capacities to extract information and form generalizations on the basis of statistical regularities in the input. These considerations underlie various (more specific) poverty of stimulus arguments for the innate specification of linguistic principles. But in our view, certain features of nativist arguments have not yet been fully appreciated. We focus here on three (related) kinds of poverty of stimulus argument, each of which has been supported by the findings of psycholinguistic investigations of child language.
Speakers can use sentences to make assertions. Theorists who reflect on this truism often say that sentences have linguistic meanings, and that assertions have propositional contents. But how are meanings related to contents? Are meanings less dependent on the environment? Are contents more independent of language? These are large questions, which must be understood partly in terms of the phenomena that lead theorists to use words like ‘meaning’ and ‘content’, sometimes in nonstandard ways. Opportunities for terminological confusion thus abound when talking about the relations among semantics, pragmatics, and truth. As Stalnaker (2003) stresses, in Quinean fashion, it is hard to separate the task of evaluating hypotheses in these domains from the task of getting clear about what the hypotheses are. But after some stage-setting, I suggest that we combine Stalnaker’s (1970, 1978, 1984, 1999, 2003) externalist account of content with Chomsky’s (1965, 1977, 1993, 2000a) internalist conception of meaning.
We think recent work in linguistics tells against the traditional claim that a string of words like (1) Every girl pushed some truck has two readings, indicated by the following formal language sentences (with restricted quantifiers): (1a) [∀x: Gx][∃y: Ty] Pxy (1b) [∃y: Ty][∀x: Gx] Pxy. In our view, (1) does not have any b-reading in which ‘some truck’ has widest scope. The issue turns on details concerning syntactic transformations and terms like ‘every’. This illustrates an important point for the study of natural language: ambiguity hypotheses are indeed hypotheses—i.e., theoretical claims to be justified in light of various considerations, not theses whose truth can be directly observed by speakers.
Paul Pietroski, McGill University. The general topic of Mind and World, the written version of John McDowell’s 1991 John Locke Lectures, is how ‘concepts mediate the relation between minds and the world’. And one of the main aims is ‘to suggest that Kant should still have a central place in our discussion of the way thought bears on reality’ (1). In particular, McDowell urges us to adopt a thesis that he finds in Kant, or perhaps in Strawson’s Kant: the content of experience is conceptualized; what we experience is always the kind of thing that we could also believe. When an agent has a veridical experience, she ‘takes in, for instance sees, that things are thus and so’ (9). McDowell’s argument for this thesis is indirect, but potentially powerful. He discusses a tension concerning the roles of experience and conceptual capacities in thought, and he claims that the only adequate resolution involves granting that experiences have conceptualized content. The tension, elaborated below, can be expressed roughly as follows: judgments must be somehow constrained by features of the external environment, else judgments would be utterly divorced from the world they purport to be about; yet our judgments must be somehow free of external control, else we could give no sense to the idea that we are responsible for our judgments.
In my view, meanings are instructions to construct monadic concepts that can be conjoined with others, given a few thematic relations and an operation of existential closure. For example, ‘red ball’ is understood as—and has the semantic property of being—an instruction to fetch and conjoin two concepts that are linked, respectively, to ‘red’ and ‘ball’. Other expressions are more complex. But to a first approximation, ‘I stabbed it violently with this’ is an instruction to construct and existentially close a six-conjunct concept of the form indicated below: AGENT(E, S) & STAB(E) & BEFORE(E, T) & PATIENT(E, 1) & VIOLENT(E) & INSTRUMENT(E, 2), where ‘S’, ‘T’, ‘1’ and ‘2’ stand for concepts of the relevant speaker, time, and things demonstrated. The verb and adverb correspond directly to conjoinable concepts of events. The pronouns correspond to such concepts via certain relational notions, reflected in the tense, preposition, and grammatical relations that ‘stab’ bears to its subject and object. I have argued elsewhere that this neo-Davidsonian conception of semantics is descriptively adequate, and yet…
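To make the conjunctive format concrete, here is a minimal toy sketch (my own illustrative encoding, not Pietroski’s formalism): monadic concepts are modeled as predicates of events, conjunction as predicate conjunction, and existential closure as a search over a small domain of events.

```python
# Toy model of conjoined monadic event concepts with existential closure.
# All names and the dict-based event representation are illustrative only.

def conjoin(*concepts):
    """Conjoin monadic concepts into one monadic concept of events."""
    return lambda e: all(c(e) for c in concepts)

def exists(domain, concept):
    """Existential closure: some event in the domain satisfies the concept."""
    return any(concept(e) for e in domain)

# Fixed demonstrated elements: speaker S, time T, and things 1 and 2.
speaker, time, thing1, thing2 = "S", "T", "1", "2"

# An event encoded as a record of its participants and properties.
e1 = {"agent": speaker, "kind": "stab", "before": time,
      "patient": thing1, "violent": True, "instrument": thing2}

# The six conjuncts of 'I stabbed it violently with this'.
AGENT      = lambda e: e.get("agent") == speaker
STAB       = lambda e: e.get("kind") == "stab"
BEFORE     = lambda e: e.get("before") == time
PATIENT    = lambda e: e.get("patient") == thing1
VIOLENT    = lambda e: e.get("violent") is True
INSTRUMENT = lambda e: e.get("instrument") == thing2

sentence = conjoin(AGENT, STAB, BEFORE, PATIENT, VIOLENT, INSTRUMENT)
print(exists([e1], sentence))  # e1 satisfies the closed conjunction
```

The point of the sketch is only that each lexical item contributes a monadic conjunct, and the sentence as a whole is true iff some event satisfies them all.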
Frege proved an important result, concerning the relation of arithmetic to second-order logic, that bears on several issues in linguistics. Frege’s Theorem illustrates the logic of relations like PRECEDES(x, y) and TALLER(x, y), while raising doubts about the idea that we understand sentences like ‘Carl is taller than Al’ in terms of abstracta like heights and numbers. Abstract paraphrase can be useful—as when we say that Carl’s height exceeds Al’s—without reflecting semantic structure. Related points apply to causal relations, and even grammatical relations like DOMINATES(x, y). Perhaps surprisingly, Frege provides the resources needed to recursively characterize labelled expressions without characterizing them as sets. His theorem may also bear on questions about the meaning and acquisition of number words.
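For context, the result at issue can be stated compactly. Hume’s Principle says that the number of Fs is the number of Gs just in case the Fs correspond one-to-one with the Gs; Frege’s Theorem is that, in second-order logic, the Dedekind–Peano axioms of arithmetic are derivable from Hume’s Principle.

```latex
% Hume's Principle (HP), writing '#' for 'the number of':
\forall F\,\forall G\,\bigl[\,\#F = \#G \;\leftrightarrow\; F \approx G\,\bigr]
% where 'F \approx G' abbreviates the second-order claim that some
% relation R correlates the Fs one-to-one with the Gs.
```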
It is, I suppose, a truism that an adequate theory of meaning for a natural language L will associate each sentence of L with its meaning. But the converse does not hold. A theory that associates each sentence with its meaning is not, by virtue of that fact, an adequate theory of meaning. For it is also a truism that a semantic theory should explain the (interesting and explicable) semantic facts. And one cannot decree that the relevant facts are all reportable with instances of schemata like ‘S means that p’ or ‘S, by virtue of its meaning, is true iff p’. Investigation suggests that there is much more for semanticists to explain: natural languages exhibit synonymies, ambiguities, and entailments; for any string of words, there are endlessly many meanings it cannot have; there are semantic generalizations, including crosslinguistic generalizations, that go uncaptured and unexplained by merely associating sentences with their meanings; etc. Initially, one might think these facts are “peripheral” and can thus be ignored if the aim is to explain why sentences mean what they do. But the study of natural language suggests otherwise. (One can’t tell, in advance of investigation, which facts are peripheral to a given domain. It was initially tempting to think that one could ignore falling bodies, and the tides, if the aim was to explain why planets move as they do.)
Nativists inspired by Chomsky are apt to provide arguments with the following general form: languages exhibit interesting generalizations that are not suggested by casual (or even intensive) examination of what people actually say; correspondingly, adults (i.e., just about anyone above the age of four) know much more about language than they could plausibly have learned on the basis of their experience; so absent an alternative account of the relevant generalizations and speakers' (tacit) knowledge of them, one should conclude that there are substantive "universal" principles of human grammar and, as a result of human biology, children can only acquire languages that conform to these principles. According to Pullum and Scholz, linguists need not suppose that children are innately endowed with "specific contingent facts about natural languages." But Pullum and Scholz don't consider the kinds of facts that really impress nativists. Nor do they offer any plausible acquisition scenarios that would culminate in the acquisition of languages that exhibit the kinds of rich and interrelated generalizations that are exhibited by natural languages. As we stress, good poverty-of-stimulus arguments are based on specific principles -- confirmed by drawing on (negative and crosslinguistic) data unavailable to children -- that help explain a range of independently established linguistic phenomena. If subsequent psycholinguistic experiments show that very young children already know such principles, that strengthens the case for nativism; and if further investigation shows that children sometimes "try out" constructions that are unattested in the local language, but only if such constructions are attested in other human languages, then the case for nativism is made stronger still.
We illustrate these points by considering an apparently disparate -- but upon closer inspection, interestingly related -- cluster of phenomena involving: negative polarity items, the interpretation of 'or', binding theory, and displays of Romance and Germanic constructions in child-English.
Words indicate concepts, which have various adicities. But words do not, in general, inherit the adicities of the indicated concepts. Lots of evidence suggests that when a concept is lexicalized, it is linked to an analytically related monadic concept that can be conjoined with others. For example, the dyadic concept CHASE(_,_) might be linked to CHASE(_), a concept that applies to certain events. Drawing on a wide range of extant work, and familiar facts, I argue that the (open class) lexical items of a natural spoken language include neither names nor polyadic predicates. The paper ends with some speculations about the value of a language faculty that would impose uniform monadic analyses on all concepts, including the singular and relational concepts that we share with other animals.
Paul M. Pietroski, University of Maryland I had heard it said that Chomsky’s conception of language is at odds with the truth-conditional program in semantics. Some of my friends said it so often that the point—or at least a point—finally sunk in.
Paul M. Pietroski, University of Maryland For any sentence of a natural language, we can ask the following questions: what is its meaning; what is its syntactic structure; and how is its meaning related to its syntactic structure? Attending to these questions, as they apply to sentences that provide evidence for Davidsonian event analyses, suggests that we reconsider some traditional views about how the syntax of a natural language sentence is related to its meaning.
(in Chomsky and His Critics, edited [heroically] by Louise Antony and Norbert Hornstein, Blackwell 2003) You may need to “Rotate View, Clockwise” to get the .pdf file to appear properly. This paper was written in 1998, and so may be past its use-by date. Updated versions of various bits of the paper appear elsewhere; see note 1. More Truth in Advertising: I’m not criticizing Chomsky; though I am being critical, and Chomsky does figure prominently. The idea, as the subtitle suggests, is that there are analytic truths, even if the notion of synonymy is suspect. The trick involves (can you guess?) combining, in the right way, a neo-Davidsonian event semantics with a Minimalist syntax. Blatant Advertising: get hold of the entire book if only for Chomsky’s replies; for anyone interested in Chomsky’s conception of meaning (and his semantic internalism), see especially his replies to Egan, Rey, Ludlow, Horwich, and Pietroski.
In just a few years, children achieve a stable state of linguistic competence, making them effectively adults with respect to: understanding novel sentences, discerning relations of paraphrase and entailment, acceptability judgments, etc. One familiar account of the language acquisition process treats it as an induction problem of the sort that arises in any domain where the knowledge achieved is logically underdetermined by experience. This view highlights the 'cues' that are available in the input to children, as well as children's skills in extracting relevant information and forming generalizations on the basis of the data they receive. Nativists, on the other hand, contend that language-learners project beyond their experience in ways that the input does not even suggest. Instead of viewing language acquisition as a special case of theory induction, nativists posit a Universal Grammar, with innately specified linguistic principles of grammar formation. The 'nature versus nurture' debate continues, as various "poverty of stimulus" arguments are challenged or supported by developments in linguistic theory and by findings from psycholinguistic investigations of child language. In light of some recent challenges to nativism, we rehearse old poverty-of-stimulus arguments, and supplement them by drawing on more recent work in linguistic theory and studies of child language.
In a recent paper, Bar-On and Risjord (henceforth, 'B&R') contend that Davidson provides no good argument for his (in)famous claim that "there is no such thing as a language." And according to B&R, if Davidson had established his "no language" thesis, he would thereby have provided a decisive reason for abandoning the project he has long advocated--viz., that of trying to provide theories of meaning for natural languages by providing recursive theories of truth for such languages. For he would have shown that there are no languages to provide truth (or meaning) theories of. Davidson thus seems to be in the odd position of arguing badly for a claim that would undermine his own work.