In just a few years, children achieve a stable state of linguistic competence, making them effectively adults with respect to understanding novel sentences, discerning relations of paraphrase and entailment, acceptability judgments, and so on. One familiar account of the language acquisition process treats it as an induction problem of the sort that arises in any domain where the knowledge achieved is logically underdetermined by experience. This view highlights the cues that are available in the input to children, as well as children's skills in extracting relevant information and forming generalizations on the basis of the data they receive. Nativists, on the other hand, contend that language-learners project beyond their experience in ways that the input does not even suggest. Instead of viewing language acquisition as a special case of theory induction, nativists posit a Universal Grammar, with innately specified linguistic principles of grammar formation. The nature versus nurture debate continues, as various poverty-of-stimulus arguments are challenged or supported by developments in linguistic theory and by findings from psycholinguistic investigations of child language. In light of some recent challenges to nativism, we rehearse old poverty-of-stimulus arguments and supplement them by drawing on more recent work in linguistic theory and studies of child language.
Paul Pietroski presents an original philosophical theory of actions and their mental causes. We often act for reasons: we deliberate and choose among options, based on our beliefs and desires. However, bodily motions always have biochemical causes, so it can seem that thinking and acting are biochemical processes. Pietroski argues that thoughts and deeds are in fact distinct from, though dependent on, underlying biochemical processes within persons.
Paul M. Pietroski, University of Maryland

I had heard it said that Chomsky’s conception of language is at odds with the truth-conditional program in semantics. Some of my friends said it so often that the point—or at least a point—finally sank in.
I argue that linguistic meanings are instructions to build monadic concepts that lie between lexicalizable concepts and truth-evaluable judgments. In acquiring words, humans use concepts of various adicities to introduce concepts that can be fetched and systematically combined via certain conjunctive operations, which require monadic inputs. These concepts do not have Tarskian satisfaction conditions. But they provide bases for refinements and elaborations that can yield truth-evaluable judgments. Constructing mental sentences that are true or false requires cognitive work, not just an exercise of basic linguistic capacities.
In this comment on Yli-Vakkuri and Hawthorne's illuminating book, Narrow Content, I address some issues related to externalist conceptions of linguistic meaning.
We propose that the generalizations of linguistic theory serve to ascribe beliefs to humans. Ordinary speakers would explicitly (and sincerely) deny having these rather esoteric beliefs about language--e.g., the belief that an anaphor must be bound in its governing category. Such ascriptions can also seem problematic in light of certain theoretical considerations having to do with concept possession, revisability, and so on. Nonetheless, we argue that ordinary speakers believe the propositions expressed by certain sentences of linguistic theory, and that linguistics can therefore teach us something about belief as well as language. Rather than insisting that ordinary speakers lack the linguistic beliefs in question, philosophers should try to show how these empirically motivated belief ascriptions can be correct. We argue that Stalnaker's (1984) "pragmatic" account--according to which beliefs are dispositions, and propositions are sets of possible worlds--does just this. Moreover, our construal of explanation in linguistics motivates (and helps provide) responses to two difficulties for the pragmatic account of belief: the phenomenon of opacity, and the so-called problem of deduction.
In a recent paper, Bar-On and Risjord (henceforth, 'B&R') contend that Davidson provides no good argument for his (in)famous claim that "there is no such thing as a language." And according to B&R, if Davidson had established his "no language" thesis, he would thereby have provided a decisive reason for abandoning the project he has long advocated--viz., that of trying to provide theories of meaning for natural languages by providing recursive theories of truth for such languages. For he would have shown that there are no languages to provide truth (or meaning) theories of. Davidson thus seems to be in the odd position of arguing badly for a claim that would undermine his own work.
The event analysis of action sentences seems to be at odds with plausible (Davidsonian) views about how to count actions. If Booth pulled a certain trigger, and thereby shot Lincoln, there is good reason for identifying Booth's action of pulling the trigger with his action of shooting Lincoln; but given truth conditions of certain sentences involving adjuncts, the event analysis requires that the pulling and the shooting be distinct events. So I propose that event sortals like 'shooting' and 'pulling' are true of complex events that have actions (and various effects of actions) as parts. Combining this view with some facts about so-called causative verbs, I then argue that paradigmatic actions are best viewed as tryings, where tryings are taken to be intentionally characterized events that typically cause peripheral bodily motions. The proposal turns on a certain conception of what it is to be the Agent of an event; and I conclude by elaborating this conception in the context of some recent discussions about the relation of thematic roles to grammatical categories.
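To see the tension the abstract describes, consider the standard event notation (the particular predicates and adjunct are illustrative, not drawn from the paper itself):

  'Booth pulled the trigger' : ∃e[Pulling(e) & Agent(e, Booth) & Theme(e, the trigger)]
  'Booth shot Lincoln' : ∃e[Shooting(e) & Agent(e, Booth) & Theme(e, Lincoln)]

If the pulling just is the shooting, then since 'Booth pulled the trigger with his finger' adds the conjunct With(e, his finger), the analysis would predict that 'Booth shot Lincoln with his finger' is likewise true of that very event, which it is not. Hence the pressure to treat the pulling and the shooting as distinct events, contrary to the Davidsonian way of counting actions.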
The philosophical problem of mental causation concerns a clash between commonsense and scientific views about the causation of human behaviour. On the one hand, common sense suggests that our actions are caused by our mental states—our thoughts, intentions, beliefs and so on. On the other hand, neuroscience assumes that all bodily movements are caused by neurochemical events. It is implausible to suppose that our actions are causally overdetermined in the same way that the ringing of a bell may be overdetermined by two hammers striking it at the same time. So how are we to reconcile these two views about the causal origins of human behaviour? One philosophical doctrine effects a nice reconciliation. Neuralism, or the token-identity theory, states that every particular mental event is a neurophysiological event and that every action is a physically specifiable bodily movement. If these identities hold, there is no problem of causal overdetermination: the apparently different causal pathways to the behaviour are actually one and the same pathway viewed from different perspectives. This attractively simple view is currently enjoying a revival in fortune.
Davidsonian analyses of action reports like ‘Alvin chased Theodore around a tree’ are often viewed as supporting the hypothesis that sentences of a human language H have truth conditions that can be specified by a Tarski-style theory of truth for H. But in my view, simple cases of adverbial modification add to the reasons for rejecting this hypothesis, even though Davidson rightly diagnosed many implications involving adverbs as cases of conjunct-reduction in the scope of an existential quantifier. I think the puzzles in this vicinity reflect “framing effects,” which reveal the implausibility of certain assumptions about how linguistic meaning is related to truth and logical form. We need to replace these assumptions with alternatives, instead of positing implausible values of event-variables or implausible relativizations of truth to linguistic descriptions of actual events.
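For readers unfamiliar with the conjunct-reduction diagnosis mentioned above, the inference pattern looks like this (the predicates are illustrative):

  'Alvin chased Theodore around a tree' : ∃e[Chase(e) & Agent(e, Alvin) & Patient(e, Theodore) & Around(e, a tree)]

This formula entails ∃e[Chase(e) & Agent(e, Alvin) & Patient(e, Theodore)], i.e., the analysis of 'Alvin chased Theodore', simply by dropping a conjunct within the scope of the existential quantifier.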
Chomsky’s (1995, 2000a) Minimalist Program (MP) invites a perspective on semantics that is distinctive and attractive. In section one, I discuss a general idea that many theorists should find congenial: the spoken or signed languages that human children naturally acquire and use— henceforth, human languages—are biologically implemented procedures that generate expressions, whose meanings are recursively combinable instructions to build concepts that reflect a minimal interface between the Human Faculty of Language (HFL) and other cognitive systems. In sections two and three, I develop this picture in the spirit of MP, in part by asking how much of the standard Frege-Tarski apparatus is needed in order to provide adequate and illuminating descriptions of the “concept assembly instructions” that human languages can generate. I’ll suggest that we can make do with relatively little, by treating all phrasal meanings as instructions to assemble number-neutral concepts that are monadic and conjunctive. But the goal is not to legislate what counts as minimal in semantics. Rather, by pursuing one line of Minimalist thought, I hope to show how such thinking can be fruitful.
How can a speaker explain that P without explaining the fact that P, or explain the fact that P without explaining that P, even when it is true (and so a fact) that P? Or in formal mode: what is the semantic contribution of 'explain' such that 'She explained that P' can be true, while 'She explained the fact that P' is false (or vice versa), even when 'P' is true? The proposed answer is that 'explain' is a semantically monadic predicate, satisfied by events of explaining. But 'the fact that P' (a determiner phrase) and 'that P' (a complementizer phrase) get associated with different thematic roles, corresponding to the distinction between a thing explained and the content of a speech act.
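As a rough sketch of the proposal (the role labels 'Content' and 'Theme' are illustrative stand-ins for whatever thematic roles turn out to be correct):

  'She explained that P' : ∃e[Explaining(e) & Agent(e, she) & Content(e, that P)]
  'She explained the fact that P' : ∃e[Explaining(e) & Agent(e, she) & Theme(e, the fact that P)]

Since the two formulas impose different conditions on the event, one can be true while the other is false, even when 'P' is true.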
Words indicate concepts, which have various adicities. But words do not, in general, inherit the adicities of the indicated concepts. Lots of evidence suggests that when a concept is lexicalized, it is linked to an analytically related monadic concept that can be conjoined with others. For example, the dyadic concept CHASE(_,_) might be linked to CHASE(_), a concept that applies to certain events. Drawing on a wide range of extant work, and familiar facts, I argue that the (open class) lexical items of a natural spoken language include neither names nor polyadic predicates. The paper ends with some speculations about the value of a language faculty that would impose uniform monadic analyses on all concepts, including the singular and relational concepts that we share with other animals.
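Schematically, and using the abstract's own example: lexicalizing the dyadic CHASE(_,_) introduces the monadic event concept CHASE(_), which then recombines with thematic conjuncts (the sentence and role labels here are an illustrative sketch):

  'Alvin chased Theodore' : ∃e[CHASE(e) & AGENT(e, Alvin) & PATIENT(e, Theodore)]

The relational content is thus recovered, but via conjunction of an event predicate with thematic conjuncts, rather than via a polyadic lexical predicate.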
Nativists inspired by Chomsky are apt to provide arguments with the following general form: languages exhibit interesting generalizations that are not suggested by casual (or even intensive) examination of what people actually say; correspondingly, adults (i.e., just about anyone above the age of four) know much more about language than they could plausibly have learned on the basis of their experience; so absent an alternative account of the relevant generalizations and speakers' (tacit) knowledge of them, one should conclude that there are substantive "universal" principles of human grammar and, as a result of human biology, children can only acquire languages that conform to these principles. According to Pullum and Scholz, linguists need not suppose that children are innately endowed with "specific contingent facts about natural languages." But Pullum and Scholz don't consider the kinds of facts that really impress nativists. Nor do they offer any plausible acquisition scenarios that would culminate in the acquisition of languages that exhibit the kinds of rich and interrelated generalizations that are exhibited by natural languages. As we stress, good poverty-of-stimulus arguments are based on specific principles -- confirmed by drawing on (negative and crosslinguistic) data unavailable to children -- that help explain a range of independently established linguistic phenomena. If subsequent psycholinguistic experiments show that very young children already know such principles, that strengthens the case for nativism; and if further investigation shows that children sometimes "try out" constructions that are unattested in the local language, but only if such constructions are attested in other human languages, then the case for nativism is made stronger still. We illustrate these points by considering an apparently disparate -- but upon closer inspection, interestingly related -- cluster of phenomena involving: negative polarity items, the interpretation of 'or', binding theory, and displays of Romance and Germanic constructions in child-English.
Paul M. Pietroski, University of Maryland

For any sentence of a natural language, we can ask the following questions: what is its meaning; what is its syntactic structure; and how is its meaning related to its syntactic structure? Attending to these questions, as they apply to sentences that provide evidence for Davidsonian event analyses, suggests that we reconsider some traditional views about how the syntax of a natural language sentence is related to its meaning.
This paper presents a slightly modified version of the compositional semantics proposed in Events and Semantic Architecture (OUP 2005). Some readers may find this shorter version, which ignores issues about vagueness and causal constructions, easier to digest. The emphasis is on the treatments of plurality and quantification, and I assume at least some familiarity with more standard approaches.
Here's one way this chapter could go. After defining the terms 'innate' and 'idea', we say whether Chomsky thinks any ideas are innate -- and if so, which ones. Unfortunately, we don't have any theoretically interesting definitions to offer; and, so far as we know, Chomsky has never said that any ideas are innate. Since saying that would make for a very short chapter, we propose to do something else. Our aim is to locate Chomsky, as he locates himself, in a rationalist tradition where talk of innate ideas has often been used to express the following view: the general character of human thought is due largely to human nature.
The meaning of a noun phrase like ‘brown cow’, or ‘cow that ate grass’, is somehow conjunctive. But conjunctive in what sense? Are the meanings of other phrases—e.g., ‘ate quickly’, ‘ate grass’, and ‘at noon’—similarly conjunctive? I suggest a possible answer, in the context of a broader conception of natural language semantics. But my main aim is to highlight some underdiscussed questions and some implications of our ignorance.
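For concreteness, here is the simplest conjunctive proposal for the adjectival case, with the verbal cases rendered in parallel event-style notation (all of it an illustrative sketch, not the paper's settled analysis):

  'brown cow' : λx. Brown(x) & Cow(x)
  'ate quickly' : λe. Ate(e) & Quick(e)
  'ate grass' : λe. Ate(e) & ∃x[Grass(x) & Theme(e, x)]

Whether the second and third cases are conjunctive in the same sense as the first, given the extra thematic structure in the third, is exactly the kind of underdiscussed question at issue.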
Davidson conjectured that suitably formulated Tarski-style theories of truth can “do duty” as theories of meaning for the spoken languages that humans naturally acquire. But this conjecture faces a pair of old objections that are, in my view, fatal when combined. Foster noted that given any theory of the sort Davidson envisioned, for a language L, there will be many equally true theories whose theorems pair endlessly many sentences of L with very different specifications of whether or not those sentences are true. And if L includes the word ‘true’, then for reasons stressed by Tarski, it’s hard to see how any truth theory for L could be correct. Moreover, each of these concerns amplifies the other. Appealing to possible worlds will not help with Foster’s Problem, for reasons that Chomsky discussed in the 1950s, and appealing to trivalent models of truth will not avoid concerns illustrated with Liar Sentences.
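Foster’s point can be illustrated with the textbook example (the example is standard, not drawn from this abstract): a Tarski-style theory for L might prove either of

  (1) 'Snow is white' is true in L iff snow is white.
  (2) 'Snow is white' is true in L iff snow is white and 2 + 2 = 4.

Both theorems are true, since the right-hand sides are materially equivalent; but only (1) looks like a specification of what the sentence means, and nothing in the truth theory itself selects (1) over (2).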
In Conjoining Meanings, I argue that meanings are composable instructions for how to build concepts of a special kind. In this summary of the main line of argument, I stress that proposals about what linguistic meanings are should make room for the phenomenon of lexical polysemy. On my internalist proposal, a single lexical item can be used to access various concepts on different occasions of use. And if lexical items are often “conceptually equivocal” in this way, then some familiar arguments for externalist conceptions of linguistic meaning need to be reevaluated.
The general topic of "Mind and World", the written version of John McDowell's 1991 John Locke Lectures, is how 'concepts mediate the relation between minds and the world'. And one of the main aims is 'to suggest that Kant should still have a central place in our discussion of the way thought bears on reality' (1). In particular, McDowell urges us to adopt a thesis that he finds in Kant, or perhaps in Strawson's Kant: the content of experience is conceptualized; _what_ we experience is always the kind of thing that we could also believe. When an agent has a veridical experience, she 'takes in, for instance sees, _that things are thus and so_' (9). McDowell's argument for this thesis is indirect, but potentially powerful. He discusses a tension concerning the roles of experience and conceptual capacities in thought, and he claims that the only adequate resolution involves granting that experiences have conceptualized content. The tension, elaborated below, can be expressed roughly as follows: judgments must be somehow constrained by features of the external environment, else judgments would be utterly divorced from the world they purport to be about; yet our judgments must be somehow free of external control, else we could give no sense to the idea that we are responsible for our judgments.
Some cases of implicit knowledge involve representations of (implicitly) known propositions, but this is not the only important type of implicit knowledge. Chomskian linguistics suggests another model of how humans can know more than is accessible to consciousness. Innate capacities to focus on a small range of possibilities, thereby ignoring many others, need not be grounded by inner representations of any possibilities ignored. This model may apply to many domains where human cognition “fills a gap” between stimuli and judgment.