What accounts for the offensive character of pejoratives and slurs, words like ‘kike’ and ‘nigger’? Is it due to a semantic feature of the words or to a pragmatic feature of their use? Is it due to a violation of a group’s desire not to be called by certain terms? Is it due to a violation of etiquette? According to one kind of view, pejoratives and the non-pejorative terms with which they are related—the ‘neutral counterpart’ terms—have different meanings or senses, and this explains the offensiveness of the pejoratives. We call theories of this kind semantic theories of pejoratives. Our goal is, broadly speaking, two-fold. First, we will undermine the arguments that are supposed to establish the distinction in meaning between words like ‘African American’ and ‘nigger’. We will show that the arguments are suspect and generalize in untoward ways. Second, we will provide a series of arguments against semantic theories. For simplicity, we focus on a semantic theory that has been proposed by Hom and by Hom and May. By showing the systematic ways in which their view fails, we hope to provide general lessons about why we should avoid semantic theories of pejoratives.
What is the proper way to draw the semantics-pragmatics distinction, and is what is said by a speaker ever enriched by pragmatics? An influential but controversial answer to the latter question is that the inputs to semantic interpretation contain representations of every contribution from context that is relevant to determining what is said, and that pragmatics never enriches the output of semantic interpretation. The proposal is bolstered by a controversial argument from syntactic binding designed to detect hidden syntactic structure. The following contains an exposition and consideration of the argument.
Attempts to characterize unarticulated constituents (henceforth: UCs) by means of quantification over the parts of a sentence and the constituents of the proposition it expresses come to grief in more complicated cases than are commonly considered. In particular, UC definitions are inadequate when we consider cases in which the same constituent appears more than once in a proposition expressed by a sentence with only one word that has the constituent as its semantic value. This article explores some consequences of trying to repair the formal definitions.
Donnellan makes a convincing case for two distinct uses of definite descriptions. But does the difference between the uses reflect an ambiguity in the semantics of descriptions? This paper applies a linguistic test for ambiguity to argue that the difference between the uses is not semantically significant.
A provocative view has it that word meanings are underdetermined and dynamic, frustrating traditional approaches to theorizing about meaning. Peter Ludlow’s Living Words provides some of the philosophical reasons and motivations for accepting one such view, develops some of its details, and explores some of its ramifications. We critically examine some of the arguments in Living Words, paying particular attention to some of Ludlow’s views about the meanings of predicates, preservation of bivalence and the T-schema, and methods of modulating meaning.
This paper considers the connections between semantic shiftiness (plasticity), epistemic safety, and an epistemic theory of vagueness as presented and defended by Williamson (1996a, b, 1997a, b). Williamson explains ignorance of the precise intension of vague words as rooted in insensitivity to semantic shifts: one’s inability to detect small shifts in intension for a vague word results in a lack of knowledge of the word’s intension. Williamson’s explanation, however, falls short of accounting for ignorance of intension.
Some left-nested indicative conditionals are hard to interpret while others seem fine. Some proponents of the view that indicative conditionals have No Truth Values (NTV) use their view to explain why some left-nestings are hard to interpret: the embedded conditional does not express the truth conditions needed by the embedding conditional. Left-nestings that seem fine are then explained away as cases of ad hoc, pragmatic interpretation. We challenge this explanation. The standard reasons for NTV about indicative conditionals (triviality results, Gibbardian standoffs, etc.) extend naturally to NTV about biconditionals. So NTVers about conditionals should also be NTVers about biconditionals. But biconditionals embed much more freely than conditionals. If NTV explains why some left-nested conditionals are hard to interpret, why do biconditionals embed successfully in the very contexts where conditionals do not embed?
No semantic theory is complete without an account of context sensitivity. But there is little agreement over its scope and limits even though everyone invokes intuition about an expression's behavior in context to determine its context sensitivity. Minimalists like Cappelen and Lepore identify a range of tests which isolate clear cases of context sensitive expressions, such as ‘I’, ‘here’, and ‘now’, to the exclusion of all others. Contextualists try to discredit the tests and supplant them with ones friendlier to their positions. In this paper we will explore and evaluate Cappelen and Hawthorne's recent attempts to discredit Cappelen and Lepore's tests and replace them with others. We will argue they have failed to provide sufficient reason to abandon minimalism. If we are right, minimalism about context sensitivity is still viable.
We argue there is a clash between the standard treatments of context sensitivity and presupposition triggering. We use this criticism to motivate a defense of an often-discarded view about how to represent context sensitivity, according to which there are more lexically implicit items in logical form than has been appreciated.
In an insightful and provocative paper, Jessica Rett (2006) claims that attempts to locate the (non-indexical, non-demonstrative) semantic contributions of context in syntax run into problems respecting compositionality. This is an especially biting problem for hidden indexical theorists such as Stanley (2000, 2002) who deploy hidden variables to provide a compositional theory of semantic interpretation. Fortunately for the hidden indexical theorists, her attack fails, albeit in interesting and subtle ways. The following paper is divided into four sections. Section I presents a skeletal version of Rett’s argument. Those already familiar with Rett (2006) can skip ahead without shame. Section II offers a ...
This paper is about the interface between two phenomena—context sensitivity and presupposition. I argue that favored competing treatments of context sensitivity are incompatible with the received view about presupposition triggering. In consequence, I will urge a reconsideration of a much-maligned view about how best to represent context sensitivity.