In the first half of this paper, I argue that group belief ascriptions are highly ambiguous. What's more, in many cases, neither the available contextual factors nor known pragmatic considerations are sufficient to allow the audience to identify which of the many possible meanings is intended. In the second half, I argue that this ambiguity often has bad consequences when a group belief ascription is heard and taken as testimony. And indeed it has these consequences even when the ascription is true on the speaker's intended interpretation, when the speaker does not intend to mislead and indeed intends to cooperatively inform, and when the audience incorporates the evidence from the testimony as they should. I conclude by arguing that these consequences should lead us to stop using such ascriptions.
Polysemous concepts with multiple related meanings pervade natural languages, yet some philosophers argue that we should eliminate them to avoid miscommunication and pointless debates in scientific discourse. This paper defends the legitimacy of polysemous concepts in science against this eliminativist challenge. My approach analyses such concepts as patchworks with multiple scale-dependent, technique-involving, domain-specific and property-targeting uses (patches). I demonstrate the generality of my approach by applying it to "hardness" in materials science, "homology" in evolutionary biology, "gold" in chemistry and "cortical column" in neuroscience. Such patchwork concepts are legitimate if the techniques used to apply them produce reliable results, the domains to which they are applied are homogeneous, and the properties they refer to are significant to describe, classify or explain the behavior of entities in the extension of the concept. By following these normative constraints, researchers can avoid miscommunication and pointless debates without having to eliminate polysemous patchwork concepts in scientific discourse.
The main goal of this paper is to show that there are many phenomena that pertain to the construction of truth-conditional compounds that follow characteristic patterns, and whose explanation requires appealing to knowledge structures organized in specific ways. We review a number of phenomena, ranging from non-homogeneous modification and privative modification to polysemy and co-predication, which indicate that knowledge structures do play a role in obtaining truth-conditions. After that, we show that several extant accounts that invoke rich lexical meanings to explain such phenomena face problems related to inflexibility and lack of predictive power. We review different ways in which one might react to such problems as regards lexical meanings: go richer, go moderately richer, go thinner, and go moderately thinner. On the face of it, it looks like moderate positions are unstable, given the apparent lack of a clear cutoff point between the semantic and the conceptual, and also that a very thin view and a very rich view may turn out to be indistinguishable in the long run. As far as we can see, the most pressing open questions concern this last issue: can there be a principled semantic/world knowledge distinction? Where could it be drawn: at some upper level (e.g. enriched qualia structures) or at some basic level (e.g. constraints)? How do parsimony considerations affect these two different approaches? A thin meanings approach postulates intermediate representations whose role is not clear in the interpretive process, while a rich meanings approach to lexical meaning seems to duplicate representations: the same representations that are stored in the lexicon would form part of conceptual representations. Both types of parsimony problems would be solved by assuming a direct relation between word forms and (parts of) conceptual or world knowledge, leading to a view that has been attributed to Chomsky (e.g. by Katz 1980) in which there is just syntax and encyclopedic knowledge.
This paper focuses on a discussion in Abu Nasr al-Farabi’s Book of Letters (Kitāb al-Ḥurūf), which has to do with the importation of philosophical (including scientific) discourse from one language or nation (ummah) to another. The question of importing philosophical discourse from one language or nation to another touches on Farabi’s views on a number of important philosophical questions. It reveals something about his views on the nature of philosophical and scientific concepts and their relation to concepts in non-philosophical or ‘popular’ discourse, as well as the means of grasping previously unencountered concepts. In this article, I will discuss these issues both to ascertain Farabi’s views and to shed some light on them in their own right. I will argue that Farabi thinks that the understanding of some novel philosophical or scientific concepts sometimes depends on the grasp of related concepts from ordinary discourse, and that experts rely on these everyday concepts in acquiring the more specialized concepts. If the same linguistic terms are used to denote both concepts, they will be ambiguous, but this can be considered a case of ‘productive ambiguity,’ since it aids in the acquisition of novel concepts.
Empirical evidence suggests that perceptual-motor simulations are often constitutively involved in language comprehension. Call this “the simulation view of language comprehension.” This paper applies the simulation view to illuminate the much-discussed phenomenon of copredication, where a noun permits multiple predications which seem to select different senses of the noun simultaneously. On the proposed account, the (in)felicitousness of a copredicational sentence is closely associated with the perceptual simulations that the language user deploys in comprehending the sentence.
Is anything good simpliciter? And can things count as ‘good’ independent of the context in which ‘good’ is used? Traditionally, a number of meta-ethicists have given positive answers. But more recently, some philosophers have used observations based on natural language to argue that things can only count as ‘good’ relative to ends and contextual thresholds. I will use work from contemporary linguistics to argue that ‘good’ is ambiguous, and that it has a moral disambiguation that attributes a fixed degree of goodness. This implies that things can count as ‘good’ simpliciter, independent of context. Not only does this result provide support for the traditional view, but it also vindicates some aspects of the more recent view.
Many word forms in natural language are polysemous, but only some of them allow for co-predication, that is, they allow for simultaneous predications selecting for two different meanings or senses of a nominal in a sentence. In this paper, we try to explain (i) why some groups of senses allow co-predication and others do not, and (ii) how we interpret co-predicative sentences. The paper focuses on those groups of senses that allow co-predication in an especially robust and stable way. We argue, using these cases, but focusing particularly on the multiply polysemous word ‘school’, that the senses involved in co-predication form especially robust activation packages, which allow hearers and readers to access all the different senses in interpretation.
The Cartesian thinking self may seem indisputably real. But if it is real, then so is thinking, which would undercut mental fictionalism. Thus, in defense of mental fictionalism, this paper argues for fictionalism about the thinking self. In short form, the argument is: (1) If I exist outside of fiction, then I am identical to (some part of/) this biomass [= my body]. (2) If I die at t, I cease to exist at t. (3) If I die at t, no part of this biomass ceases to exist at t. (4) Therefore, no part of this biomass is identical to me. [From (2), (3)] (5) Therefore, I do not exist outside of fiction. [From (4), (1)] One reply to the argument is that the self is an aggregate of electricity in the brain which disperses upon death. The rejoinder is that this, at best, describes the thoughts realized in the brain, and not the subject who thinks the thoughts. A second objection stresses the undeniable sense that the thinking self has a location. In reply, the extended thought-experiment from Dennett’s “Where Am I?” is used to show that the sense of self-location may well be illusory.
This work addresses the critical discussion featured in the contemporary literature about two well-known paradoxes belonging to different philosophical traditions, namely Frege’s puzzling claim that “the concept horse is not a concept” and Gongsun Long’s “white horse is not horse”. We first present the source of Frege’s paradox and its different interpretations, which span from plain rejection to critical analysis, to conclude with a more general view of the role of philosophy as a fight against the misunderstandings that come from the different uses of language (a point later developed by the “second” Wittgenstein). We then provide an overview of the ongoing discussions related to the Bai Ma Lun paradox, and we show that its major interpretations include—as in the case of Frege’s paradox—dismissive accounts that regard it as either useless or wrong, as well as attempts to interpret and repair the argument. Resting on our reading of Frege’s paradox as an example of the inescapability of language misunderstandings, we advance a similar line of interpretation for the paradox in the Bai Ma Lun: both paradoxes, we suggest, can be regarded as different manifestations of similar concerns about language, and specifically about the difficulty of referring to concepts via language.
Although many studies have been conducted on polysemy, they mostly compare the different methods and techniques used to learn a language and establish the extent to which particular sense relations facilitate the learning of second language vocabulary. To the best of our knowledge, no research has been conducted to determine whether or not polysemy is emphasized in non-native English textbooks. The objective of the present research was to determine the degree to which polysemy is incorporated in English textbooks. Thus, the research question guiding the current study is: To what extent is polysemy incorporated in non-native English textbooks? The study is corpus-based research that used a data set of 500 words, i.e., 250 words from each of the two books, utilizing the Sketch Engine word list tool and concordance. The polysemy of the resulting words in the concordance lines generated was semantically annotated manually using WordNet and English dictionaries. The results indicated that polysemy is barely stressed in the textbooks under investigation. The study’s results have substantial implications for polysemy in particular and second or foreign language teaching in general.
Some generic generalizations have both a descriptive and a normative reading. The generic sentence “Philosophers care about the truth”, for instance, can be read as describing what philosophers in fact care about, but can also be read as prescribing philosophers to care about the truth. On Leslie’s account, this generic sentence has two readings due to the polysemy of the kind term “philosopher”. In this paper, I first argue against this polysemy account of descriptive/normative generics. In response, a contextualist semantic theory for generic sentences is introduced. Based on this theory, I argue that descriptive/normative generics are contextually underspecified.
Cross-domain descriptions are descriptions of features pertaining to one domain in terms of vocabulary primarily associated with another domain. Notably, we routinely describe psychological features in terms of the sensory domain and vice versa. Sorrow is said to be ‘bitter’ and fear ‘cold’. Music can be described as ‘happy’, ‘sad’, ‘mournful’, and so on. Such descriptions are rife in both everyday discourse and literary writings. What is it about psychological features that invites descriptions in sensory terms and what is it about the sensory that invites descriptions in terms of the psychological? Drawing on the literature on polysemy, this paper sheds light on cross-domain descriptions pertaining to the sensory and the psychological domains.
Philosophers disagree about what the folk concept of pain is. This paper criticises existing theories of the folk concept of pain, i.e. the mental view, the bodily view, and the recently proposed polyeidic view. It puts forward an alternative proposal – the polysemy view – according to which pain terms like “sore,” “ache” and “hurt” are polysemous, where one sense refers to a mental state and another to a bodily state, and the type of polysemy at issue reflects two distinct but related concepts of pain. Implications with respect to issues in philosophy of pain are also drawn.
Causal pluralists hold that there is not just one determinate kind of causation. Some causal pluralists hold that ‘cause’ is ambiguous among these different kinds. For example, Hall (2004) argues that ‘cause’ is ambiguous between two causal relations, which he labels dependence and production. The view that ‘cause’ is ambiguous, however, wrongly predicts zeugmatic conjunction reduction, and wrongly predicts the behaviour of ellipsis in causal discourse. So ‘cause’ is not ambiguous. If we are to disentangle causal pluralism from the ambiguity claim, we need to consider what other linguistic approaches are available to the causal pluralist. I consider and reject proposals that ‘cause’ is a general term, that the term is an indexical, and that the term conveys different kinds of causation through implicature or presupposition. Finally, I argue that causal pluralism is better handled by treating ‘cause’ as a univocal term within a dynamic interpretation framework.
This book is about the idea that some true statements would have been true no matter how the world had turned out, while others could have been false. It develops and defends a version of the idea that we tell the difference between these two types of truths in part by reflecting on the meanings of words. It has often been thought that modal issues—issues about possibility and necessity—are related to issues about meaning. In this book, the author defends the view that the analysis of meaning is not just a preliminary to answering modal questions in philosophy; it is not merely that before we can find out whether something is possible, we need to get clear on what we are talking about. Rather, clarity about meaning often brings with it answers to modal questions. In service of this view, the author analyzes the notion of necessity and develops ideas about linguistic meaning, applying them to several puzzles and problems in philosophy of language. Meaning and Metaphysical Necessity will be of interest to scholars and advanced students working in metaphysics, philosophy of language, and philosophical logic. [See my homepage for a sample chapter.]
The aim of this paper is to examine the theoretical architecture of semantic atomism and its consequences with respect to natural language. In particular, it looks to explore the notion of possible concepts using the fundamental distinction between simple and complex concepts and expressions in Jerry Fodor’s atomism. The distinction is exploited to produce an unusual type of concept referred to as a correlate, which effectively mirrors complex concepts while maintaining a distinct underlying structure. Though harmless in and of themselves, their presence in the context of polymorphemic expressions suggests that atomism harbors a tacit and unintuitive form of polysemy that is problematic in its own right and that leads to other complications, some of which may be demonstrated on the example of communication. These issues are tied to the way atomism is structured, and although they seem to have gone largely unnoticed, they appear to bear negatively on the adequacy of atomism where natural language is concerned.
Recent research in psycholinguistics suggests that language processing frequently involves mental imagery. This paper focuses on visual imagery and discusses two issues regarding the processing of polysemous words (i.e. words with multiple related meanings or senses) – co-predication and sense-relatedness. It aims to show how mental imagery can illuminate these two issues.
Copredication, as exhibited by sentences such as ‘That book is heavy but informative,’ is commonly seen as a phenomenon that is tied to sentences featuring polysemous expressions. David Liebesman and Ofra Magidor have recently attacked this view by arguing that ‘book’ has a single context-sensitive sense. The first aim of the present paper is to show that Liebesman and Magidor are wrong to claim that ‘book’ is univocal, but that they may nonetheless be right to question whether copredication requires polysemy. Its second aim is to consider implications of this result for the debates on copredication and on semantic variability.
In this paper, I present data involving the use of the Romanian slur ‘țigan’, consideration of which leads to the postulation of a sui generis, irreducible type of use of slurs. This type of use is potentially problematic for extant theories of slurs. In addition, together with other well-established uses, it shows that there is more variation in the use of slurs than previously acknowledged. I explain this variation by construing slurs as polysemous. To implement this idea, I appeal to a rich-lexicon account of polysemy. I show how such a theory can be applied to slurs and discuss several important issues that arise.
The paradox of pain refers to the idea that the folk concept of pain is paradoxical, treating pains as simultaneously mental states and bodily states. By taking a close look at our pain terms, this paper argues that there is no paradox of pain. The air of paradox dissolves once we recognize that pain terms are polysemous and that there are two separate but related concepts of pain rather than one.
Viebahn (2018) has recently argued that several tests for ambiguity, such as the conjunction-reduction test, are not reliable as tests for polysemy, but only as tests for homonymy. I look at the more fine-grained distinction between regular and irregular polysemy and I argue for a more nuanced conclusion: the tests under discussion provide systematic evidence for homonymy and irregular polysemy but need to be used with more care to test for regular polysemy. I put this conclusion to work in the context of the debate over the alleged referential-attributive ambiguity of the definite article. In reply to various criticisms, defenders of the ambiguity view argue that this is a case of polysemy. But opponents object that the dual use of the definite article fails tests for ambiguity. The debate seems to have come to a stalemate, unless the relevance of the tests is determined for cases of alleged polysemy. I conclude that the balance of considerations inclines towards rejecting the ambiguity thesis.
Augustine famously claims every word is a name. Some readers take Augustine to thereby maintain a purely referentialist semantic account according to which every word is a referential expression whose meaning is its extension. Other readers think that Augustine is no referentialist and is merely claiming that every word has some meaning. In this paper, I clarify Augustine’s arguments to the effect that every word is a name and argue that ‘every word is a name’ amounts to the claim that for any word, there exist tokens of that word which are autonymous nouns. Augustine takes this to be the result of universal lexical ambiguity or equivocity and I clarify how Augustine’s account of metalinguistic discourse, which is one of the most detailed to have survived from antiquity, differs from some ancient and modern theories.
Goodman and Lederman (2020) argue that the traditional Fregean strategy for preserving the validity of Leibniz’s Law of substitution fails when confronted with apparent counterexamples involving proper names embedded under propositional attitude verbs. We argue, on the contrary, that the Fregean strategy succeeds and that Goodman and Lederman’s argument misfires.
We all make mistakes in pronunciation and spelling, but a common view is that there are limits beyond which a mistaken pronunciation or spelling becomes too dramatic to be recognized as a token of a particular word at all. These considerations have bolstered a family of accounts that invoke speaker intentions and standards for tolerance as determinants of which word, if any, an utterance tokens. I argue this is a mistake. Neither intentions nor standards of tolerance are necessary or sufficient (individually or jointly) for determining which word an utterance tokens. Instead, drawing in part on empirical research on word production, I offer an alternative account, Originalism-plus-Transfer (OPT), according to which word tokening depends entirely on lexical selection during word production, and on how the selected lexical item is situated within the network of causal/historical connections leading back to its neologizing. Once the elements of my account are in place, as a bonus, we will have resources for a promising answer to the question of word individuation as well.
Truth pluralism offers the latest extension in the tradition of substantive theorizing about truth. While various forms of this thesis are available, most frameworks commit to domain reliance. According to domain reliance, various ways of being true, such as coherence and correspondence, are tied to discourse domains rather than individual sentences. From this it follows that the truth of different types of sentences is accounted for by their domain membership. For example, sentences addressing ethical matters are true if they cohere and those addressing extensional states of affairs if they correspond. By tying distinct truth-grounding properties to domains rather than individual sentences, truth pluralists avoid certain issues with definitional ambiguity and indeterminacy. I argue that, contrary to this ideal situation, domains fail to provide the sought-after benefits of achieving definitional unambiguity and determinacy in the standard domain-reliant pluralist frameworks. The reason is that, when combined with the inherently ambiguous nature of certain truth-relevant terms of sentences, fringe cases emerge, causing some of them to count as members of multiple domains. Consequently, some sentences end up being both true and false in the standard domain-reliant pluralist frameworks, thus conflicting with both standard laws of non-contradiction and identity. Finally, I argue that truth pluralists should pay closer attention to the hitherto neglected question of inherent natural language ambiguity.
In this paper I try to show that semantics can explain word-to-world relations and that sentences can have meanings that determine truth-conditions. Critics like Chomsky typically maintain that only speakers denote, i.e., only speakers, by using words in one way or another, represent entities or events in the world. However, according to their view, individual acts of denotation are not explained just by virtue of speakers’ semantic knowledge. Against this view, I will hold that, in the typical cases considered, semantic knowledge can account for the denotational uses of words of individual speakers.
Statutory interpretation involves the reconstruction of the meaning of a legal statement when it cannot be considered as accepted or granted. This phenomenon needs to be considered not only from the legal and linguistic perspective, but also from the argumentative one - which focuses on the strategies for defending a controversial or doubtful viewpoint. This book draws upon linguistics, legal theory, computing, and dialectics to present an argumentation-based approach to statutory interpretation. By translating and summarizing the existing legal interpretative canons into eleven patterns of natural arguments - called argumentation schemes - the authors offer a system of argumentation strategies for developing, defending, assessing, and attacking an interpretation. Illustrated through major cases from both common and civil law, this methodology is summarized in diagrams and maps for application to computer sciences. These visuals help make the structures, strategies, and vulnerabilities of legal reasoning accessible to both legal professionals and laypeople.
This paper empirically raises and examines the question of ‘conceptual control’: To what extent are competent thinkers able to reason properly with new senses of words? This question is crucial for conceptual engineering. This prominently discussed philosophical project seeks to improve our representational devices to help us reason better. It frequently involves giving new senses to familiar words, through normative explanations. Such efforts enhance, rather than reduce, our ability to reason properly, only if competent language users are able to abide by the relevant explanations, in language comprehension and verbal reasoning. This paper examines to what extent we have such ‘conceptual control’ in reasoning with new senses. The paper draws on psycholinguistic findings about polysemy processing to render this question empirically tractable and builds on recent findings from experimental philosophy to address it. The paper identifies a philosophically important gap in thinkers’ control over the key process of stereotypical enrichment and discusses how conceptual engineers can use empirical methods to work around this gap in conceptual control. The paper thus empirically demonstrates the urgency of the question of conceptual control and explains how experimental philosophy can empirically address the question, to render conceptual engineering feasible as an ameliorative enterprise.
Most theories of concepts take concepts to be structured bodies of information used in categorization and inference. This paper argues for a version of atomism, on which concepts are unstructured symbols. However, traditional Fodorian atomism is falsified by polysemy and fails to provide an account of how concepts figure in cognition. This paper argues that concepts are generative pointers, that is, unstructured symbols that point to memory locations where cognitively useful bodies of information are stored and can be deployed to resolve polysemy. The notion of generative pointers allows for unresolved ambiguity in thought and provides a basis for conceptual engineering.
In this paper, we present a formalism for handling polysemy in spatial expressions based on supervaluation semantics called standpoint semantics for polysemy (SSP). The goal of this formalism is, given a prepositional phrase, to define its possible spatial interpretations. For this, we propose to characterize spatial prepositions by means of a triplet ⟨image schema, semantic feature, spatial axis⟩. The core of SSP is predicate grounding theories, which are formulas of a first-order language that define a spatial preposition through the semantic features of its trajector and landmark. Precisifications are also established, which are a set of formulae of a qualitative spatial reasoning formalism that aims to provide the spatial characterization of the trajector with respect to the landmark. In addition to the theoretical model, we also present results of a computational implementation of SSP for the preposition ‘in’.
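The supervaluationist idea behind this abstract can be illustrated with a toy sketch: a polysemous preposition like ‘in’ is assigned several precisifications (sharpened readings), and a use counts as supertrue only if every precisification verifies it. This is a minimal illustration of supervaluation over precisifications in general, not the authors' SSP implementation; the scene attributes and the three precisifications below are invented for the example.

```python
# Toy supervaluation over precisifications of the spatial preposition 'in'.
# NOT the SSP formalism from the paper; attributes and readings are invented.

from dataclasses import dataclass

@dataclass
class Scene:
    # Fraction of the trajector's volume inside the landmark's convex hull,
    # and whether the landmark functionally contains the trajector.
    overlap: float
    functionally_contained: bool

# Each precisification is one admissible sharpening of 'in'.
PRECISIFICATIONS = {
    "full_enclosure": lambda s: s.overlap >= 1.0,
    "partial_overlap": lambda s: s.overlap > 0.0,
    "functional": lambda s: s.functionally_contained,
}

def supertrue(scene: Scene) -> bool:
    """'in' is supertrue iff it holds under every precisification."""
    return all(p(scene) for p in PRECISIFICATIONS.values())

def superfalse(scene: Scene) -> bool:
    """'in' is superfalse iff it holds under no precisification."""
    return not any(p(scene) for p in PRECISIFICATIONS.values())

# 'The marble is in the box': fully enclosed, so every sharpening agrees.
marble_in_box = Scene(overlap=1.0, functionally_contained=True)
# 'The flower is in the vase': only partially enclosed, so sharpenings
# disagree and the sentence falls into a truth-value gap.
flower_in_vase = Scene(overlap=0.3, functionally_contained=True)

print(supertrue(marble_in_box))    # True
print(supertrue(flower_in_vase))   # False
print(superfalse(flower_in_vase))  # False: a gap, not superfalsity
```

The gap case is the point of the machinery: ‘the flower is in the vase’ is neither supertrue nor superfalse, which is how a supervaluationist treatment leaves room for the distinct senses of a polysemous preposition without forcing a single reading.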
Intentionalism is the view that demonstratives, gradable adjectives, quantifiers, modals and other context‐sensitive expressions are intention‐sensitive: their semantic value on a given use is fixed by speaker intentions. The first aim of this paper is to defend Intentionalism against three recent objections, according to which speakers at least sometimes do not have suitable intentions when using supposedly intention‐sensitive expressions. Its second aim is to thereby shed light on the so far little‐explored question of which kinds of intentions can be semantically relevant.
I show that the act-type theories of Soames and Hanks entail that every sentence with alternative analyses (including every atomic sentence with a polyadic predicate) is ambiguous, many of them massively so. I assume that act types directed toward distinct objects are themselves distinct, plus some standard semantic axioms, and infer that act-type theorists are committed to saying that ‘Mary loves John’ expresses both the act type of predicating [loving John] of Mary and that of predicating [being loved by Mary] of John. Since the two properties are distinct, so are the act types. Hence, the sentence expresses two propositions. I also discuss a non-standard “pluralist” act-type theory, as well as some retreat positions, which all come with considerable problems. Finally, I extrapolate to a general constraint on theories of structured propositions, and find that Jeffrey King’s theory has the same unacceptable consequence as the act-type theory.
In this paper I present a version of meaning holism proposed by Henry Jackman (1999a, 1999b, 2005 and 2015), which he calls "moderate holism". I will argue that this moderate version of holism, in addition to answering much of the criticism leveled at traditional semantic holism (concerning translation, disagreement, change of mind and communication), is also extremely useful for explaining several semantic phenomena, such as vagueness and polysemy.
The purpose of this chapter is twofold. On the one hand, our goal is theoretical, as we aim at providing an instrument for detecting, analyzing, and solving ambiguities based on the reasoning mechanism underlying interpretation. To this purpose, combining the insights from pragmatics and argumentation theory, we represent the background assumptions driving an interpretation as presumptions. Presumptions are then investigated as the backbone of the argumentative reasoning that is used to assess and solve ambiguities and drive (theoretically) interpretive mechanisms. On the other hand, our goal is practical. By analyzing ambiguities as stemming from different presumptions concerning language or, more importantly, expected communicative roles and goals, we can use communicative misunderstandings as the signal of deeper disagreements concerning mutual expectations or cultural differences. This argumentation-based interpretive mechanism will be applied to the analysis of medical interviews in the area of diabetes care, and will be used to bring to light the sources of misunderstanding and the different presumptions that define distinct cultures. We will consequently illustrate the analytical tools by identifying and distinguishing the various types of ambiguity underlying misunderstandings, and we will address them by describing the communicative intentions ascribed to the ambiguous utterances.
We offer an analysis of future morphemes as epistemic operators. The main empirical motivation comes from the fact that future morphemes have systematic purely epistemic readings—not only in Greek and Italian, but also in Dutch, German, and English ‘will’. The existence of epistemic readings suggests that the future expressions quantify over epistemic, not metaphysical alternatives. We provide a unified analysis for epistemic and predictive readings as epistemic necessity, and the shift between the two is determined compositionally by the lower tense. Our account thus acknowledges a systematic interaction between modality and tense—but the future itself is a pure modal, not a mixed temporal/modal operator. We show that the modal base of the future is nonveridical, i.e. it includes p and ¬p worlds, parallel to epistemic modals such as ‘must’, and present arguments that future morphemes are a category that stands in between epistemic modals and predicates of personal taste. We identify, finally, a subclass of epistemic futures which are ratificational, and argue that ‘will’ is a member of this class.
There is an ongoing debate about the meaning of lexical words, i.e., words that contribute content to the meaning of sentences. This debate has coincided with a renewal in the study of polysemy, which has taken place mainly in the psycholinguistics camp. There is already a fruitful interbreeding between two lines of research: the theoretical study of lexical word meaning, on the one hand, and the models of polysemy that psycholinguists present, on the other. In this paper I aim to deepen this ongoing interbreeding: I examine what is said about polysemy, particularly in the psycholinguistics literature, and then show how what we seem to know about the representation and storage of polysemous senses affects the models we have of lexical word meaning.
In arguing against a supposed ambiguity, philosophers often rely on the zeugma test. In an application of the zeugma test, a supposedly ambiguous expression is placed in a sentence in which several of its supposed meanings are forced together. If the resulting sentence sounds zeugmatic, that is taken as evidence for ambiguity; if it does not sound zeugmatic, that is taken as evidence against ambiguity. The aim of this article is to show that arguments based on the second direction of the test are misguided: ambiguous expressions, and in particular philosophically contested ones, do not reliably lead to zeugmaticity, so an absence of zeugmaticity provides no meaningful evidence for an absence of ambiguity.
What features will something have if it counts as an explanation? And will something count as an explanation if it has those features? In the second half of the 20th century, philosophers of science set for themselves the task of answering such questions, just as a priori conceptual analysis was generally falling out of favor. And as it did, most philosophers of science just moved on to more manageable questions about the varieties of explanation and discipline-specific scientific explanation. Often, such shifts are sound strategies for problem-solving. But leaving fallow certain basic conceptual issues can also result in foundational debates.
This book shows how research in linguistic pragmatics, philosophy of language, and rhetoric can be connected through argumentation to analyze a recognizably common strategy used in political and everyday conversation, namely the distortion of another’s words in an argumentative exchange. Straw man argumentation refers to the modification of a position by misquoting, misreporting or wrenching the original speaker’s statements from their context in order to attack them more easily or more effectively. Through 63 examples taken from different contexts (including political and forensic discourses and dialogs) and 20 legal cases, the book analyzes the explicit and implicit types of straw man, shows how to assess the correctness of a quote or a report, and illustrates the arguments that can be used for supporting an interpretation and defending against a distortion. The tools of argumentation theory, a discipline aimed at investigating the uses of arguments by combining insights from pragmatics, logic, and communication, are applied to provide an original account of interpretation and reporting, and to describe and illustrate tactics and procedures that can be used and implemented for practical purposes. This book will appeal to scholars in the fields of political communication, communication in general, argumentation theory, rhetoric and pragmatics, as well as to people working in public speech, speech writing, and discourse analysis.
This paper argues that the semantic facts about ‘because’ are best explained via a metaphorical treatment of metaphysical explanation that treats causal explanation as explanation par excellence. Along the way, it defends a commitment to a unified causal sense of ‘because’ and offers a proprietary explanation of grounding skepticism. With the causal metaphor account of metaphysical explanation on the table, an extended discussion of the relationship between conceptual structure and metaphysics ends with a suggestion that the semantic facts about ‘because’ tell against grounding-causation unity.
Gricean intentionalists hold that what a speaker says and means by a linguistic utterance is determined by the speaker's communicative intention. On this view, one cannot really say anything without meaning it as well. Conventionalists argue, however, that malapropisms provide powerful counterexamples to this claim. I present two arguments against the conventionalist and sketch a new Gricean theory of speech errors, called the misarticulation theory. On this view, malapropisms are understood as a special case of mispronunciation. I argue that the Gricean theory is supported by empirical work in phonetics and phonology and, also, that conventionalism inevitably fails to do this work justice. I conclude, from this, that the conventionalist fails to show that malapropisms constitute a counterexample to a Gricean theory.
For many years, it has been common ground in semantics and in philosophy of language that semantics is in the business of providing a full explanation of how propositional meanings are obtained. This orthodox picture seems to be in trouble these days, as an increasing number of authors now hold that semantics does not deal with thought-contents. Some of these authors have embraced a “thin meanings” view, according to which lexical meanings are too schematic to enter propositional contents. I will suggest that it is plausible to adopt thin semantics for a class of words. However, I’ll also hold that some classes of words, like kind terms, plausibly have richer lexical meanings, and so that an adequate theory of word meaning may have to combine thin and rich semantics.
The standard Kratzerian analysis of modal auxiliaries, such as ‘may’ and ‘can’, takes them to be univocal and context-sensitive. Our first aim is to argue for an alternative view, on which such expressions are polysemous. Our second aim is to thereby shed light on the distinction between semantic context-sensitivity and polysemy. To achieve these aims, we examine the mechanisms of polysemy and context-sensitivity and provide criteria with which they can be held apart. We apply the criteria to modal auxiliaries and show that the default hypothesis should be that they are polysemous, and not merely context-sensitive. We then respond to arguments against modal ambiguity. Finally, we show why modal polysemy has significant philosophical implications.
The renewed interest in concepts and their role in psychological theorizing is partially motivated by Machery’s claim that concepts are so heterogeneous that they have no explanatory role. Against this, pluralism argues that there is a multiplicity of different concepts for any given category, while hybridism argues that a concept is constituted by a rich common representation. This article aims to advance the understanding of the hybrid view of concepts. First, we examine the main arguments against hybrid concepts and conclude that, even if not successful, they challenge hybridism to find a robust criterion for concept individuation and to show an explanatory advantage for hybrid concepts. Then we propose such a criterion of individuation, which we will call ‘functional stable coactivation’. Finally, we examine the prospects of hybridism to understand what is involved in recent approaches to categorization and meaning extraction. Contents: 1 The Heterogeneity of Conceptual Representations; 2 Two Challenges for Hybrid Concepts: Individuation and Explanation; 2.1 The coordination criterion; 2.2 Concepts as constituents of thoughts; 3 Individuating Hybrids: Functional Stable Coactivation; 4 The Explanatory Power of Hybrid Concepts; 4.1 Categorization; 4.2 Meaning extraction; 4.2.1 Linguistic comprehension and rich lexical entries; 4.2.2 Polysemy and hybrid concepts; 5 Conclusion.
Logic and humour tend to be mutually exclusive topics. Humour plays off ambiguity, while classical logic falters over it. Formalizing puns in classical logic is therefore impossible, since the components of a pun carry multiple meanings at once. However, I will use Independence-Friendly logic to formally encode the multiple meanings within a pun. This will show a general strategy for logically representing ambiguity and reveal humour as an untapped source of novel logical structure.
In this paper, I present an interesting observation about identity in fiction, which I call the phenomenon of identity without interchangeability: the phenomenon that two names with the same referent cannot be used interchangeably in certain contexts. I argue that the phenomenon of identity without interchangeability holds in the dream context, the fictional context in a narrow sense, and the fictional context in an extended sense. I then show one application of the phenomenon in defending Kendall Walton’s account of fiction against Fred Kroon’s objections to it.
In a recent article, Carlos Santana shows that in common-interest signaling games, when signals are costly and receivers can observe contextual environmental cues, ambiguous signaling strategies outperform precise ones and can, as a result, evolve. I show that if one assumes a realistic structure on the state space of a common-interest signaling game, ambiguous strategies can be explained without appeal to contextual cues. I conclude by arguing that there are multiple kinds of payoff-beneficial ambiguity, some of which are better explained by Santana’s models and some of which are better explained by the models presented here.